3 Tips for Improving Your Owncast Viewers' Experience
Introduction
Self-hosting applications is one of my favorite ways to learn about new technologies. With open-source software available for just about anything, setting it up and using it always teaches me something new.
One of my favorites is Owncast, a self-hosted live stream and chat server which lets you set up a live stream similar to the offerings seen on large platforms like YouTube and Twitch. I personally use it to live stream somewhat regularly on my stream.logal.dev subdomain. What I love most about Owncast is its simplicity: it is easy to set up, easy to use, and even easier for viewers to interact with.
One downside (or perk, depending on your perspective) of self-hosting applications like this is that many of the details abstracted away by larger platforms become challenges you need to manage yourself. Owncast may be easy to set up, but I have learned it has surprising depth, and even small misconfigurations can negatively impact the viewer experience. For me, the ideal viewer experience comes down to two key factors: the visual quality of the live stream itself and avoiding any buffering during playback. From video encoding intricacies to low-level networking settings in the Linux kernel, every detail plays a role.
After spending a lot of time fine-tuning my stream, I feel it's in a very good state from a configuration standpoint. This blog post shares three key changes I made to my stream to improve the experience for my viewers, and hopefully they will be helpful for yours too.
Tip #1: Use As Much Compute As Possible
It is easy to assume video quality is all about bitrate; the higher, the better. While bitrate does play a major role, relying on it alone has limitations. If raising the bitrate were the only way to improve visual quality, we would have to keep increasing it until the video looks acceptable. But in the context of live streaming, this approach quickly becomes impractical. Higher bitrates demand more network bandwidth, making streams less accessible for viewers with slower or unstable connections. For these viewers, higher quality could mean more buffering or not being able to watch at all.
Thankfully, bitrate is not the only factor which affects visual quality. Let's examine how a different setting influences it. Below are two videos of the same clip of me jumping off a cliff in Space Engineers, labeled Example A and Example B. Both were encoded using the open-source libx264 encoder at a constant bitrate of 6,000 kbit/s, the maximum supported by Owncast without modifications.
Notice the difference? Example A is of noticeably lower visual quality, with blocky artifacts everywhere and much of the detail in the rocks and grass lost. Example B, on the other hand, looks much better. While not entirely free of artifacts, it is visibly cleaner and more pleasant to watch overall.
So, what caused the difference? It wasn't the bitrate; both videos were encoded at the same 6,000 kbit/s. The key change was the encoder preset. Example A used libx264's ultrafast preset while Example B used the fast preset. So why does the encoder preset have such a big impact on quality? Let's break it down by thinking about how video encoding works.
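If you want to run a comparison like this yourself, the sketch below shows one way to do it by invoking ffmpeg from Python. It assumes ffmpeg with libx264 is installed and on your PATH; the input file name is a placeholder, and this is not the exact command I used for the examples above.

# Sketch: encode the same clip twice at the same constant bitrate, changing
# only the libx264 preset, so the amount of compute is the only variable.
# Assumes ffmpeg (with libx264) is installed; "clip.mp4" is a placeholder.
import subprocess

SOURCE = "clip.mp4"
BITRATE = "6000k"   # matches Owncast's default maximum

for preset in ("ultrafast", "fast"):
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", SOURCE,
            "-c:v", "libx264",
            "-preset", preset,       # the only setting that changes
            "-b:v", BITRATE,
            "-maxrate", BITRATE,
            "-bufsize", "12000k",    # 2x bitrate buffer to keep the rate roughly constant
            "-c:a", "copy",
            f"example_{preset}.mp4",
        ],
        check=True,
    )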
Sketching Like an Artist Under Pressure
Imagine a video encoder as a highly skilled artist tasked with recreating a detailed masterpiece for sharing. You ask the artist to copy the piece, but with a strict time limit to ensure it's ready quickly. If given 10 seconds to work, they will grab a pencil and quickly sketch something vaguely resembling the original, but much of the detail will be lost. However, if given 10 minutes, they will have time to use a wider range of tools and techniques to refine the sketch. The final result won't be an exact match, but with enough time and effort it will closely resemble the original.
Video is simply a series of individual images, or frames, displayed in rapid succession to create the illusion of motion. In offline video encoding, the artist isn't under a strict time constraint. They have a big pile of frames which make up a video to recreate and can take as much time as needed to carefully refine each one before sharing the final product with viewers. However, in a live stream, the artist must work quickly because viewers are watching the output in real time. For a 60 fps live stream, the artist has only about 16 milliseconds per frame to complete their work before falling behind. If they take too long, viewers will eventually start experiencing problems like buffering or stream crashes.
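The per-frame time budget follows directly from the frame rate; a quick back-of-the-envelope check:

# Rough per-frame time budget for real-time encoding at common frame rates.
for fps in (30, 60):
    print(f"{fps} fps -> about {1000 / fps:.1f} ms per frame")
# 30 fps -> about 33.3 ms per frame
# 60 fps -> about 16.7 ms per frame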
Bringing It All Together
Bringing this back to standard video encoding terminology, a slower encoder preset allows the encoder to spend more time on each frame applying more sophisticated compression techniques, which better preserves the detail and accuracy of the original. In offline video encoding, where there's no strict time constraint, the slowest encoder preset is typically preferred to achieve the highest quality.
With that in mind, it might seem like the obvious choice is to set the encoder preset as slow as possible for maximum quality. However, live streaming relies on real-time encoding, where each frame must be processed fast enough to keep up with the live input and reach viewers on time. Choosing the right encoder preset is a balance: too slow, and the encoder falls behind, causing buffering or dropped frames; too fast, and the video unnecessarily loses visual quality due to less efficient compression.
On a modern high-power server with up-to-date hardware, using a slow encoder preset is usually not an issue. However, Owncast is able to run on a wide range of devices, including older machines, low-power single-board computers, and virtual machines on shared servers. These setups often have limited processing power, so a faster encoder preset is required to ensure the encoder can keep up.
Implementing this Tip
The key is to experiment and find the right balance where you are using the slowest encoder preset possible without overloading your server. In Owncast, the encoder preset is controlled for each output variant using the "CPU or GPU Utilization" slider shown below; higher utilization levels correspond to slower presets. Conduct several test streams, gradually increasing the utilization level while monitoring CPU usage. Ideally, you want to use most of your CPU without hitting 100%, leaving some headroom for fluctuations in encoding demand and other system processes. Keep in mind that different types of content can impact CPU usage; mostly static video requires less processing than high-motion gameplay footage. If your CPU struggles to handle the encoder preset level you want, it may be time to consider an upgrade.
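If you want something more systematic than watching top during a test stream, a rough sketch like the one below can log CPU usage while you experiment with the slider. It uses the third-party psutil package, which is an assumption on my part and not part of Owncast; any resource monitor will do the same job.

# Sketch: sample overall CPU usage once per second during a test stream and
# flag samples that leave little headroom. Assumes the third-party psutil
# package (pip install psutil); run this on the Owncast server while streaming.
import psutil

DURATION_S = 300        # how long to sample, e.g. a 5-minute test stream
HEADROOM_WARN = 85.0    # warn when total CPU usage exceeds this percentage

samples = []
for _ in range(DURATION_S):
    usage = psutil.cpu_percent(interval=1)   # blocks for ~1 second per sample
    samples.append(usage)
    if usage > HEADROOM_WARN:
        print(f"warning: CPU at {usage:.0f}%, little headroom left")

print(f"average: {sum(samples) / len(samples):.1f}%  peak: {max(samples):.1f}%")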
Alternatively, you can offload video encoding entirely to dedicated hardware. Many modern GPUs, including Intel integrated graphics and NVIDIA and AMD cards, include built-in video encoders designed specifically for this task. These encoders operate independently of the CPU, effectively bypassing its limitations. However, keep in mind most virtual machines and dedicated servers from cloud hosting providers don't include these accelerators, so this approach is generally easier to implement in a self-hosted environment with your own hardware.
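One quick way to check whether your ffmpeg build includes a hardware encoder is to list the compiled-in encoders and look for the usual suspects, as in the sketch below. A match only means the encoder was built in; whether it actually works still depends on your GPU and drivers.

# Sketch: list ffmpeg's compiled-in H.264 encoders and highlight common
# hardware-accelerated ones. Assumes ffmpeg is installed and on PATH.
import subprocess

HARDWARE_ENCODERS = ("h264_nvenc", "h264_qsv", "h264_amf", "h264_vaapi")

output = subprocess.run(
    ["ffmpeg", "-hide_banner", "-encoders"],
    capture_output=True, text=True, check=True,
).stdout

for name in HARDWARE_ENCODERS:
    status = "available" if name in output else "not built in"
    print(f"{name}: {status}")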
A Note on Passthrough Mode
Owncast also offers a setting for output variants called passthrough mode, which bypasses Owncast's encoding entirely and simply serves the video provided by the broadcasting software. While this might seem like an easy way to avoid a second round of compression (more on double compression in the next tip), it comes with significant drawbacks.
The second encoding step in Owncast is essential for ensuring the video stream is properly formatted for HLS, the protocol used by Owncast to deliver the live stream to viewers. Skipping this step forces Owncast to rely on the broadcasting software's output, which is often not optimized for HLS. As a result, the viewing experience can range from generally acceptable to the stream failing to play altogether.
Owncast's documentation strongly advises against using passthrough mode, and I fully agree. Avoid it if you can.
Tip #2: Use a Higher Bitrate in Your Broadcasting Software
The Owncast documentation contains the following recommendation for setting up broadcasting software:
You will want to configure your broadcasting software to match the highest quality you can offer your viewers. That means if your Owncast server can only handle 720p@2500k you should not configure your broadcasting software to send 1080p@6000k. The more conversion work you ask Owncast to do the more resources it will use on your server, making it even harder to offer the best qualities to your viewers.
I agree with the general premise of not sending more data to Owncast than necessary. It makes sense to match the resolution, as sending 1080p video to Owncast when it's configured to only serve 720p is clearly excessive. However, the bitrate is a more nuanced topic. After speaking with several Owncast streamers, it seems many configure their broadcasting software to send at the same bitrate as their top output variant. In many cases, since Owncast's maximum supported bitrate without modifications is 6,000 kbit/s, they set their broadcasting software to output the same. As it turns out, this approach can sometimes lead to visual quality problems.
Double Compression and Generation Loss
Let's consider the diagram below of an average Owncast setup based on how streamers typically interpret the documentation. First, we start with a raw, uncompressed video signal which we want to capture and stream to viewers. Uncompressed video requires an enormous amount of bandwidth, approximately 2.98 Gbit/s for 1080p at 60 fps, which far exceeds the capabilities of most networks. To make transmission feasible, broadcasting software, like Open Broadcaster Software, processes the signal and encodes it using a video encoder to significantly reduce the bitrate. In this case, it is set to 6,000 kbit/s. The compressed video signal is then transmitted over a network to a server running Owncast where it undergoes another round of encoding. Owncast re-encodes the video, also at 6,000 kbit/s, and generates the final stream which is delivered to viewers over a network.
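That 2.98 Gbit/s figure is easy to sanity-check: 1920 by 1080 pixels, 24 bits per pixel (8 each for red, green, and blue), 60 times per second.

# Back-of-the-envelope check of the uncompressed bitrate for 1080p60 video.
width, height, fps, bits_per_pixel = 1920, 1080, 60, 24

bits_per_second = width * height * bits_per_pixel * fps
print(f"{bits_per_second / 1e9:.2f} Gbit/s")   # roughly the 2.98 Gbit/s quoted above
print(f"vs. a 6,000 kbit/s stream: ~{bits_per_second / 6_000_000:.0f}x more data")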
Double compression is common in this context and addresses many low-level technical requirements, but it can introduce issues if not configured properly. If the first compression step, the broadcasting software, is too aggressive, the video feed Owncast receives may already contain visible compression artifacts. Since Owncast's video encoder is simply trying to match whatever it is given, it will encode those artifacts and potentially introduce new ones into the final stream, further degrading the visual quality.
This effect is known as generation loss. Each time a signal is encoded with a lossy compression algorithm, additional errors are introduced and carried through to the output. The impact becomes especially noticeable after just a few rounds of aggressive compression or when the signal is repeatedly compressed many times. For example, the figure below shows an image and how it degrades after being encoded 200, 900, and 2,000 times using JPEG compression.
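You can reproduce a crude version of this experiment yourself. The sketch below uses the third-party Pillow library, which is an assumption on my part and not necessarily how the figure above was produced, to re-encode an image hundreds of times and save snapshots along the way.

# Sketch: demonstrate generation loss by repeatedly re-encoding a JPEG.
# Assumes the third-party Pillow library (pip install Pillow); "original.jpg"
# is a placeholder file name. Degradation accumulates slowly when nothing else
# changes between generations; lower quality settings make it more dramatic.
import io
from PIL import Image

GENERATIONS = 2000
SNAPSHOTS = {200, 900, 2000}
QUALITY = 75

image = Image.open("original.jpg").convert("RGB")
for generation in range(1, GENERATIONS + 1):
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=QUALITY)   # lossy encode
    buffer.seek(0)
    image = Image.open(buffer).convert("RGB")            # decode, then repeat
    if generation in SNAPSHOTS:
        image.save(f"generation_{generation}.jpg", quality=QUALITY)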
Implementing this Tip
One simple solution to reduce the impact of generation loss in this context is to increase the bitrate produced by your broadcasting software. By doing this, its encoder can send more information and provide a clearer video stream to Owncast's encoder, improving the final output. A good starting point is to take your highest output variant bitrate in Owncast, double it, and set that as your broadcasting software's bitrate.
However, it's important to keep in mind your network's limitations. If your streaming computer and Owncast server are on the same local network, you can generally use a very high bitrate without issues. However, if your Owncast server is hosted externally and you're live streaming to it over the internet, you will need to factor in the upload speed provided by your internet service provider and the quality of your connection. You do not need to worry about your viewers' network since the feed from the broadcasting software isn't sent directly to them.
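As a rough starting point, the sketch below combines the "double your top variant" rule of thumb with a cap that leaves headroom on your measured upload speed. Every number in it is a placeholder for your own setup, and the output is only a first guess to test against.

# Rough starting point for a broadcast bitrate: double the top Owncast output
# variant, then cap it to leave headroom on the measured upload speed.
# All numbers are placeholders for your own setup.
top_variant_kbps = 6000        # highest output variant configured in Owncast
upload_speed_kbps = 40_000     # measured upload speed to the Owncast server
upload_budget = 0.7            # leave ~30% of upload capacity as headroom

suggested = min(top_variant_kbps * 2, int(upload_speed_kbps * upload_budget))
print(f"try a broadcast bitrate around {suggested} kbit/s, then adjust from there")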
Since every setup is different, I cannot suggest universal numbers or settings which will work for everyone, so it's important to experiment and find the settings which work best for your specific setup.
Tip #3: Use BBR Congestion Control
I host my live stream in the United States but have viewers from all around the world. Early on in my live streaming journey, I received feedback from overseas viewers who indicated they were experiencing buffering issues, even though both the server and clients had high-bandwidth internet connections. This issue, as it turns out, can stem from deeper aspects of how TCP works.
TCP is a complex protocol with many essential functions, one of which is congestion control. This function adjusts the amount of data sent across the network when it becomes overloaded. Without congestion control, individual networks, or even the internet as a whole, would face congestive collapse, where protocols repeatedly attempt to resend data over already congested links, worsening the problem and severely limiting the transmission of useful information.
Congestion control is a modular part of TCP, and various algorithms implement it. The default congestion control algorithm in Linux, Windows, and macOS is CUBIC, which relies on packet loss as an indicator of network congestion. In this approach, when the TCP stack detects that packets need to be resent, it assumes congestion has occurred and reduces the transmission speed. While this method works well in most cases, small amounts of packet loss are common across many internet paths, which can make throughput less than ideal with CUBIC, especially over long geographic distances.
Google, the company behind YouTube, faced similar challenges when scaling their platform to deliver large amounts of video content to viewers worldwide. To address these issues, they developed a new congestion control algorithm called BBR. Unlike CUBIC, which relies on packet loss as an indicator of congestion, BBR uses multiple signals, including the available bandwidth and round-trip time, to create a more accurate model of the network and its characteristics. This allows BBR to adjust transmission rates more effectively, improving throughput and reducing latency.
The slide below from Google makes it especially clear that BBR maintains good throughput at varying levels of packet loss, whereas CUBIC quickly deteriorates under similar conditions.
Given everything I had learned about BBR and how the problems it aimed to address were similar to mine, I decided to switch to it on my infrastructure. I don't have objective numbers to quantify the exact benefit of switching to BBR, but the results have been noticeable in a subjective sense. After implementing BBR, the number of buffering complaints I received dropped to effectively zero. This leads me to believe the change had a positive impact in my case.
Implementing this Tip
BBR has been available in the Linux kernel since version 4.9, making it straightforward to enable. Simply set the sysctl parameters below on your web server. These settings are only required on the sending side (the web server); viewers do not need to take any action.
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
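After applying them (for example, by placing them in a file under /etc/sysctl.d/ and running sysctl --system), it's worth confirming the kernel actually picked them up. A small sketch for checking from Python:

# Sketch: confirm BBR and fq are actually active by reading the kernel's
# procfs entries. Run this on the server after applying the settings above.
from pathlib import Path

def read_proc(path: str) -> str:
    return Path(path).read_text().strip()

available = read_proc("/proc/sys/net/ipv4/tcp_available_congestion_control")
if "bbr" not in available.split():
    print("bbr is not available; the tcp_bbr module may not be loaded")

print("congestion control:", read_proc("/proc/sys/net/ipv4/tcp_congestion_control"))
print("default qdisc:     ", read_proc("/proc/sys/net/core/default_qdisc"))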
A Note on QUIC
If you're using Caddy as your web server or reverse proxy, chances are you have support for HTTP/3 enabled.
HTTP/3 swaps out TCP as the underlying transport protocol for a new one called QUIC. QUIC is a newer connection-oriented protocol built on top of the connectionless UDP protocol, which means all of its protocol logic must be handled by userspace applications. In contrast, most of TCP is handled directly by the kernel. As a result, new protocol features, such as BBR congestion control, depend on the userspace applications implementing QUIC rather than the kernel. Unfortunately, Caddy, and specifically its QUIC implementation through the quic-go library, does not support BBR and has no plans to add it in the near future. This also means any clients speaking HTTP/3 with your web server will not benefit from the sysctl configuration applied earlier.
Personally, I have opted to disable HTTP/3 on all my infrastructure for the time being. I understand there are some theoretical benefits; however, the trade-off is bypassing decades of optimizations accumulated by the Linux TCP stack and my network hardware. For large companies with the resources to build and tune custom QUIC stacks, specialized hardware, and dedicated smartphone apps, it probably works well. But for smaller environments like mine, QUIC is simply less efficient and results in higher CPU usage and lower throughput.
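For reference, recent Caddy versions let you limit the advertised protocols in the Caddyfile's global options, which is one way to turn HTTP/3 off. The snippet below is only a sketch; double-check the Caddy documentation for the version you're running, since these options have changed between releases.

# Caddyfile global options: advertise only HTTP/1.1 and HTTP/2, leaving
# HTTP/3 (and therefore QUIC) disabled. Verify against the Caddy docs
# for your version.
{
    servers {
        protocols h1 h2
    }
}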
I will revisit this topic at a later time as the ecosystem evolves.
Final Thoughts
These three tips had the biggest impact on my stream, but there are plenty of other settings worth exploring. I also didn't cover additional features like S3 storage and content delivery networks, which bring their own unique advantages and trade-offs.
The key is to experiment and find what works best for your setup. Some tweaks will improve performance, while others might introduce new problems. It's an iterative process, especially since every streaming environment is unique. Keep refining, keep testing, and you will find the right balance for your setup.