HLS vs WebRTC: Comparing Two Video Streaming Protocols

Much like the rest of the technology around us, audio, video, and data delivery has become another round of the never-ending Apple vs Google rivalry. In this article, that rivalry takes the form of HTTP Live Streaming (HLS) and WebRTC, which are both widely used in their respective spaces.

HLS, the pioneer in the streaming world, made its debut in 2009, followed by the release of WebRTC in 2011. Although the two are frequently compared, they were created for entirely different purposes. And to this day, the trade-off between scale and low latency remains a crucial consideration in the streaming industry.

HLS boasts a robust infrastructure that ensures quality for every viewer, but this comes at the expense of latency. Latencies for HLS streams typically range from 15 to 30 seconds, which was acceptable in the past but is increasingly problematic as viewers come to expect live content to arrive in real time.

WebRTC, on the other hand, was designed as a low-latency solution for web chat, conferencing, and data sharing, without the need for plugins or additional downloads. It is an open project that developers can customize to their needs. However, when WebRTC ventures into HLS territory of large, broadcast-scale audiences, scale and consistent quality become challenging to achieve.

Considering their unique origins and initial purposes, it comes as no surprise that HLS and WebRTC deliver audio, video, and data in fundamentally different ways.


How HLS and WebRTC Differ

HLS divides streaming data into "chunks," typically 2 to 10 seconds long, which are transmitted to and downloaded by the viewer's player. Because each chunk is requested separately, the quality can change from one chunk to the next: your first 10 seconds may arrive in 720p HD, but if your connection degrades, the next chunk may be delivered in 360p instead. This adaptive behavior generally gives the end user a smoother viewing experience, since the stream downshifts rather than stalling. However, viewers are starting to find the latency of HLS streams increasingly unacceptable.
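To make the chunking concrete, here is a sketch of the two playlist files an HLS player works with: a multivariant (master) playlist listing the renditions it can switch between, and a media playlist listing the individual chunks. The URIs, bitrates, and six-second segment duration are hypothetical.

```
#EXTM3U
# Multivariant playlist: one entry per rendition the player can switch to.
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
```

```
#EXTM3U
# Media playlist for one rendition: each #EXTINF entry is a downloadable chunk.
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.0,
segment0.ts
#EXTINF:6.0,
segment1.ts
#EXTINF:6.0,
segment2.ts
```

The player measures how quickly each chunk downloads and simply requests the next chunk from whichever rendition its current bandwidth can sustain.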

On the other hand, WebRTC takes a different approach by sending packets instead of chunks. These packets can be thought of as tiny chunks, with each packet capped at roughly 1,200 bytes - a small fraction of a second of HD video. This allows for much faster delivery, as transmission typically occurs directly between the clients without a middleman media server. The use of packets is a crucial aspect of WebRTC, especially in enabling real-time communication. However, because packets are small and sent rapidly, more can go wrong in transit: packets can be lost, arrive out of order, be duplicated, or arrive with enough timing variation to cause jitter.
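As a rough illustration, here is a minimal sketch of one side of a WebRTC session in browser TypeScript. It assumes a signaling channel (shown here as a hypothetical WebSocket) already exists to carry the offer and ICE candidates between peers, since WebRTC itself does not define that exchange.

```typescript
// Minimal sketch: one side of a WebRTC call in the browser.
async function startCall(signaling: WebSocket): Promise<void> {
  // A STUN server lets each peer discover its public address so media
  // packets can flow directly between them where the network allows it.
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Capture camera and microphone and hand each track to the connection;
  // the browser encodes the media and splits it into small packets.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Forward ICE candidates to the other peer as they are discovered.
  pc.onicecandidate = (event) => {
    if (event.candidate) {
      signaling.send(JSON.stringify({ candidate: event.candidate }));
    }
  };

  // Create the session description (offer) and send it to the remote peer.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ offer }));
}
```

The remote peer would respond with an answer over the same signaling channel, after which encoded audio and video flow as packets directly between the two browsers wherever the network path allows.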

Phenix revolutionized video streaming with a delivery infrastructure built specifically for ultra-low-latency streaming at broadcast scale; the entire platform was designed and built from the ground up to deliver high-quality video in less than half a second to millions of viewers anywhere in the world.