Next.js real-time video streaming: HLS.js and alternatives


Real-time video streaming is essential for modern web applications. It supports activities like live broadcasting, video conferencing, and media interaction. For developers using Next.js, finding a reliable streaming solution is important for creating smooth user experiences.


One popular choice is HLS.js, a JavaScript library that enables adaptive bitrate streaming using HTTP Live Streaming (HLS) directly in the browser.

In this article, we will show you how to set up real-time video streaming in Next.js using HLS.js. You’ll learn about integration and advanced features like adaptive streaming and token-based authentication. We will also compare HLS.js with alternative solutions, both open source and managed, to help you pick the best option based on cost, latency, features, and scalability.

What is HLS.js?

HLS.js is a free JavaScript library that allows browsers to play HTTP Live Streaming (HLS) content. HLS is a popular video streaming method created by Apple. It helps deliver live and on-demand video smoothly over standard web servers.

HLS.js works even in browsers that don’t support HLS natively, which varies across platforms. It fetches and parses HLS manifest files (usually with the .m3u8 extension), demuxes the audio and video streams, and feeds the segments into a standard HTML5 video element via Media Source Extensions. This structure lets playback adapt to changing network conditions, delivering the best possible quality with minimal buffering.

HLS.js features:

  • Adaptive Bitrate Streaming (ABR): HLS.js automatically changes video quality based on internet speed and device performance. It switches between quality levels defined in the HLS manifest to give viewers the best experience without interruptions
  • Live and on-demand streaming: It supports both live broadcasts and video-on-demand (VOD) content, handling various types of HLS streams
  • Fragmented MP4 (fMP4) and MPEG-2 Transport Stream (TS) support: HLS.js works with common HLS segment formats, allowing different content types
  • Audio-only streams: It can manage streams that only include audio
  • Extensive event handling: HLS.js has many events that help developers track the streaming process, handle errors, monitor buffering, and manage playback for a customized experience
  • Low Latency HLS (LL-HLS) support: HLS.js includes features for Low Latency HLS, which is important for reducing delays during live events like sports
  • Performance optimization: Users can optimize performance with settings like buffer tuning, pre-loading segments, and managing network requests

Prerequisites for our demo project

Before implementing HLS.js in Next.js, ensure you have:

  • Node.js (v18+) installed
  • A Next.js (v14+) project set up
  • Basic knowledge of React Hooks and JavaScript
  • Basic understanding of HTML5 Video

Project setup

Start by creating a fresh Next.js project using the official create-next-app command:

npx create-next-app@latest next-video-streaming

During the setup process, you’ll be prompted to configure your project. For video streaming applications, we recommend the following selections:

TypeScript: Yes (recommended for better development experience)
ESLint: Yes (helps maintain code quality)
Tailwind CSS: Yes (for styling components)
App Router: Yes (uses the modern App Router)
Import alias: Yes (helps with cleaner imports)

Navigate to your project directory:

cd next-video-streaming

Next, we will install hls.js for our video streaming functionality:

npm install hls.js

If you’re using TypeScript, note that recent versions of hls.js bundle their own type definitions, so you don’t need to install a separate @types package.

Streaming implementation

To start a live broadcast using HLS, you need to create and manage an HLS playlist that keeps updating with new video segments. This section explains how to add live streaming to your Next.js application.

Starting a live broadcast using an HLS playlist

HLS splits video into small segments and creates a playlist file (.m3u8) that lists these segments. For live streaming, this playlist updates regularly with new segments as the broadcast continues.
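
For example, a live media playlist might look like the fragment below at one moment; on the next refresh, EXT-X-MEDIA-SEQUENCE advances and older segments drop off the top. The segment names and durations here are illustrative:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:4.000,
segment_120.ts
#EXTINF:4.000,
segment_121.ts
#EXTINF:4.000,
segment_122.ts
```

Note that there is no #EXT-X-ENDLIST tag: its absence is what tells the player the stream is live and the playlist should be re-fetched.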

Setting up a live stream component

Create a new component for live streaming:

"use client";
import 
        throw new Error(data.error  from 'react';
import Hls from 'hls.js';

export default function LiveStream(
        throw new Error(data.error : ) {
  const videoRef = useRef(null);
  const hlsRef = useRef(null);
  const [isLive, setIsLive] = useState(false);
  const [error, setError] = useState(null);
  useEffect(() => {
    const video = videoRef.current;
    if (Hls.isSupported() && video) {
      const hls = new Hls({
        // Enable live streaming optimizations
        liveSyncDurationCount: 3,
        liveMaxLatencyDurationCount: 5,
      });
      hlsRef.current = hls;
      // Load the live playlist
      hls.loadSource(playlistUrl);
      hls.attachMedia(video);
      return () => {
        hls.destroy();
      };
    } else if (video?.canPlayType('application/vnd.apple.mpegurl')) {
      // Safari native HLS support
      video.src = playlistUrl;
      setIsLive(true);
    } else {
      setError('Live streaming not supported in this browser');
    }
  }, [playlistUrl]);
  const startBroadcast = () => {
    const video = videoRef.current;
    if (video && isLive) {
      video.play();
    }
  };
  const stopBroadcast = () => {
    const video = videoRef.current;
    if (video) {
      video.pause();
    }
  };
  return (
    <div>
      <span>{isLive ? 'LIVE' : 'OFFLINE'}</span>
      {error ? (
        <p>Error: {error}</p>
      ) : (
        <video ref={videoRef} controls muted playsInline />
      )}
      <button onClick={startBroadcast}>Start</button>
      <button onClick={stopBroadcast}>Stop</button>
    </div>
  );
}

The LiveStream component shows how to stream video. It takes an HLS playlist URL (.m3u8) and uses the Hls class to load the stream into an HTML5 video element. Important configurations like liveSyncDurationCount and liveMaxLatencyDurationCount help keep the stream close to real-time, reducing delays while ensuring smooth playback.

For browsers where HLS.js can’t run but native HLS playback is available, such as Safari on iOS, the component sets the video source directly to the playlist URL. This way, the component works across most modern browsers.

Now, import the component into your page.js file:

'use client';
import LiveStream from "@/components/live-stream";

export default function Home() {
  // Point this at your own HLS playlist URL (placeholder shown here)
  return <LiveStream playlistUrl="https://example.com/live/stream.m3u8" />;
}

Start the dev server and go to your browser. You will be able to start and stop the stream.

Managing stream states using HLS.js events

HLS.js has a system of events that helps you manage and control streaming. By listening to these events, you can track the streaming status (like live, buffering, or errors) and improve the user experience with immediate updates. The key events for live streaming are:

  • Hls.Events.MANIFEST_LOADED: This happens when the main playlist is loaded
  • Hls.Events.LEVEL_LOADED: This is triggered when a quality level playlist is loaded
  • Hls.Events.FRAG_LOADED: This occurs when video segments load successfully
  • Hls.Events.ERROR: This handles different types of streaming errors
  • Hls.Events.BUFFER_APPENDED: This indicates when video data is added to the buffer

Now, to integrate event handling into the LiveStream component, modify the useEffect Hook:

useEffect(() => {
  const video = videoRef.current;

  if (Hls.isSupported() && video) {
    const hls = new Hls({
      liveSyncDurationCount: 3,
      liveMaxLatencyDurationCount: 5,
    });
    hlsRef.current = hls;

    hls.loadSource(playlistUrl);
    hls.attachMedia(video);

    // Event listeners
    hls.on(Hls.Events.MANIFEST_PARSED, () => {
      setIsLive(true);
    });
    hls.on(Hls.Events.ERROR, (event, data) => {
      if (data.fatal) {
        setError(`Stream error: ${data.details}`);
      }
    });

    return () => hls.destroy();
  } else if (video?.canPlayType('application/vnd.apple.mpegurl')) {
    video.src = playlistUrl;
    setIsLive(true);
  } else {
    setError('Live streaming not supported in this browser');
  }
}, [playlistUrl]);

This code updates the isLive state when the manifest is parsed and sets an error message for fatal errors, improving user feedback.
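
The Hls.Events.ERROR payload also tells you whether recovery is worth attempting. The hls.js documentation recommends calling hls.startLoad() after a fatal network error and hls.recoverMediaError() after a fatal media error, and destroying the instance otherwise. A small sketch of that decision follows; the recoveryAction function name is ours, not part of hls.js:

```javascript
// Map a fatal hls.js error type to the documented recovery call.
// The string values match Hls.ErrorTypes.NETWORK_ERROR / MEDIA_ERROR.
function recoveryAction(errorType) {
  switch (errorType) {
    case 'networkError':
      return 'startLoad';          // hls.startLoad() retries loading
    case 'mediaError':
      return 'recoverMediaError';  // hls.recoverMediaError() resets the media pipeline
    default:
      return 'destroy';            // unrecoverable: hls.destroy()
  }
}
```

You would call this from inside the Hls.Events.ERROR handler when data.fatal is true, dispatching on data.type.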



HLS.js advanced features

As your streaming app becomes more complex and gains more users, it’s important to add features that enhance performance, security, and user experience. This section discusses three key advanced features: 1) adaptive bitrate streaming to provide different video quality levels, 2) secure token-based authentication for user safety, and 3) performance optimization through configuration adjustments.

Handling multiple quality levels for Adaptive Bitrate Streaming

Adaptive Bitrate Streaming (ABR) is important for providing a smooth video experience, no matter how good or bad the internet connection is or what kind of device you are using. HLS.js uses smart ABR methods that automatically change the video quality based on current network speed, how much data is stored, and the device being used.

Understanding ABR in HLS.js

HLS.js determines the best video quality based on several factors:

  • Network bandwidth: This measures how fast segments download
  • Buffer health: This shows how much content is stored ahead of what you’re watching
  • Device capabilities: This includes the screen size and processing power of your device
  • Historical performance: This looks at past decisions for changing quality and their results
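
Internally, hls.js smooths its bandwidth measurements with exponentially weighted moving averages, which is what the abrEwmaFastLive and abrEwmaSlowLive settings tune. The toy estimator below is similar in spirit but is an illustration, not hls.js’s actual code:

```javascript
// Toy EWMA bandwidth estimator: recent samples count more, old ones decay.
// halfLife controls how many samples it takes for a sample's weight to halve.
function makeEwma(halfLife) {
  let estimate = 0;
  let totalWeight = 0;
  const alpha = Math.exp(Math.log(0.5) / halfLife); // decay per unit weight
  return {
    sample(value, weight = 1) {
      const decay = Math.pow(alpha, weight);
      estimate = decay * estimate + (1 - decay) * value;
      totalWeight = decay * totalWeight + (1 - decay);
    },
    value() {
      // Normalize so early estimates aren't biased toward zero
      return totalWeight > 0 ? estimate / totalWeight : 0;
    },
  };
}
```

A small half-life (like abrEwmaFastLive) reacts quickly to bandwidth drops; a larger one (like abrEwmaSlowLive) smooths out momentary blips.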

In our Next.js application, we can allow users to choose video quality levels for more control.
HLS.js automatically adjusts video quality based on the playlist’s variant streams. However, you can improve the user experience by letting them select quality levels. Here is an example of how to set this up in the LiveStream component:

"use client";
import { useEffect, useRef, useState } from 'react';
import Hls from 'hls.js';
export default function LiveStream({ playlistUrl }: { playlistUrl: string }) {
  const videoRef = useRef<HTMLVideoElement | null>(null);
  const hlsRef = useRef<Hls | null>(null);
  const [isLive, setIsLive] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const [qualityLevels, setQualityLevels] = useState<{ height: number }[]>([]);
  const [currentQuality, setCurrentQuality] = useState<number | null>(null);
  useEffect(() => {
    const video = videoRef.current;
    if (Hls.isSupported() && video) {
      const hls = new Hls({
        // Enable live streaming optimizations
        liveSyncDurationCount: 3,
        liveMaxLatencyDurationCount: 5,
      });
      hlsRef.current = hls;
      // Load the live playlist
      hls.loadSource(playlistUrl);
      hls.attachMedia(video);
      // Event listeners
      hls.on(Hls.Events.MANIFEST_PARSED, () => {
        setQualityLevels(hls.levels); // store available quality levels
        setIsLive(true);
      });
      hls.on(Hls.Events.LEVEL_SWITCHED, (event, data) => {
        console.log(`Switched to quality level: ${data.level}`);
        setCurrentQuality(data.level);
      });
      hls.on(Hls.Events.ERROR, (event, data) => {
        if (data.fatal) {
          setError(`Stream error: ${data.details}`);
        }
      });
      return () => {
        hls.destroy();
      };
    } else if (video?.canPlayType('application/vnd.apple.mpegurl')) {
      // Safari native HLS support
      video.src = playlistUrl;
      setIsLive(true);
    } else {
      setError('Live streaming not supported in this browser');
    }
  }, [playlistUrl]);
  const changeQuality = (levelIndex: number) => {
    if (hlsRef.current) {
      hlsRef.current.currentLevel = levelIndex;
    }
  };
  const startBroadcast = () => {
    const video = videoRef.current;
    if (video && isLive) {
      video.play();
    }
  };
  const stopBroadcast = () => {
    const video = videoRef.current;
    if (video) {
      video.pause();
    }
  };
  return (
    <div>
      <span>{isLive ? 'LIVE' : 'OFFLINE'}</span>
      {error ? (
        <p>Error: {error}</p>
      ) : (
        <video ref={videoRef} controls muted playsInline />
      )}
      {qualityLevels.map((level, index) => (
        <button key={index} onClick={() => changeQuality(index)}>
          {level.height}p{currentQuality === index ? ' (current)' : ''}
        </button>
      ))}
      <button onClick={startBroadcast}>Start</button>
      <button onClick={stopBroadcast}>Stop</button>
    </div>
  );
}

The Hls.Events.MANIFEST_PARSED event gathers the available quality levels and saves them in the state. Each quality level includes information like height for resolution. The changeQuality function lets users manually pick a quality level by setting hls.currentLevel. HLS.js will automatically change quality levels unless the currentLevel is set. You can adjust how quickly it changes quality with settings like abrEwmaFastLive and abrEwmaSlowLive to respond to network changes. The component shows buttons for each quality level, which let users switch resolutions easily:

Livestream Screen Demo With Quality Buttons

If you need specific behavior from ABR, you can also create your own switching logic.
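
For instance, a hand-rolled rule might cap the chosen level by measured bandwidth with a safety margin. This is only a sketch; hls.js’s real controller also weighs buffer health and switching history, and it assumes (as hls.levels does) that levels are sorted by ascending bitrate:

```javascript
// Pick the highest level whose bitrate fits within a fraction of measured bandwidth.
// levels: array of { bitrate } sorted ascending; bandwidthBps: current estimate.
function pickLevel(levels, bandwidthBps, safetyFactor = 0.8) {
  let best = 0; // fall back to the lowest rendition
  levels.forEach((level, index) => {
    if (level.bitrate <= bandwidthBps * safetyFactor) best = index;
  });
  return best;
}
```

You could feed the result to hls.currentLevel (or hls.nextLevel for a smoother transition) whenever your bandwidth estimate updates.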

Implementing token-based authentication for secure streams

Protecting HLS streams is important for premium content, private broadcasts, and subscription services. HLS.js offers different ways to authenticate users by using a flexible system that lets you add authentication tokens to playlist requests and segment downloads.

HLS authentication works in two main steps:

  • Playlist authentication: This secures access to the main .m3u8 manifest file
  • Segment authentication: This protects individual video segments, such as .ts or .mp4 files

The typical authentication flow is:

  1. The user logs into your application
  2. The server creates a temporary token
  3. The client uses this token in HLS requests
  4. The server checks the token before delivering content

Implementing custom request headers

HLS.js lets you change HTTP requests using the xhrSetup callback. Here’s how to set up token-based authentication:

"use client";
import { useEffect, useRef, useState } from 'react';
import Hls from 'hls.js';
interface AuthResponse {
  authenticated: boolean;
  user?: {
    id: string;
    username: string;
    subscription: string;
  };
  stream?: {
    id: string;
    name: string;
    url: string;
  };
  accessGranted?: boolean;
  error?: string;
}
interface SecureVideoPlayerProps {
  streamId: string;
  authToken: string;
  onAuthError: (error: string) => void;
}
export default function SecureVideoPlayer({ streamId, authToken, onAuthError }: SecureVideoPlayerProps) {
  const videoRef = useRef<HTMLVideoElement | null>(null);
  const hlsRef = useRef<Hls | null>(null);
  const [isLive, setIsLive] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const [isLoading, setIsLoading] = useState(true);
  const [isAuthenticated, setIsAuthenticated] = useState(false);
  const [user, setUser] = useState<AuthResponse['user'] | null>(null);
  const [streamUrl, setStreamUrl] = useState<string | null>(null);
  const [qualityLevels, setQualityLevels] = useState<{ height: number }[]>([]);
  const [currentQuality, setCurrentQuality] = useState<number | null>(null);
  // Authenticate with the API
  const authenticate = async () => {
    try {
      setIsLoading(true);
      setError(null);
      const response = await fetch(`/api/auth?token=${authToken}&streamId=${streamId}`, {
        method: 'GET',
        headers: {
          'Content-Type': 'application/json',
        },
      });
      const data: AuthResponse = await response.json();
      if (!response.ok) {
        throw new Error(data.error || 'Authentication failed');
      }
      if (!data.authenticated || !data.accessGranted) {
        throw new Error('Access denied to this stream');
      }
      setIsAuthenticated(true);
      setUser(data.user ?? null);
      setStreamUrl(data.stream?.url || null);
    } catch (err) {
      const errorMessage = err instanceof Error ? err.message : 'Authentication failed';
      setError(errorMessage);
      onAuthError(errorMessage);
    } finally {
      setIsLoading(false);
    }
  };
  // Initialize HLS stream after authentication
  useEffect(() => {
    if (!isAuthenticated || !streamUrl) return;
    const video = videoRef.current;
    if (!video) return;
    if (Hls.isSupported() && video) {
      const hls = new Hls({
        xhrSetup: (xhr) => {
          xhr.setRequestHeader('Authorization', `Bearer ${authToken}`);
          xhr.setRequestHeader('X-Custom-Auth', authToken);
          xhr.setRequestHeader('X-Timestamp', Date.now().toString());
          xhr.setRequestHeader('X-Stream-ID', streamId);
        },
        liveSyncDurationCount: 3,
        liveMaxLatencyDurationCount: 5,
        manifestLoadingTimeOut: 10000,
        fragLoadingTimeOut: 20000,
      });
      hlsRef.current = hls;
      hls.loadSource(streamUrl);
      hls.attachMedia(video);
      hls.on(Hls.Events.MANIFEST_PARSED, () => {
        setQualityLevels(hls.levels);
        setIsLive(true);
        setIsLoading(false);
      });
      hls.on(Hls.Events.LEVEL_SWITCHED, (event, data) => {
        console.log(`Switched to quality level: ${data.level}`);
        setCurrentQuality(data.level);
      });
      hls.on(Hls.Events.ERROR, (event, data) => {
        if (data.fatal) {
          switch (data.type) {
            case Hls.ErrorTypes.NETWORK_ERROR:
              if (data.response?.code === 401 || data.response?.code === 403) {
                setError('Authentication failed. Please log in again.');
                onAuthError('Authentication failed. Please log in again.');
              } else {
                setError('Network error occurred');
              }
              break;
            case Hls.ErrorTypes.MEDIA_ERROR:
              setError('Media error occurred');
              break;
            default:
              setError('An unknown error occurred');
              break;
          }
        }
      });
      return () => {
        hls.destroy();
      };
    } else if (video?.canPlayType('application/vnd.apple.mpegurl')) {
      video.src = streamUrl;
      setIsLive(true);
      setIsLoading(false);
    } else {
      setError('Live streaming not supported in this browser');
    }
  }, [isAuthenticated, streamUrl, authToken, streamId, onAuthError]);
  useEffect(() => {
    authenticate();
  }, [streamId, authToken]);
  const changeQuality = (levelIndex: number) => {
    if (hlsRef.current) {
      hlsRef.current.currentLevel = levelIndex;
    }
  };
  const startBroadcast = () => {
    const video = videoRef.current;
    if (video && isLive) {
      video.play();
    }
  };
  const stopBroadcast = () => {
    const video = videoRef.current;
    if (video) {
      video.pause();
    }
  };
  if (isLoading) {
    return <p>Authenticating and loading stream...</p>;
  }

  if (error) {
    return <p>Error: {error}</p>;
  }

  return (
    <div>
      {user && (
        <p>
          User: {user.username} | Subscription: {user.subscription}
        </p>
      )}
      <span>{isLive ? 'LIVE' : 'OFFLINE'}</span>
      {isAuthenticated && <span>Authenticated</span>}
      <video ref={videoRef} controls muted playsInline />
      {qualityLevels.length > 0 && (
        <div>
          <p>Quality Options:</p>
          {qualityLevels.map((level, index) => (
            <button key={index} onClick={() => changeQuality(index)}>
              {level.height}p{currentQuality === index ? ' (current)' : ''}
            </button>
          ))}
        </div>
      )}
      <button onClick={startBroadcast}>Start</button>
      <button onClick={stopBroadcast}>Stop</button>
    </div>
  );
}

In the code above, we used the xhrSetup function in the HLS instance to add an Authorization header with a Bearer token to every HTTP request made by HLS.js, such as for .m3u8 playlists and .ts/fMP4 segments. We pass the authToken as a prop, which we can retrieve after user signup through our authentication service from the backend API. This process ensures secure access for each user.

Additionally, we should listen for the Hls.Events.ERROR event to capture authentication failures, such as an invalid token, so that we can display clear error messages:

Error Message Demo

We have already created a mock user database, mock stream data, and an authentication form, kept short for the purposes of this article. For the full code, visit the GitHub repository.

Optimizing performance with HLS.js configurations

HLS.js offers many configuration options to help improve performance across various situations. By setting it up correctly, you can reduce startup time, decrease buffering, and enhance the overall viewing experience.

Here are some settings for HLS.js to improve performance in the LiveStream component. Add this option to the Hls instance:

  const hls = new Hls({
    liveSyncDurationCount: 3,
    liveMaxLatencyDurationCount: 5,
    maxBufferLength: 30,
    maxMaxBufferLength: 60,
    lowLatencyMode: true,
    backBufferLength: 90,
    abrEwmaFastLive: 3,
    abrEwmaSlowLive: 9,
    enableWorker: true,
    startLevel: -1,
  });

  • liveSyncDurationCount: 3: This keeps the player three segments behind the live broadcast, balancing latency and stability
  • liveMaxLatencyDurationCount: 5: If latency exceeds five segments, the stream skips segments to stay close to real-time
  • lowLatencyMode: true: This activates features that help achieve less than three seconds of latency
  • maxBufferLength: 30: The buffer is limited to 30 seconds to reduce memory use
  • maxMaxBufferLength: 60: The buffer size is capped at 60 seconds to prevent too much buffering on fast networks
  • backBufferLength: 90: This keeps 90 seconds of past content available for DVR features
  • abrEwmaFastLive: 3: This quickly adjusts to network changes for live streams
  • abrEwmaSlowLive: 9: This helps stabilize stream quality during small changes
  • enableWorker: true: This uses a Web Worker to handle processing, improving performance on the main thread
  • startLevel: -1: This allows HLS.js to automatically choose the best quality based on the network conditions
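
Since both latency settings are counted in segments, the actual delay depends on segment duration, which is set by your encoder rather than by HLS.js. Assuming 4-second segments (a common hls_time value), the arithmetic works out as follows:

```javascript
// Back-of-the-envelope latency math for the settings above.
const segmentDuration = 4;              // seconds; assumed, set by the encoder (hls_time)
const liveSyncDurationCount = 3;        // stay ~3 segments behind the live edge
const liveMaxLatencyDurationCount = 5;  // jump forward if we drift past 5 segments

const targetLatency = segmentDuration * liveSyncDurationCount;    // 12 seconds behind live
const maxLatency = segmentDuration * liveMaxLatencyDurationCount; // seek forward past 20 seconds
```

Shorter segments therefore lower latency directly, which is why LL-HLS setups pair lowLatencyMode with sub-second partial segments on the packaging side.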

Hls.js alternatives

HLS.js is a strong option for video streaming in Next.js. However, developers can also choose from several alternatives, ranging from open source libraries to managed services: Video.js, Stream’s Video API, Daily.co, a custom Node.js/FFmpeg pipeline, and Dolby OptiView.

Each alternative has its own strengths, making them suitable for different use cases, such as live broadcasting, video conferencing, or low-latency streaming.


Video.js

Video.js is a popular open source HTML5 video player. It works consistently across different browsers and devices; older versions could fall back to Flash when HTML5 video wasn’t fully supported, while modern versions are HTML5-only.

While Video.js mainly focuses on playing videos, it has many plugins that let you customize it and add more features with special plugins. Video.js also provides a complete user interface with options for ads, analytics, and virtual reality.

Key Video.js features include:

  • Cross-browser compatibility: It works well on many browsers and devices, both desktop and mobile, to provide a consistent playback experience
  • Customizable UI: It allows developers to easily customize the video player’s look using CSS to match their application’s style
  • Plugin architecture: It has a strong, well-documented plugin system that lets developers add features such as analytics, advertising, or support for advanced streaming protocols. HLS support now comes from the bundled @videojs/http-streaming (VHS) engine, which superseded the older videojs-contrib-hls plugin
  • Adaptive streaming support (via plugins): With the right plugins, it can play HLS and DASH streams, automatically adjusting video quality based on network conditions.
  • Accessibility: It offers support for captions, subtitles, and audio descriptions, improving the viewing experience for everyone.
  • Event-driven API: It provides many events for controlling playback, checking status, and working with other application logic

Pros of Video.js

  • This tool has been developing for over ten years and has a large, active community
  • Its plugin system allows for many different uses
  • It provides clear guides and API references
  • Its large user base means there are many resources for help and best practices

Cons of Video.js

  • In Video.js versions before 7, HLS playback required an extra plugin (videojs-contrib-hls); it is now handled by the bundled http-streaming engine, which adds some weight
  • Compared to HLS.js, which focuses only on HLS parsing, Video.js can be larger because of its wider range of features
  • Although it supports HLS, setting up and optimizing advanced features like specific ABR strategies might take more work than HLS.js’s default setup

Use case

Here’s how to install Video.js via npm or yarn:

npm install video.js @types/video.js

Here’s how to implement it in your project:

import { useEffect, useRef } from 'react';
import videojs from 'video.js';
import 'video.js/dist/video-js.css';

export default function VideoJSPlayer({ options }) {
  const videoRef = useRef(null);
  const playerRef = useRef(null);

  useEffect(() => {
    if (!playerRef.current) {
      const videoElement = videoRef.current;
      if (!videoElement) return;

      playerRef.current = videojs(videoElement, options, () => {
        console.log('Player ready');
      });
    }

    return () => {
      if (playerRef.current) {
        playerRef.current.dispose();
        playerRef.current = null;
      }
    };
  }, [options]);

  return (
    <div data-vjs-player>
      <video ref={videoRef} className="video-js vjs-big-play-centered" />
    </div>
  );
}

Stream API (video)

Stream provides a set of APIs for building real-time applications. Its Video API offers a simple, managed way to handle live streaming and video playback on demand. Instead of using client-side libraries for playback, Stream’s API manages the entire streaming process from capturing and encoding video to delivering it globally. This approach reduces the technical burden on developers, letting them concentrate on building their applications.

Its key features include:

  • Managed streaming infrastructure: Stream takes care of video uploads, converts videos to different quality levels and formats, stores them, and delivers them globally using a CDN
  • Live and on-demand: It supports live broadcasts in real-time and also allows for the storage and delivery of pre-recorded videos
  • Low-latency options: It is designed to provide fast streaming for interactive experiences
  • Scalability: Stream easily handles a large number of viewers without needing developers to manage servers or networks
  • Integration with other stream products: You can easily combine it with Stream’s chat or activity feed APIs to create engaging, interactive experiences
  • Recording and archiving: It often includes features to record live streams for later viewing

Pros of using Stream

  • It reduces the time it takes to launch a product by simplifying complex backend streaming systems
  • You don’t have to manage FFmpeg, media servers, or content delivery networks (CDNs)
  • It uses a global network to keep performance consistent for large audiences
  • It provides a complete set of features for professional streaming right away

Cons of using Stream

  • It is a paid API service, which means you will incur costs based on usage, like minutes, bandwidth, and storage
  • It gives developers less control over encoding, packaging, and delivery compared to managing their own system
  • Using a specific API can create a dependency on that service provider

Use case

Here’s how to install Stream via npm or yarn:

npm install @stream-io/video-react-sdk

Here’s a basic implementation of Stream:

import {
  StreamCall,
  StreamVideo,
  StreamVideoClient,
  User,
} from "@stream-io/video-react-sdk";

const apiKey = "your-api-key";
const userId = "user-id";
const token = "authentication-token";
const user: User = { id: userId };

const client = new StreamVideoClient({ apiKey, user, token });
const call = client.call("default", "my-first-call");
call.join({ create: true });

export const LiveStream = () => {
  return (
    <StreamVideo client={client}>
      <StreamCall call={call}>
        {/* Render your call UI here */}
      </StreamCall>
    </StreamVideo>
  );
};

Daily.co

Daily.co offers an easy-to-use API and SDKs that let you add real-time video and audio calls to both web and mobile apps. It uses WebRTC technology to handle video conferencing smoothly, whether it’s one-on-one or with many people.

Daily.co provides APIs for peer-to-peer video rooms with features like screen sharing, recording, and end-to-end encryption. This makes it a great choice for live interactions where quick connections and clear communication are essential.

Its key features include:

  • WebRTC focus: It is designed for real-time, low-latency video and audio calls for one-on-one or group communication
  • Cross-platform SDKs: Daily.co provides SDKs for JavaScript, React, React Native, iOS, and Android to ensure a consistent user experience across devices
  • Interactive live streaming: It allows up to 100,000 participants to join a real-time session, all able to send and receive video
  • Recording and compositing: It offers cloud recording options for sessions and can combine multiple video feeds into one output
  • Screen sharing: It includes built-in tools for sharing screens during calls
  • AI-ready features: It integrates with AI platforms to provide features like real-time transcription, background blur, and noise cancellation
  • Global mesh network: It uses a distributed network to reduce latency and improve video quality based on location

Pros of using Daily.co

  • It offers very fast response times
  • It simplifies WebRTC, providing an easy-to-use API for developers
  • It is designed to deliver clear video quality for multiple participants
  • It works well for apps that focus on live interactions

Cons of using Daily.co

  • This is a paid service with costs that can be high, especially for large-scale broadcasts
  • While it can record videos, it excels in real-time communication rather than for large-scale, cost-effective VOD use
  • The SDKs are open, but the main system is controlled by Daily.co

Use case

daily-react makes it easier to integrate Daily.co into your React/Next applications using npm:

npm i @daily-co/daily-react

To get started with using Daily.co in a Next.js app, include DailyProvider in your app:

import { DailyProvider, useCallObject } from '@daily-co/daily-react';

function App({ children }) {
  // Create an instance of the Daily call object
  const callObject = useCallObject({});

  return <DailyProvider callObject={callObject}>{children}</DailyProvider>;
}

Node.js/FFmpeg

The combination of Node.js and FFmpeg represents the most flexible and customizable approach to video streaming. This setup gives you complete control over the video streaming process. You use Node.js as the backend to manage video tasks, and FFmpeg, which is an open source multimedia tool written in C for processing video and audio files.

FFmpeg can work with almost any media format. It can transcode, package, segment (for HLS/DASH), and stream videos, among other things. With this combination, you can create your own custom media server.

Its features include:

  • You have complete control over the entire streaming process, from getting the content to delivering it
  • FFmpeg works with many video and audio formats
  • You can convert videos into different formats and bitrates for adaptive streaming
  • You can create HLS (.m3u8 playlists, .ts or fMP4 segments) or DASH manifests and segments
  • You can process live camera feeds or other sources for real-time streaming
  • Node.js lets you control FFmpeg commands programmatically, allowing for flexible stream management

Pros of using Node.js/FFmpeg

  • It lets you customize every part of your streaming solution to meet your specific needs
  • After the initial setup, your main expenses will be for infrastructure, such as servers and bandwidth. This can be cheaper than using managed services if you have a large scale
  • You have full ownership and control of your entire streaming system
  • You can add the latest streaming enhancements or custom features that aren’t available in ready-made solutions

Cons of using Node.js/FFmpeg

  • You need extensive knowledge about video encoding, streaming protocols, and managing servers
  • Creating and keeping a strong streaming solution from the ground up is a difficult task
  • Your team is responsible for scaling, monitoring, and keeping the infrastructure running
  • To achieve very low latency, you must carefully optimize and manage your FFmpeg and network settings

Basic implementation:

const { spawn } = require('child_process');
const express = require('express');
const app = express();

app.use(express.static('public')); // Serve HLS files

const ffmpeg = spawn('ffmpeg', [
  '-i', 'input-stream-url', // e.g., RTMP or live feed
  '-c:v', 'libx264',
  '-c:a', 'aac',
  '-f', 'hls',
  '-hls_time', '4',
  '-hls_segment_filename', 'public/stream_%03d.ts',
  'public/stream.m3u8',
]);

ffmpeg.stderr.on('data', (data) => console.log(data.toString()));

app.listen(3000, () => console.log('Server running on port 3000'));

Dolby OptiView

Dolby OptiView is a video streaming solution that provides high-quality and fast streaming across different platforms. It combines the technology of Dolby.io with THEOplayer’s efficient HTML5 video player.

Dolby OptiView focuses on sports, media, entertainment, and interactive applications such as live sports betting, auctions, and online gaming. It aims to enhance real-time engagement, ensure consistent performance across platforms, and create opportunities for revenue through features like server-guided ad insertion (SGAI).

Features include:

  • WebRTC-based streaming with Dolby’s audio and video enhancements
  • Support for low-latency group calls and broadcasts
  • APIs for video, audio, and real-time communication
  • Integration with JavaScript frameworks like Next.js
  • Basic open-source SDKs for custom streaming implementations
  • Advanced noise cancellation and video optimization

Pros of using Dolby OptiView

  • Dolby’s enhancements provide superior audio and video quality
  • Open-source WebRTC components are free and can be customized
  • The low-latency streaming works well for real-time applications
  • Developers have access to well-documented APIs and SDKs
  • It supports both web and mobile platforms

Cons of using Dolby OptiView

  • Open-source components are limited; some advanced features require Dolby’s paid platform
  • Setting up WebRTC can be complicated for large deployments
  • It is not optimized for traditional streaming protocols like HLS or DASH
  • Premium features depend on Dolby’s ecosystem

Use case

You can integrate Dolby OptiView into your Next.js/React app using npm or Yarn:

npm install @dolbyio/comms-sdk-web

Here is a basic live stream implementation:

"use client";
import { useEffect, useRef, useState } from 'react';
import VoxeetSDK from '@dolbyio/comms-sdk-web';

function DolbyPlayer({ appKey, appSecret }) {
  const videoRef = useRef(null);
  const [isJoined, setIsJoined] = useState(false);

  useEffect(() => {
    // Initialize Dolby SDK
    VoxeetSDK.initialize(appKey, appSecret);
  }, [appKey, appSecret]);

  const joinStream = async () => {
    try {
      // Open session
      await VoxeetSDK.session.open({ name: 'User' });

      // Create/join conference
      const conference = await VoxeetSDK.conference.create({ alias: 'stream' });
      await VoxeetSDK.conference.join(conference);

      setIsJoined(true);
    } catch (error) {
      console.error('Join failed:', error);
    }
  };

  const leaveStream = async () => {
    try {
      await VoxeetSDK.conference.leave();
      await VoxeetSDK.session.close();
      setIsJoined(false);
    } catch (error) {
      console.error('Leave failed:', error);
    }
  };

  return (
    <div>
      <video ref={videoRef} autoPlay playsInline />
      {!isJoined ? (
        <button onClick={joinStream}>Join stream</button>
      ) : (
        <button onClick={leaveStream}>Leave stream</button>
      )}
    </div>
  );
}

export default function Home() {
  const appKey = 'your-dolby-app-key';
  const appSecret = 'your-dolby-app-secret';

  return <DolbyPlayer appKey={appKey} appSecret={appSecret} />;
}
| Tool | Latency | Cost | Best for | Scalability |
| --- | --- | --- | --- | --- |
| HLS.js | 3–10s | Free | Custom HLS players | High (client-side) |
| Video.js | 3–10s | Free | Multi-format playback | Moderate |
| Stream API | <500ms | Paid | Interactive apps | Very high (managed) |
| Daily.co | <1s | Freemium | Video conferencing | High (managed) |
| Node.js/FFmpeg | Configurable | Free (infrastructure) | DIY streaming pipelines | Self-hosted |
| Dolby OptiView | <1s | Enterprise | High-quality broadcasts | High (managed) |

Conclusion

To implement real-time video streaming in Next.js, you need to weigh several key factors: latency, scalability, development resources, and long-term maintenance. This article presented HLS.js as a strong choice for HTTP Live Streaming and demonstrated how to build production-ready streaming applications with features like adaptive bitrate streaming, secure authentication, and optimized performance. We also looked at alternatives, including Video.js, Stream, Daily.co, Node.js/FFmpeg, and Dolby OptiView.

When choosing your streaming solution, match the tool to your specific needs:

  • Use HLS.js or Video.js for reliable, widely compatible streaming
  • Choose Daily.co for real-time interaction
  • Use Stream’s API for quick development with managed infrastructure
  • Apply Node.js/FFmpeg for full customization
  • Pick Dolby OptiView for high-quality, low-latency experiences

With what you’ve learned from this article, you are ready to create engaging video streaming experiences that grow with your application and adapt to user needs!

LogRocket: Full visibility into production Next.js apps

Debugging Next.js applications can be difficult, especially when users experience issues that are hard to reproduce. If you’re interested in monitoring and tracking state, automatically surfacing JavaScript errors, and tracking slow network requests and component load time, try LogRocket.

LogRocket captures console logs, errors, network requests, and pixel-perfect DOM recordings from user sessions and lets you replay them as users saw it, eliminating guesswork around why bugs happen — compatible with all frameworks.

LogRocket’s Galileo AI watches sessions for you, instantly identifying and explaining user struggles with automated monitoring of your entire product experience.

The LogRocket Redux middleware package adds an extra layer of visibility into your user sessions. LogRocket logs all actions and state from your Redux stores.


Modernize how you debug your Next.js apps — start monitoring for free.

