
Haivision / srt

Secure, Reliable, Transport


Top Related Projects

OpenVidu Platform main repository

OBS Studio - Free and open source software for live streaming and screen recording

Mirror of https://git.ffmpeg.org/ffmpeg.git

Pure Go implementation of the WebRTC API

Janus WebRTC Server

Ultimate camera streaming application with support RTSP, RTMP, HTTP-FLV, WebRTC, MSE, HLS, MP4, MJPEG, HomeKit, FFmpeg, etc.

Quick Overview

SRT (Secure Reliable Transport) is an open-source video transport protocol and technology stack developed by Haivision. It optimizes streaming performance across unpredictable networks, providing high-quality, low-latency video transmission over the public internet. SRT is designed to address the challenges of live video streaming in various network conditions.

Pros

  • Provides low-latency, high-quality video streaming over unreliable networks
  • Offers end-to-end security with AES encryption
  • Supports various streaming protocols and integrates well with existing workflows
  • Active community and ongoing development with regular updates

Cons

  • May require additional setup and configuration compared to traditional streaming methods
  • Limited documentation for advanced use cases and troubleshooting
  • Performance can vary depending on network conditions and hardware capabilities
  • Learning curve for developers new to video streaming technologies

Code Examples

  1. Creating an SRT socket and connecting to a server:
#include <srt/srt.h>
#include <netinet/in.h>  // sockaddr_in
#include <arpa/inet.h>   // inet_pton, htons

int main() {
    srt_startup();  // initialize the SRT library before any other call

    SRTSOCKET client = srt_create_socket();
    sockaddr_in sa;
    sa.sin_family = AF_INET;
    sa.sin_port = htons(9000);
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

    if (srt_connect(client, (sockaddr*)&sa, sizeof sa) == SRT_ERROR) {
        // Handle connection error (srt_getlasterror_str() gives the reason)
    }
    // ... use the socket for streaming
    srt_close(client);
    srt_cleanup();
    return 0;
}
  2. Setting SRT options:
#include <srt/srt.h>

int main() {
    srt_startup();

    SRTSOCKET sock = srt_create_socket();
    int latency = 120;  // receiver latency budget in milliseconds
    srt_setsockopt(sock, 0, SRTO_LATENCY, &latency, sizeof latency);

    int payloadSize = 1316;  // typical MPEG-TS payload (7 x 188 bytes)
    srt_setsockopt(sock, 0, SRTO_PAYLOADSIZE, &payloadSize, sizeof payloadSize);
    // ... continue with socket configuration and usage
    srt_close(sock);
    srt_cleanup();
    return 0;
}
  3. Sending data over an SRT socket:
#include <srt/srt.h>
#include <cstring>  // strlen

int main() {
    SRTSOCKET sock = SRT_INVALID_SOCK;  // ... create and connect the socket as in example 1
    const char* data = "Hello, SRT!";
    int dataSize = (int)strlen(data);

    int result = srt_send(sock, data, dataSize);
    if (result == SRT_ERROR) {
        // Handle error (srt_getlasterror_str() gives the reason)
    }
    // ... continue with streaming
    return 0;
}
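
  4. Receiving data over an SRT socket (not part of the original examples; a minimal sketch assuming an already connected or accepted socket):
#include <srt/srt.h>

int main() {
    SRTSOCKET sock = SRT_INVALID_SOCK;  // ... create and connect (or accept) the socket first
    char buffer[1500];

    int received = srt_recv(sock, buffer, (int)sizeof buffer);
    if (received == SRT_ERROR) {
        // Handle error (srt_getlasterror_str() gives the reason)
    }
    // ... process the received bytes
    return 0;
}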

Getting Started

To get started with SRT:

  1. Clone the repository:

    git clone https://github.com/Haivision/srt.git
    
  2. Build SRT:

    cd srt
    ./configure
    make
    sudo make install
    
  3. Include SRT in your project:

    #include <srt/srt.h>
    
  4. Initialize SRT in your application:

    srt_startup();
    // ... your SRT code here
    srt_cleanup();
    

Remember to link against the SRT library when compiling your application.
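
For example, on a Linux system where the library and headers were installed to the default system paths (paths, flags, and the presence of a pkg-config file depend on your installation), a build command might look like:

    g++ -o myapp myapp.cpp -lsrt -lpthread
    # or, if a pkg-config file for SRT is available:
    g++ -o myapp myapp.cpp $(pkg-config --cflags --libs srt)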

Competitor Comparisons

OpenVidu Platform main repository

Pros of OpenVidu

  • Provides a complete WebRTC-based video conferencing solution
  • Offers easy-to-use APIs and SDKs for multiple platforms
  • Includes features like screen sharing, recording, and custom layouts

Cons of OpenVidu

  • Limited to WebRTC protocol, which may not be ideal for all use cases
  • Potentially higher latency compared to SRT in some scenarios
  • Less flexibility for low-level network optimizations

Code Comparison

OpenVidu (JavaScript):

var OV = new OpenVidu();
var session = OV.initSession();
session.connect(token)
    .then(() => console.log("Connected"))
    .catch(error => console.error(error));

SRT (C++):

SRTSOCKET sock = srt_create_socket();
sockaddr_in sa = {};
sa.sin_family = AF_INET;
sa.sin_port = htons(9000);
inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);
srt_connect(sock, (sockaddr*)&sa, sizeof sa);
char data[1500];
srt_recv(sock, data, sizeof(data));

OpenVidu focuses on providing a high-level API for video conferencing, while SRT offers low-level network transport functionality. OpenVidu is more suitable for quickly implementing video chat applications, whereas SRT is better for custom streaming solutions requiring fine-grained control over network performance.

OBS Studio - Free and open source software for live streaming and screen recording

Pros of obs-studio

  • More comprehensive feature set for video recording and live streaming
  • Larger community and ecosystem of plugins
  • Cross-platform support (Windows, macOS, Linux)

Cons of obs-studio

  • Larger codebase and potentially more complex to contribute to
  • Focused on end-user application rather than a specific protocol

Code Comparison

obs-studio (C++):

bool OBSBasic::StreamingActive()
{
    if (!outputHandler)
        return false;
    return outputHandler->StreamingActive();
}

srt (C):

int srt_connect(SRTSOCKET u, const struct sockaddr* name, int namelen)
{
    SRTSOCKET_CHECK(u, CUDT::connect, -1);
    return CUDT::connect(u, name, namelen);
}

Summary

obs-studio is a full-featured streaming and recording software, while srt focuses on the Secure Reliable Transport protocol. obs-studio offers a more comprehensive solution for content creators but may be more complex. srt provides a specialized protocol implementation that can be integrated into various applications. The code examples show obs-studio's application-level logic versus srt's lower-level network operations.

45,445

Mirror of https://git.ffmpeg.org/ffmpeg.git

Pros of FFmpeg

  • Comprehensive multimedia framework with extensive codec support
  • Large, active community and extensive documentation
  • Versatile command-line interface for various multimedia tasks

Cons of FFmpeg

  • Steeper learning curve due to its vast feature set
  • Can be resource-intensive for complex operations

Code Comparison

FFmpeg example (video transcoding):

ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 22 -c:a copy output.mp4

SRT example (streaming):

srt_startup();
SRTSOCKET sock = srt_create_socket();
sockaddr_in sa = {};
sa.sin_family = AF_INET;
sa.sin_port = htons(1234);
inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);
srt_connect(sock, (sockaddr*)&sa, sizeof sa);
srt_send(sock, buffer, size);

Key Differences

  • FFmpeg is a comprehensive multimedia framework, while SRT focuses on low-latency streaming
  • SRT is designed specifically for reliable data transport over unpredictable networks
  • FFmpeg offers a wide range of audio/video processing capabilities, whereas SRT specializes in secure, reliable streaming

Use Cases

  • FFmpeg: Transcoding, format conversion, video editing, and general multimedia processing
  • SRT: Live video streaming, broadcast contribution, and remote production scenarios

Both projects serve different primary purposes but can be complementary in certain streaming workflows.

13,642

Pure Go implementation of the WebRTC API

Pros of webrtc

  • Designed for real-time communication with built-in support for audio/video
  • Implements the full WebRTC stack, enabling peer-to-peer connections
  • Active development with frequent updates and a large community

Cons of webrtc

  • More complex to implement and use compared to SRT
  • Higher overhead due to additional features and protocols
  • May have higher latency in some scenarios

Code comparison

SRT (C++):

SRTSOCKET client = srt_create_socket();
sockaddr_in sa = {};  // filled with AF_INET, 127.0.0.1:9000 (setup elided)
srt_connect(client, (sockaddr*)&sa, sizeof sa);
srt_send(client, buffer, len);

webrtc (Go):

peerConnection, _ := webrtc.NewPeerConnection(config)
dataChannel, _ := peerConnection.CreateDataChannel("data", nil)
dataChannel.OnOpen(func() {
    dataChannel.Send([]byte("Hello"))
})

Summary

SRT focuses on low-latency video streaming over unreliable networks, while webrtc provides a comprehensive solution for real-time communication including audio, video, and data channels. SRT is simpler to implement but has fewer features, whereas webrtc offers more flexibility but with increased complexity. The choice between them depends on specific project requirements and use cases.

Janus WebRTC Server

Pros of Janus-Gateway

  • Versatile WebRTC server supporting various protocols and use cases
  • Modular architecture allowing easy extension and customization
  • Active community and regular updates

Cons of Janus-Gateway

  • Higher complexity and steeper learning curve
  • Potentially higher resource usage due to its comprehensive feature set

Code Comparison

Janus-Gateway (C plugin example):

static void my_plugin_init(janus_callbacks *callback, const char *config_path) {
    JANUS_LOG(LOG_INFO, "My plugin initialized!\n");
    // Plugin initialization code
}

SRT (C++ usage example):

srt_startup();
SRTSOCKET sock = srt_create_socket();
sockaddr_in sa = {};  // filled with AF_INET, 127.0.0.1:1234 (setup elided)
srt_connect(sock, (sockaddr*)&sa, sizeof sa);
// SRT connection and data transfer code

Summary

Janus-Gateway is a comprehensive WebRTC server with a focus on flexibility and extensibility, while SRT is a specialized protocol for low-latency video streaming. Janus-Gateway offers a wider range of features but may be more complex to set up and use. SRT, on the other hand, is more focused on its specific use case and may be simpler to implement for video streaming applications.

4,233

Ultimate camera streaming application with support RTSP, RTMP, HTTP-FLV, WebRTC, MSE, HLS, MP4, MJPEG, HomeKit, FFmpeg, etc.

Pros of go2rtc

  • Supports multiple streaming protocols (RTSP, WebRTC, MSE, HLS, etc.)
  • Written in Go, offering good performance and cross-platform compatibility
  • Includes a web interface for easy configuration and management

Cons of go2rtc

  • Less mature and less widely adopted compared to SRT
  • May have fewer advanced features for professional broadcasting
  • Limited documentation and community support

Code Comparison

SRT (C++):

int main(int argc, char* argv[])
{
    srt_startup();
    SRTSOCKET sock = srt_create_socket();
    // ... (connection logic)
    srt_close(sock);
    srt_cleanup();
    return 0;
}

go2rtc (Go):

func main() {
    s := server.NewServer()
    s.AddHandler(rtsp.Handler)
    s.AddHandler(webrtc.Handler)
    s.Run()
}

Summary

SRT is a mature, industry-standard protocol for low-latency video streaming, while go2rtc is a more versatile streaming server supporting multiple protocols. SRT focuses on reliable transport over unreliable networks, whereas go2rtc offers a broader range of streaming options and an integrated web interface. The choice between them depends on specific project requirements and the desired level of protocol support.


README

Secure Reliable Transport (SRT) Protocol

About SRT | Features | Getting Started | Build Instructions | Sample Apps and Tools | Contribute | License | Releases

SRT

License: MPLv2.0 | Latest release | Quality Gate Status | codecov | Build status: Linux and macOS, Windows

Packages: Ubuntu 23.04 | Fedora 37 | Debian | Homebrew | Vcpkg | ConanCenter

What is SRT?

Secure Reliable Transport (SRT) is a transport protocol for ultra low (sub-second) latency live video and audio streaming, as well as for generic bulk data transfer [1]. SRT is available as an open-source technology with the code on GitHub, a published Internet Draft, and a growing community of SRT users.

SRT is applied to contribution and distribution endpoints as part of a video stream workflow to deliver the best quality and lowest latency video at all times.

  • Secure: Encrypts video streams
  • Reliable: Recovers from severe packet loss
  • Transport: Dynamically adapts to changing network conditions

In live streaming configurations, the SRT protocol maintains a constant end-to-end latency. This allows the live stream's signal characteristics to be recreated on the receiver side, reducing the need for buffering. As packets are streamed from source to destination, SRT detects and adapts to real-time network conditions between the two endpoints. It helps compensate for jitter and bandwidth fluctuations due to congestion over noisy networks.
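
As an illustration (not from the original README), the live transmission profile and the fixed receiver latency described above are exposed through socket options in the C API; a minimal sketch:

SRTSOCKET sock = srt_create_socket();

// Live mode is the default transmission profile, but it can be set explicitly.
SRT_TRANSTYPE tt = SRTT_LIVE;
srt_setsockopt(sock, 0, SRTO_TRANSTYPE, &tt, sizeof tt);

// Fix the receiver-side latency budget (in milliseconds) used to absorb jitter
// and leave time for retransmissions.
int rcvLatency = 200;
srt_setsockopt(sock, 0, SRTO_RCVLATENCY, &rcvLatency, sizeof rcvLatency);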

SRT implements AES encryption to protect the payload of the media streams, and offers various error recovery mechanisms for minimizing the packet loss that is typical of Internet connections, of which Automatic Repeat reQuest (ARQ) is the primary method. With ARQ, when a receiver detects that a packet is missing it sends an alert to the sender requesting retransmission of this missing packet. Forward Error Correction (FEC) and Connection Bonding, which adds seamless stream protection and hitless failover, are also supported by the protocol.

To learn more about the protocol, subscribe to the Innovation Labs Blog on Medium.

To ask a question, join the conversation in the #development channel on Slack.

Features


Pristine Quality and Reliability

No matter how unreliable your network, SRT can recover from severe packet loss and jitter, ensuring the integrity and quality of your video streams.

Low Latency

SRT’s stream error correction is configurable to accommodate a user’s deployment conditions. Leveraging real-time IP communications development to extend traditional network error recovery practices, SRT delivers media with significantly lower latency than TCP/IP, while offering the speed of UDP transmission with greatly improved reliability.

Content Agnostic

Unlike some other streaming protocols that only support specific video and audio formats, SRT is payload agnostic. Because SRT operates at the network transport level, acting as a wrapper around your content, it can transport any type of video format, codec, resolution, or frame rate.

Easy Firewall Traversal with Rendezvous Mode

The handshaking process used by SRT supports outbound connections without the potential risks and dangers of permanent exterior ports being opened in a firewall, thereby maintaining corporate LAN security policies and minimizing the need for IT intervention.
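
As a rough illustration (not from the original README), both peers can use rendezvous mode by binding to a local port and calling srt_rendezvous() toward each other at roughly the same time; the addresses and ports below are hypothetical:

SRTSOCKET sock = srt_create_socket();

sockaddr_in local = {};
local.sin_family = AF_INET;
local.sin_port = htons(5000);               // local port (often the same on both peers)
local.sin_addr.s_addr = INADDR_ANY;

sockaddr_in remote = {};
remote.sin_family = AF_INET;
remote.sin_port = htons(5000);
inet_pton(AF_INET, "198.51.100.7", &remote.sin_addr);  // hypothetical peer address

// Both sides call this; the simultaneous outbound handshakes traverse the firewall.
srt_rendezvous(sock, (sockaddr*)&local, sizeof local, (sockaddr*)&remote, sizeof remote);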

AES Encryption

Using 128/192/256-bit AES encryption trusted by governments and organizations around the world, SRT ensures that valuable content is protected end-to-end from contribution to distribution so that no unauthorized parties can listen.
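
As a hedged example (not from the original README), encryption is enabled on a socket by setting a shared passphrase and, optionally, the key length; the peer must use the same passphrase:

SRTSOCKET sock = srt_create_socket();

// Request a 256-bit key (16, 24, or 32 bytes for AES-128/192/256).
int keyLen = 32;
srt_setsockopt(sock, 0, SRTO_PBKEYLEN, &keyLen, sizeof keyLen);

// Hypothetical passphrase; both endpoints derive the keying material from it.
const char* passphrase = "a-strong-shared-secret";
srt_setsockopt(sock, 0, SRTO_PASSPHRASE, passphrase, (int)strlen(passphrase));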

Forward Error Correction (FEC) and Packet Filter API

SRT 1.4 sees the introduction of the packet filter API. This mechanism allows custom processing to be performed on network packets on the sender side before they are sent, and on the receiver side once received from the network. The API allows users to write their own plugin, thereby extending the SRT protocol's capabilities even further with all kinds of different packet filtering. Users can manipulate the resulting packet filter data in any way, such as for custom encryption, packet inspection, or accessing data before it is sent.

The first plugin created as an example of what can be achieved with the packet filter API is for Forward Error Correction (FEC) which, in certain use cases, can offer slightly lower latency than Automatic Repeat reQuest (ARQ). This plugin allows three different modes:

  • ARQ only – retransmits lost packets,
  • FEC only – provides the overhead needed for FEC recovery on the receiver side,
  • FEC and ARQ – retransmits lost packets that FEC fails to recover.
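
For illustration (not from the original README), the built-in FEC filter is configured through the packet filter socket option with a short config string; the matrix dimensions below are arbitrary examples:

SRTSOCKET sock = srt_create_socket();

// Enable the FEC packet filter with a 10x10 matrix; overhead vs. recovery strength
// depends on the chosen dimensions and the expected loss pattern.
const char* fecConfig = "fec,cols:10,rows:10";
srt_setsockopt(sock, 0, SRTO_PACKETFILTER, fecConfig, (int)strlen(fecConfig));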

Connection Bonding

Similar to SMPTE-2022-7 over managed networks, Connection Bonding adds seamless stream protection and hitless failover to the SRT protocol. This technology relies on more than one IP network path to prevent disruption to live video streams in the event of network congestion or outages, maintaining continuity of service.

This is accomplished using the socket groups introduced in SRT v1.5. The general concept of socket groups means having a group that contains multiple sockets, where one operation for sending one data signal is applied to the group. Single sockets inside the group will take over this operation and do what is necessary to deliver the signal to the receiver.

Two modes are supported:

  • Broadcast - In Broadcast mode, data is sent redundantly over all the member links in a group. If one of the links fails or experiences network jitter and/or packet loss, the missing data will be received over another link in the group. Redundant packets are simply discarded at the receiver side.

  • Main/Backup - In Main/Backup mode, only one (main) link at a time is used for data transmission while other (backup) connections are on standby to ensure the transmission will continue if the main link fails. The goal of Main/Backup mode is to identify a potential link break before it happens, thus providing a time window within which to seamlessly switch to one of the backup links.
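
As a rough sketch (not from the original README, and assuming the library was built with bonding support enabled), a broadcast group connects over several network paths at once and is then used like a single socket:

// Create a broadcast group and connect it over two network paths.
SRTSOCKET group = srt_create_group(SRT_GTYPE_BROADCAST);

sockaddr_in link1 = {}, link2 = {};
// ... fill link1 and link2 with the destination address reachable over each path

SRT_SOCKGROUPCONFIG targets[2] = {
    srt_prepare_endpoint(NULL, (sockaddr*)&link1, sizeof link1),
    srt_prepare_endpoint(NULL, (sockaddr*)&link2, sizeof link2)
};
srt_connect_group(group, targets, 2);

// Send once; in Broadcast mode the payload travels redundantly over all member links.
srt_send(group, buffer, size);  // buffer/size: application payload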

Access Control (Stream ID)

Access Control enables the upstream application to assign a Stream ID to individual SRT streams. By using a unique Stream ID, either automatically generated or customized, the upstream application can send multiple SRT streams to a single IP address and UDP port. The Stream IDs can then be used by a receiver to identify and differentiate between ingest streams, apply user password access methods, and in some cases even apply automation based on the naming of the Stream ID. For example, contribution could be sent to a video production workflow and monitoring to a monitoring service.

For broadcasters, Stream ID is key to replacing RTMP for ingesting video streams, especially HEVC/H.265 content, into cloud service or CDNs that have a single IP socket (address + port) open for incoming video.
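
As an illustration (not from the original README), the caller assigns the Stream ID through a socket option before connecting; the ID below is hypothetical and uses the recommended key/value syntax:

SRTSOCKET sock = srt_create_socket();

// The receiver can parse this ID to route the stream or apply access control.
const char* streamId = "#!::r=live/camera1,m=publish";
srt_setsockopt(sock, 0, SRTO_STREAMID, streamId, (int)strlen(streamId));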

Getting Started with SRT

  • The SRT API: Reference documentation for the SRT library API
  • IETF Internet Draft: The SRT Protocol Internet Draft
  • Sample Apps: Instructions for using the test apps (srt-live-transmit, srt-file-transmit, etc.)
  • SRT Technical Overview: Early draft technical overview (precursor to the Internet Draft)
  • SRT Deployment Guide: A comprehensive overview of the protocol with deployment guidelines
  • SRT CookBook: Development notes on the SRT protocol
  • Innovation Labs Blog: The blog on Medium with SRT-related technical articles
  • SRTLab YouTube Channel: Technical YouTube channel with useful videos
  • Slack: Channels to get the latest updates and ask questions (Join SRT Alliance on Slack)

Additional Documentation

Build Instructions

Linux (Ubuntu/CentOS) | Windows | macOS | iOS | Android | Package Managers

Requirements

  • C++03 or above compliant compiler.
  • CMake 2.8.12 or above as a build system.
  • OpenSSL 1.1 to enable encryption, otherwise build with -DENABLE_ENCRYPTION=OFF.
  • Multithreading is provided by either of the following:
    • C++11: standard library (enabled with the -DENABLE_STDCXX_SYNC=ON CMake option),
    • C++03: Pthreads (for POSIX systems it's built in, for Windows there is a ported library).
  • Tcl 8.5 is optional and is used by the ./configure script. Otherwise, use CMake directly.

Build Options

For detailed descriptions of the build system and options, please read the SRT Build Options document.
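
As a hedged example of a typical CMake workflow (the options shown are those mentioned in the Requirements above; adjust them to your environment):

    git clone https://github.com/Haivision/srt.git
    cd srt && mkdir build && cd build
    cmake .. -DENABLE_ENCRYPTION=ON -DENABLE_STDCXX_SYNC=ON
    make
    sudo make install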

Sample Applications and Tools

This repository provides sample applications and code examples that demonstrate the usage of the SRT library API. Among them are srt-live-transmit, srt-file-transmit, and other applications. The respective documentation can be found here. Note that all samples are provided for instructional purposes and should not be used in a production environment.
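
For example (an illustrative invocation; the exact options are described in the apps documentation), srt-live-transmit relays a stream between a source URI and a target URI:

    # Receive MPEG-TS on local UDP port 1234 and send it out as an SRT caller
    ./srt-live-transmit udp://:1234 srt://remote.example.com:4201

    # Listen for an incoming SRT stream and forward it to a local UDP consumer
    ./srt-live-transmit srt://:4201 udp://127.0.0.1:1234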

The srt-xtransmit utility is actively used for internal testing and performance evaluation. Among other features, it supports dummy payload generation, traffic routing, and connection bonding. Additional details are available in the srt-xtransmit repo itself.

Python tools that might be useful during development are:

  • srt-stats-plotting - A script designed to plot graphs based on SRT .csv statistics.
  • lib-tcpdump-processing - A library designed to process .pcap(ng) tcpdump or Wireshark trace files and extract SRT packets of interest for further analysis.
  • lib-srt-utils - A Python library containing supporting code for running SRT tests based on an experiment configuration.

Contributing

Anyone is welcome to contribute. If you decide to get involved, please take a moment to review the contribution guidelines.

For information on contributing to the Internet Draft or to submit issues, please go to the following repo. The repo for contributing to the SRT CookBook can be found here.

License

By contributing code to the SRT project, you agree to license your contribution under the MPLv2.0 License.

Release History

Footnotes

  1. The term “live streaming” refers to continuous data transmission (MPEG-TS or equivalent) with latency management. Live streaming based on segmentation and transmission of files like in the HTTP Live Streaming (HLS) protocol (as described in RFC8216) is not part of this use case. File transmission in either message or buffer mode should be considered in this case. See Section 7. Best Practices and Configuration Tips for Data Transmission via SRT of the Internet Draft for details. Note that SRT is content agnostic, meaning that any type of data can be transmitted via its payload.