Top Related Projects
- TensorRT -- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
- nvidia-docker -- Build and run Docker containers leveraging NVIDIA GPUs.
- jetson-inference -- Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
Quick Overview
The NVIDIA-AI-IOT/deepstream_python_apps repository contains Python bindings and sample applications for NVIDIA DeepStream SDK. It provides a set of tools and examples for building AI-powered video analytics applications using DeepStream, leveraging NVIDIA GPUs for high-performance processing of video streams.
Pros
- Enables rapid development of video analytics applications using Python
- Leverages NVIDIA GPU acceleration for efficient processing of multiple video streams
- Provides a wide range of sample applications and use cases
- Integrates well with other NVIDIA AI tools and frameworks
Cons
- Requires NVIDIA hardware for optimal performance
- Limited documentation compared to some other video processing libraries
- Steeper learning curve for developers not familiar with NVIDIA's ecosystem
- May have compatibility issues with certain Python versions or operating systems
Code Examples
- Initializing a DeepStream pipeline:
import sys
sys.path.append('../')
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
# Initialize GStreamer
Gst.init(None)
# Create Pipeline element
pipeline = Gst.Pipeline()
- Adding a video source to the pipeline:
# Create Source element
source = Gst.ElementFactory.make("filesrc", "file-source")
source.set_property('location', 'sample_video.mp4')
# Add source to pipeline
pipeline.add(source)
- Adding an NVIDIA inference element:
# Create nvinfer element
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property('config-file-path', "config_infer_primary.txt")
# Add pgie to pipeline
pipeline.add(pgie)
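- Linking elements and running the pipeline (a minimal sketch to round out the snippets above; a real DeepStream pipeline also needs decode and nvstreammux elements between the file source and nvinfer, omitted here for brevity):
# Create a sink and assemble the pipeline
sink = Gst.ElementFactory.make("nveglglessink", "video-output")
pipeline.add(sink)
source.link(pgie)  # in practice: source -> decoder -> streammux -> pgie
pgie.link(sink)
# Run the GLib main loop until EOS or an error is reported on the bus
loop = GObject.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)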
Getting Started
- Install NVIDIA DeepStream SDK and its dependencies.
- Clone the repository:
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
- Navigate to the desired sample application directory:
cd deepstream_python_apps/apps/deepstream-test1
- Run the sample application:
python3 deepstream_test_1.py <input_video_file>
Note: Ensure you have the necessary NVIDIA drivers and CUDA toolkit installed on your system before running DeepStream applications.
Competitor Comparisons
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
Pros of TensorRT
- Highly optimized for NVIDIA GPUs, offering superior performance
- Supports a wide range of deep learning frameworks
- Provides advanced network optimization techniques like layer fusion and precision calibration
Cons of TensorRT
- Steeper learning curve compared to DeepStream Python Apps
- Less focus on end-to-end video analytics pipeline
- May require more manual optimization for specific use cases
Code Comparison
TensorRT:
import tensorrt as trt
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
DeepStream Python Apps:
import gi, platform
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds  # DeepStream MetaData bindings
Gst.init(None)
is_aarch64 = platform.uname()[4] == 'aarch64'
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property('config-file-path', "config_infer_primary.txt")
The code snippets demonstrate the initial setup for each framework. TensorRT focuses on building and optimizing neural networks, while DeepStream Python Apps emphasizes video pipeline construction using GStreamer elements for inference.
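For context, the TensorRT snippet above would typically continue by parsing a model into the network definition and building an engine. A rough sketch using the standard ONNX path (the model file name is illustrative):
# Parse an ONNX model into the network definition
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:  # illustrative model file
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
# Build a serialized engine that can be deployed for inference
config = builder.create_builder_config()
serialized_engine = builder.build_serialized_network(network, config)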
Build and run Docker containers leveraging NVIDIA GPUs
Pros of nvidia-docker
- Provides GPU acceleration for Docker containers, enabling efficient use of NVIDIA GPUs
- Simplifies deployment of GPU-accelerated applications across different environments
- Supports a wide range of NVIDIA GPU architectures and driver versions
Cons of nvidia-docker
- Limited to Docker containerization, not applicable for other container runtimes
- Requires additional setup and configuration compared to standard Docker installations
- May have compatibility issues with certain applications or frameworks
Code Comparison
deepstream_python_apps:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
Gst.init(None)
pipeline = Gst.parse_launch("nvarguscamerasrc ! nvvidconv ! nveglglessink")
pipeline.set_state(Gst.State.PLAYING)
nvidia-docker:
docker run --gpus all nvidia/cuda:11.0-base nvidia-smi
Summary
nvidia-docker focuses on enabling GPU acceleration for Docker containers, while deepstream_python_apps is specifically designed for building AI and video analytics applications using NVIDIA DeepStream SDK. nvidia-docker provides a more general-purpose solution for GPU-accelerated containerization, whereas deepstream_python_apps offers specialized tools for video processing and analysis using NVIDIA technologies.
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
Pros of jetson-inference
- Simpler and more straightforward for beginners
- Focuses specifically on Jetson platforms
- Includes pre-built Docker containers for easy setup
Cons of jetson-inference
- Limited to specific use cases and models
- Less flexibility for complex pipelines
- Fewer options for customization and integration with other tools
Code Comparison
jetson-inference:
import jetson.inference
import jetson.utils
net = jetson.inference.detectNet("ssd-mobilenet-v2")
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.videoOutput("display://0")
deepstream_python_apps:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
Gst.init(None)
pipeline = Gst.parse_launch("nvarguscamerasrc ! nvvidconv ! nvinfer config-file-path=config.txt ! nvvideoconvert ! nvdsosd ! nveglglessink")
pipeline.set_state(Gst.State.PLAYING)
The jetson-inference code is more concise and easier to understand for simple tasks, while deepstream_python_apps offers more complex pipeline creation for advanced use cases.
README
DeepStream Python Apps
This repository contains Python bindings and sample applications for the DeepStream SDK.
SDK version supported: 7.0
This release only supports Ubuntu 22.04 for DeepStreamSDK 7.0 with Python 3.10 and gst-python 1.20.3. Support for Ubuntu 20.04 with DeepStreamSDK 6.3 and Python 3.8 is now deprecated.
The bindings sources along with build instructions are available under bindings! We include one guide for contributing to bindings and another guide for advanced use-cases such as writing bindings for custom data structures.
Please report any issues or bugs on the DeepStream SDK Forums. This enables the DeepStream community to find help at a central location.
Setup
Once you have the DeepStreamSDK prerequisites and DeepStreamSDK installed on the system, navigate to the <DS_ROOT>/sources/ directory, which is /opt/nvidia/deepstream/deepstream/sources/, and git clone the deepstream_python_apps repo there.
The latest bindings can be installed from the release section. You can also build the bindings from source using the instructions in the bindings readme if needed.
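After installing the bindings, a quick sanity check (not part of the official instructions) is to confirm that the pyds module imports cleanly:
# Verify the DeepStream Python bindings are importable
import pyds
print(pyds.__file__)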
Python Bindings
DeepStream pipelines can be constructed using Gst Python, the GStreamer framework's Python bindings. For accessing DeepStream MetaData, Python bindings are provided as part of this repository. This module is generated using Pybind11.
These bindings support a Python interface to the MetaData structures and functions. Usage of this interface is documented in the HOW-TO Guide and demonstrated in the sample applications.
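As an illustration of the MetaData interface, the sample applications typically attach a pad probe and walk the frame list from the batch-level metadata. A condensed sketch following the samples' iteration pattern:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # Retrieve the NvDsBatchMeta attached to the Gst buffer
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print("frame", frame_meta.frame_num, "objects:", frame_meta.num_obj_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK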
Python Bindings Breaking API Change
The binding for the function alloc_nvds_event_msg_meta() now expects a NvDsUserMeta pointer with which the NvDsEventMsgMeta is associated. Please refer to the deepstream-test4 sample and bindschema.cpp for reference.
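A condensed sketch of the new calling convention, following deepstream-test4 (batch_meta, frame_meta, and obj_meta are assumed to come from a pad probe as in the sketch above):
# Acquire a user meta buffer from the batch pool, then allocate the
# event message meta against it (the new required argument)
user_event_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
msg_meta = pyds.alloc_nvds_event_msg_meta(user_event_meta)
msg_meta.bbox.top = int(obj_meta.rect_params.top)
user_event_meta.user_meta_data = msg_meta
user_event_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
pyds.nvds_add_user_meta_to_frame(frame_meta, user_event_meta)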
Sample Applications
Sample applications provided here demonstrate how to work with DeepStream pipelines using Python.
The sample applications require MetaData Bindings to work.
To run the sample applications or write your own, please consult the HOW-TO Guide.
We currently provide the following sample applications:
- deepstream-test1 -- 4-class object detection pipeline, also demonstrates support for new nvstreammux
- deepstream-test2 -- 4-class object detection, tracking and attribute classification pipeline
- deepstream-test3 -- multi-stream pipeline performing 4-class object detection, also supports triton inference server, no-display mode, file-loop and silent mode
- deepstream-test4 -- msgbroker for sending analytics results to the cloud
- deepstream-imagedata-multistream -- multi-stream pipeline with access to image buffers
- deepstream-ssd-parser -- SSD model inference via Triton server with output parsing in Python
- deepstream-test1-usbcam -- deepstream-test1 pipeline with USB camera input
- deepstream-test1-rtsp-out -- deepstream-test1 pipeline with RTSP output, demonstrates adding software encoder option to support Jetson Orin Nano
- deepstream-opticalflow -- optical flow and visualization pipeline with flow vectors returned in NumPy array
- deepstream-segmentation -- segmentation and visualization pipeline with segmentation mask returned in NumPy array
- deepstream-nvdsanalytics -- multistream pipeline with analytics plugin
- runtime_source_add_delete -- add/delete source streams at runtime
- deepstream-imagedata-multistream-redaction -- multi-stream pipeline with face detection and redaction
- deepstream-rtsp-in-rtsp-out -- multi-stream pipeline with RTSP input/output - has command line option "--rtsp-ts" for configuring the RTSP source to attach the timestamp rather than the streammux
- deepstream-preprocess-test -- multi-stream pipeline using nvdspreprocess plugin with custom ROIs
- deepstream-demux-multi-in-multi-out -- multi-stream pipeline using the nvstreamdemux plugin to generate separate buffer outputs
- deepstream-imagedata-multistream-cupy -- access imagedata buffer from GPU in a multistream source as CuPy array - x86 only
- deepstream-segmask -- access and interpret segmentation mask information from NvOSD_MaskParams
- deepstream-custom-binding-test -- demonstrate usage of NvDsUserMeta for attaching custom data structure - see also the Custom User Meta Guide
Detailed application information is provided in each application's subdirectory under apps.