The Video Intelligence Streaming API supports standard live streaming protocols such as RTSP, RTMP, and HLS. The AIStreamer ingestion pipeline behaves as a streaming proxy, converting from these live streaming protocols to a bidirectional streaming gRPC connection.
To support live streaming protocols, the Video Intelligence API uses the GStreamer open media framework.
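Before running the pipeline, it can help to confirm that GStreamer 1.x and the plugins used in the examples below are installed. This is a quick sanity check, not a required step; the element names match the pipelines shown in Step 3.
$ gst-launch-1.0 --version
$ gst-inspect-1.0 rtmpsrc    # inspect one element at a time; repeat for souphttpsrc, hlsdemux, rtspsrc, flvmux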
Step 1: Create a named pipe
A named pipe is created to communicate between GStreamer and the AIStreamer ingestion proxy. The two processes run inside the same Docker container.
- path_to_pipe: the file path in your local environment. For example, /user/local/Desktop/
- name_of_pipe: the name of the pipe you provide. For example, my-football-game
$ export PIPE_NAME=/path_to_pipe/name_of_pipe
$ mkfifo $PIPE_NAME
Example: /user/local/Desktop/my-football-game
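A minimal sketch using that example path and name; the ls check is an addition here, not part of the required steps.
$ export PIPE_NAME=/user/local/Desktop/my-football-game
$ mkfifo $PIPE_NAME
$ ls -l $PIPE_NAME    # a leading "p" in the file mode confirms the named pipe exists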
Step 2: Run AIStreamer ingestion proxy
These C++ examples, available for your use, include a single binary that supports all features. To build the examples, follow the build instructions below.
The following example shows how to use the binary from the command line.
$ export GOOGLE_APPLICATION_CREDENTIALS=/path_to_credential/credential_json
$ export CONFIG=/path_to_config/config_json
$ export PIPE_NAME=/path_to_pipe/name_of_pipe
$ export TIMEOUT=3600
$ ./streaming_client_main --alsologtostderr --endpoint "dns:///alpha-videointelligence.googleapis.com" \
    --video_path=$PIPE_NAME --use_pipe=true --config=$CONFIG --timeout=$TIMEOUT
$GOOGLE_APPLICATION_CREDENTIALS specifies the file path of the JSON file containing your service account key. You can find an example configuration file ($CONFIG) on GitHub.
Make sure to set the correct timeout flag on the command line. If you need to stream 1 hour of video, the timeout value should be at least 3600 seconds.
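As a concrete sketch, the following uses the example pipe from Step 1 and a two-hour stream; the credential and config paths are placeholders, not files this guide provides.
$ export GOOGLE_APPLICATION_CREDENTIALS=$HOME/keys/service_account.json    # placeholder path
$ export CONFIG=$HOME/aistreamer/config.json    # placeholder path
$ export PIPE_NAME=/user/local/Desktop/my-football-game
$ export TIMEOUT=7200    # 2 hours of video requires at least 7200 seconds
$ ./streaming_client_main --alsologtostderr --endpoint "dns:///alpha-videointelligence.googleapis.com" \
    --video_path=$PIPE_NAME --use_pipe=true --config=$CONFIG --timeout=$TIMEOUT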
Step 3: Run GStreamer pipeline
GStreamer supports multiple live streaming protocols including but not limited to:
HTTP Live Streaming (HLS)
Real-time Streaming Protocol (RTSP)
Real-time Protocol (RTP)
Real-time Messaging Protocol (RTMP)
WebRTC
Streaming from a webcam (see the sketch after the RTMP example below)
The Video Intelligence API uses the GStreamer pipeline to convert from these live streaming protocols to a decodable video stream, and writes the stream into the named pipe created in Step 1.
The following examples demonstrate how to use the live streaming library with the HLS, RTSP, and RTMP protocols.
HTTP Live Streaming (HLS)
$ export PIPE_NAME=/path_to_pipe/name_of_pipe
$ export HLS_SOURCE=http://abc.def/playlist.m3u8
$ gst-launch-1.0 -v souphttpsrc location=$HLS_SOURCE ! hlsdemux ! filesink location=$PIPE_NAME
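If the ingestion proxy receives no data, one way to isolate the problem (a debugging suggestion, not a documented step) is to point the same pipeline at a regular file and confirm the HLS source actually produces segments:
$ gst-launch-1.0 -v souphttpsrc location=$HLS_SOURCE ! hlsdemux ! filesink location=/tmp/hls_test.ts
# stop with Ctrl+C after a few seconds and play /tmp/hls_test.ts locally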
Real-time Streaming Protocol (RTSP)
$ export PIPE_NAME=/path_to_pipe/name_of_pipe
$ export RTSP_SOURCE=rtsp://ip_addr:port/stream
$ gst-launch-1.0 -v rtspsrc location=$RTSP_SOURCE ! rtpjitterbuffer ! rtph264depay \
    ! h264parse ! flvmux ! filesink location=$PIPE_NAME
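Many IP cameras require authentication and tolerate a smaller jitter-buffer latency. The following sketch assumes a hypothetical camera URL and credentials; the latency value is an example, not a recommendation from this guide.
$ export RTSP_SOURCE="rtsp://user:password@192.168.1.10:554/stream1"    # hypothetical camera URL
$ gst-launch-1.0 -v rtspsrc location=$RTSP_SOURCE latency=200 ! rtpjitterbuffer ! rtph264depay \
    ! h264parse ! flvmux ! filesink location=$PIPE_NAME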
Real-time Messaging Protocol (RTMP)
$ export PIPE_NAME=/path_to_pipe/name_of_pipe
$ export RTMP_SOURCE=rtmp://host/app/stream
$ gst-launch-1.0 -v rtmpsrc location=$RTMP_SOURCE ! flvdemux ! flvmux ! filesink location=$PIPE_NAME
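The protocol list above also mentions streaming from a webcam. The following is a minimal sketch for Linux, assuming a V4L2 device at /dev/video0; the device path and encoder settings are assumptions. The stream is encoded to H.264 and muxed to FLV so the proxy receives the same container as in the RTSP and RTMP examples.
$ export PIPE_NAME=/path_to_pipe/name_of_pipe
$ gst-launch-1.0 -v v4l2src device=/dev/video0 ! videoconvert ! x264enc tune=zerolatency \
    ! h264parse ! flvmux ! filesink location=$PIPE_NAME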
Build instructions
The binary example is built using Bazel. A Docker example that has all build dependencies configured is also provided. You can find the compiled streaming_client_main binary in the $BIN_DIR directory of the Docker image.
For more information on using Docker, see Using Docker & Kubernetes.
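The following sketch of the Docker workflow uses a placeholder image tag and assumes you run it from the directory containing the provided Dockerfile; follow the build instructions linked above for the authoritative commands.
$ docker build -t aistreamer-ingestion .    # placeholder image tag
$ docker run -it aistreamer-ingestion /bin/bash
$ ls $BIN_DIR/streaming_client_main    # inside the container: locate the prebuilt binary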
Flow control
The Video Intelligence Streaming API server has inherent flow control.
In the following two cases, StreamingAnnotateVideoRequest requests are rejected and the gRPC streaming connection is stopped immediately:
- The AIStreamer ingestion client is sending requests to Google servers too frequently.
- The AIStreamer ingestion client is sending too much data to Google servers (more than 20 MB per second).
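If your source bursts above that limit, one option (not part of the AIStreamer tooling, just a sketch using the standard pv utility) is to rate-limit what enters the named pipe through an intermediate pipe:
$ mkfifo /tmp/raw_stream    # hypothetical intermediate pipe
$ pv -q -L 18m /tmp/raw_stream > $PIPE_NAME &    # cap throughput safely below 20 MB per second
$ gst-launch-1.0 rtmpsrc location=$RTMP_SOURCE ! flvdemux ! flvmux ! filesink location=/tmp/raw_stream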
Visualizer
The visualizer code provided in AIStreamer should be considered a code example only. It may not be compatible with your local environment, and you should not rely on the client code to visualize annotation results.