Amazon Kinesis Video Streams FAQs
Q: What are common use cases for Kinesis Video Streams?
Kinesis Video Streams is ideal for building media streaming applications for camera-enabled IoT devices and for building real-time, computer vision-enabled ML applications, which are becoming prevalent in a wide range of use cases.
Q: How do I think about latency in Amazon Kinesis Video Streams?
There are four key contributors to latency in an end-to-end media data flow.
Q: What is the Kinesis Video Streams PutMedia operation?
Kinesis Video Streams provides a PutMedia API to write media data to a Kinesis video stream. In a PutMedia request, the producer sends a stream of media fragments. As fragments arrive, Kinesis Video Streams assigns a unique fragment number, in increasing order. It also stores producer-side and server-side time stamps for each fragment, as Kinesis Video Streams-specific metadata.
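For context, here is a minimal boto3 sketch of the endpoint-discovery step that precedes PutMedia; the stream name my-stream and the us-west-2 region are illustrative placeholders. Note that boto3 itself does not implement the streaming PutMedia call; in practice, the long-running HTTP POST of fragments against this endpoint is handled by the Producer SDK.

```python
import boto3

# Control-plane client, used here only to discover the stream's PutMedia endpoint.
kvs = boto3.client("kinesisvideo", region_name="us-west-2")

response = kvs.get_data_endpoint(
    StreamName="my-stream",   # placeholder stream name
    APIName="PUT_MEDIA",
)
put_media_endpoint = response["DataEndpoint"]
print(put_media_endpoint)  # the stream-specific endpoint PutMedia requests are sent to
```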
Q: In which programming platforms is the Kinesis Video Streams Producer SDK available?
The Kinesis Video Streams Producer SDK's core is built in C, so it is efficient and portable to a variety of hardware platforms. Most developers will prefer to use the C, C++, or Java versions of the Producer SDK. There is also an Android version of the Producer SDK for mobile app developers who want to stream video data from Android devices.
Q: What should I be aware of before getting started with the Kinesis Video Streams producer SDK?
The Kinesis Video Streams producer SDK does all the heavy lifting of packaging frames and fragments, establishing a secure connection, and reliably streaming video to AWS. However, there are many different varieties of hardware devices and media pipelines running on them. To make integration with your media pipeline easier, we recommend having some knowledge of: 1) the frame boundaries, 2) the type of frame used for the boundaries (I-frame or non-I-frame), and 3) the frame encoding time stamp.
Q: What is the GetMediaForFragmentList API?
You can use the GetMediaForFragmentList API to retrieve media data for a list of fragments (specified by fragment number) from the archived data in a Kinesis video stream. Typically, a call to this API operation is preceded by a call to the ListFragments API.
Q: What is the ListFragments API?
You can use the ListFragments API to return a list of fragments from a specified video stream and start location (specified by fragment number or timestamp) within the retained data.
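Putting the two APIs together, here is a minimal boto3 sketch, assuming a hypothetical stream named my-stream in us-west-2 and a one-hour lookback window; the names and time range are illustrative only.

```python
import boto3
from datetime import datetime, timedelta, timezone

REGION = "us-west-2"   # assumed region
STREAM = "my-stream"   # hypothetical stream name

kvs = boto3.client("kinesisvideo", region_name=REGION)

def archived_media_client(api_name):
    # Each archived-media API is called against a stream-specific data endpoint.
    endpoint = kvs.get_data_endpoint(StreamName=STREAM, APIName=api_name)["DataEndpoint"]
    return boto3.client("kinesis-video-archived-media", endpoint_url=endpoint, region_name=REGION)

# 1) List the fragments produced during the last hour.
now = datetime.now(timezone.utc)
fragments = archived_media_client("LIST_FRAGMENTS").list_fragments(
    StreamName=STREAM,
    FragmentSelector={
        "FragmentSelectorType": "PRODUCER_TIMESTAMP",
        "TimestampRange": {"StartTimestamp": now - timedelta(hours=1), "EndTimestamp": now},
    },
)["Fragments"]

# 2) Retrieve the archived media for those fragments by fragment number.
result = archived_media_client("GET_MEDIA_FOR_FRAGMENT_LIST").get_media_for_fragment_list(
    StreamName=STREAM,
    Fragments=[f["FragmentNumber"] for f in fragments],
)
mkv_bytes = result["Payload"].read()  # MKV-formatted media for the listed fragments
```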
Q: How long can I store data in Kinesis Video Streams?
You can store data in your streams for as long as you like. Kinesis Video Streams allows you to configure the data retention period to suit your archival and storage requirements.
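As a sketch, the retention period can be changed through the UpdateDataRetention API; the stream name and region below are placeholders.

```python
import boto3

kvs = boto3.client("kinesisvideo", region_name="us-west-2")  # assumed region

# The current stream version is required; retention updates use optimistic locking.
info = kvs.describe_stream(StreamName="my-stream")["StreamInfo"]

kvs.update_data_retention(
    StreamName="my-stream",
    CurrentVersion=info["Version"],
    Operation="INCREASE_DATA_RETENTION",
    DataRetentionChangeInHours=24,  # extend retention by one day
)
```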
Q: If I have a custom processing application that needs to use the frames (and fragments) carried by the Kinesis video stream, how do I do that?
In general, if you want to consume video streams and then manipulate them to fit your custom application's needs, there are two key steps to consider. First, get the bytes in a frame from the formatted stream vended by the GetMedia API; you can use the stream parser library to get the frame objects. Next, get the metadata necessary to decode a frame, such as the pixel height, width, codec ID, and codec private data. Such metadata is embedded in the track elements. The parser library makes extracting this information easier by providing helper classes to collect the track information for a fragment.
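As a sketch of the first step, the following boto3 code pulls the raw GetMedia payload, assuming a hypothetical stream named my-stream in us-west-2; turning these bytes into frame objects and track metadata is the job of an MKV parser such as the stream parser library.

```python
import boto3

REGION = "us-west-2"  # assumed region
kvs = boto3.client("kinesisvideo", region_name=REGION)
endpoint = kvs.get_data_endpoint(StreamName="my-stream", APIName="GET_MEDIA")["DataEndpoint"]

media = boto3.client("kinesis-video-media", endpoint_url=endpoint, region_name=REGION)
stream = media.get_media(
    StreamName="my-stream",
    StartSelector={"StartSelectorType": "NOW"},  # start from the latest fragment
)

# The payload is a continuous MKV byte stream; frames and track metadata
# (resolution, codec ID, codec private data) must be extracted with a parser.
chunk = stream["Payload"].read(1024 * 1024)
```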
Q: What are the basic requirements to use the Kinesis Video Streams HLS APIs?
An Amazon Kinesis video stream has the following requirements for providing data through HLS: the media must contain H.264 or H.265 encoded video (and, optionally, AAC encoded audio), the stream's data retention must be greater than zero, and each fragment must contain the codec private data needed to initialize a decoder.
Q: What are the basic requirements to use the Kinesis Video Streams DASH APIs?
An Amazon Kinesis video stream has essentially the same requirements for providing data through DASH: the media must contain H.264 or H.265 encoded video (and, optionally, encoded audio), the stream's data retention must be greater than zero, and each fragment must contain the necessary codec private data.
Q: What are the available playback modes for HLS or DASH streaming in Kinesis Video Streams?
There are two different playback modes supported by both HLS and DASH: Live and On Demand. In Live mode, the media playlist is continually updated with the latest fragments as they become available, while in On Demand mode the playlist contains all the fragments for the requested time range.
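For illustration, here is a minimal boto3 sketch of requesting a Live-mode HLS playback URL; the stream name and region are placeholders, and GetDASHStreamingSessionURL follows the same pattern for DASH.

```python
import boto3

REGION = "us-west-2"  # assumed region
kvs = boto3.client("kinesisvideo", region_name=REGION)
endpoint = kvs.get_data_endpoint(
    StreamName="my-stream",  # hypothetical stream
    APIName="GET_HLS_STREAMING_SESSION_URL",
)["DataEndpoint"]

archived = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint, region_name=REGION)
url = archived.get_hls_streaming_session_url(
    StreamName="my-stream",
    PlaybackMode="LIVE",  # or "ON_DEMAND" with an HLSFragmentSelector time range
)["HLSStreamingSessionURL"]
print(url)  # open in an HLS-capable player such as Safari or hls.js
```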
Q: What is the delay in the playback of video using the API?
The latency for live playback is typically between 3 and 5 seconds, but this can vary. We strongly recommend running your own tests and proofs of concept to determine the target latencies. There are a variety of factors that impact latency, including the use case, how the producer generates the video fragments, the size of the video fragments, the player tuning, and network conditions both streaming into AWS and out of AWS for playback. For low-latency playback, see the FAQs on WebRTC-based streaming.
Q: What are the relevant limits to using HLS or DASH?
A Kinesis video stream supports a maximum of ten active HLS or DASH streaming sessions. If a new session is created when the maximum number of sessions is already active, the oldest (earliest created) session is closed. The number of active GetMedia connections on a Kinesis video stream does not count against this limit, and the number of active HLS sessions does not count against the active GetMedia connection limit. See Kinesis Video Streams Limits for more details.
Q: What's the difference between Kinesis Video Streams and AWS Elemental MediaLive?
AWS Elemental MediaLive is a broadcast-grade live video encoding service. It lets you create high-quality video streams for delivery to broadcast televisions and internet-connected multiscreen devices, like connected TVs, tablets, smart phones, and set-top boxes. The service functions independently or as part of AWS Media Services. Kinesis Video Streams, by contrast, is designed to securely ingest and store video from connected devices so that it can be consumed for playback, analytics, and machine learning, rather than for broadcast-grade encoding and distribution.
Q: What does Amazon Kinesis Video Streams manage on my behalf to enable live media streaming with WebRTC?
Kinesis Video Streams provides managed endpoints for WebRTC signaling that allow applications to securely connect with one another for peer-to-peer live media streaming. It also includes managed endpoints for TURN, which enables media relay via the cloud when applications cannot stream media peer-to-peer, and managed endpoints for STUN, which enables applications to discover their public IP address when they are located behind a NAT or a firewall. Additionally, it provides easy-to-use SDKs to enable camera IoT devices with WebRTC capabilities, as well as client SDKs for Android, iOS, and web applications to integrate Kinesis Video Streams WebRTC signaling, TURN, and STUN capabilities with any WebRTC-compliant mobile or web player.
Q: What can I build using Kinesis Video Streams WebRTC capability?
With Kinesis Video Streams WebRTC, you can easily build applications for live media streaming or real-time audio or video interactivity between camera IoT devices, web browsers, and mobile devices. These support use cases such as helping parents keep an eye on their baby's room, enabling homeowners to use a video doorbell to check who's at the door, allowing owners of camera-enabled robot vacuums to remotely control the robot by viewing the live camera stream on a mobile phone, and much more.
Q: How do I get started with Kinesis Video Streams WebRTC capability?
You can get started by building and running the sample applications in the Kinesis Video Streams SDKs for WebRTC, available for web browsers, Android- or iOS-based mobile devices, and Linux-, Raspbian-, and macOS-based IoT devices. You can also run a quick demo of this capability in the Kinesis Video Streams management console by creating a signaling channel and running the demo application to live stream audio and video from your laptop's built-in camera and microphone.
Q: How do applications live stream peer-to-peer media when they are located behind a NAT or a firewall?
Applications use the Kinesis Video Streams STUN endpoint to discover their public IP address when they are located behind a NAT or a firewall. An application provides its public IP address as a possible location where it can receive connection requests from other applications for live streaming. The default option for all WebRTC communication is direct peer-to-peer connectivity, but if the NAT or firewall does not allow direct connectivity (for example, in the case of symmetric NATs), applications can connect to the Kinesis Video Streams TURN endpoints to relay media via the cloud. The GetIceServerConfig API provides the necessary TURN endpoint information that applications can use in their WebRTC configuration. This configuration allows applications to use TURN relay as a fallback when they are unable to establish a direct peer-to-peer connection for live streaming.
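As a sketch, the TURN configuration can be fetched with boto3 as follows; the channel ARN, region, and role are placeholders.

```python
import boto3

REGION = "us-west-2"  # assumed region
CHANNEL_ARN = "arn:aws:kinesisvideo:us-west-2:111122223333:channel/my-channel/1234567890123"  # placeholder

kvs = boto3.client("kinesisvideo", region_name=REGION)

# Signaling APIs are served from channel-specific endpoints; request the HTTPS one.
endpoints = kvs.get_signaling_channel_endpoint(
    ChannelARN=CHANNEL_ARN,
    SingleMasterChannelEndpointConfiguration={"Protocols": ["HTTPS"], "Role": "VIEWER"},
)["ResourceEndpointList"]
https_endpoint = next(e["ResourceEndpoint"] for e in endpoints if e["Protocol"] == "HTTPS")

signaling = boto3.client("kinesis-video-signaling", endpoint_url=https_endpoint, region_name=REGION)
ice = signaling.get_ice_server_config(ChannelARN=CHANNEL_ARN, Service="TURN")

# These URIs and short-lived credentials go into the RTCPeerConnection iceServers list.
for server in ice["IceServerList"]:
    print(server["Uris"], server["Username"], server["Ttl"])
```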
Q: How does Kinesis Video Streams secure the live media streaming with WebRTC?
End-to-end encryption is a mandatory feature of WebRTC, and Kinesis Video Streams enforces it on all the components, including signaling and media or data streaming. Regardless of whether the communication is peer-to-peer or relayed via Kinesis Video Streams TURN endpoints, all WebRTC communications are securely encrypted through standardized encryption protocols: signaling messages are exchanged over secure WebSockets (WSS), data streams are encrypted using Datagram Transport Layer Security (DTLS), and media streams are encrypted using the Secure Real-time Transport Protocol (SRTP).
Q: What is the delay in the playback of video on the Kinesis Video Streams management console?
For a producer transmitting video data into a video stream, you will typically experience a lag of 2 to 10 seconds in the live playback experience in the Kinesis Video Streams management console. The majority of the latency is added by the producer device as it accumulates frames into fragments before it transmits data over the internet. Once the data enters the Kinesis Video Streams endpoint and you request playback, the console retrieves H.264 fragments from durable storage and trans-packages them into a media format suitable for playback across different internet browsers. The trans-packaged media content is then transferred over the internet to the location from which you requested playback.
Q: What is Server-Side Encryption for Kinesis Video Streams?
Server-side encryption is a feature in Kinesis Video Streams that automatically encrypts data before it's at rest by using an AWS KMS customer master key (CMK) that you specify. Data is encrypted before it is written to the Kinesis Video Streams storage layer, and it is decrypted after it is retrieved from storage. As a result, your data is always encrypted at rest within the Kinesis Video Streams service.
Q: How do I get started with server-side encryption?
Server-side encryption is always enabled on Kinesis video streams. If a user-provided key is not specified when the stream is created, the default key (provided by Kinesis Video Streams) is used.
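For example, a user-provided KMS key can be supplied at stream creation; the key ARN and stream name below are placeholders.

```python
import boto3

kvs = boto3.client("kinesisvideo", region_name="us-west-2")  # assumed region

# If KmsKeyId were omitted, the default Kinesis Video Streams key would be used.
kvs.create_stream(
    StreamName="my-encrypted-stream",  # hypothetical stream name
    DataRetentionInHours=24,
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder CMK
)
```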
Q: Is Amazon Kinesis Video Streams available in AWS Free Tier?
No. Amazon Kinesis Video Streams is not available in AWS Free Tier.
Q: How much does Kinesis Video Streams cost?
Kinesis Video Streams uses simple pay-as-you-go pricing: you pay only for the volume of data you ingest, store, and consume through the service. Furthermore, Kinesis Video Streams only charges for media data it successfully receives, with a minimum chunk size of 4 KB. For comparison, one second of 64 kbps audio is 8 KB in size, so the minimum chunk size is set low enough to accommodate the smallest of audio or video streams.
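The arithmetic behind that comparison is straightforward, as the short calculation below shows.

```python
bits_per_second = 64_000                # a 64 kbps audio stream
bytes_per_second = bits_per_second // 8  # 8 bits per byte
print(bytes_per_second)                  # 8000 bytes, i.e., 8 KB per second of audio
```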
Q: How does Kinesis Video Streams bill for data stored in streams?
Kinesis Video Streams charges you for the total amount of data durably stored under any given stream. The total amount of stored data per video stream can be controlled using the retention period, configured in hours.
Q: How am I charged for using Kinesis Video Streams WebRTC capability?
For the Amazon Kinesis Video Streams WebRTC capability, you are charged based on the number of signaling channels that are active in a given month, the number of signaling messages sent and received, and the TURN streaming minutes used for relaying media. A signaling channel is considered active in a month if, at any time during the month, a device or an application connects to it. TURN streaming minutes are metered in 1-minute increments. Please see the pricing page for more details.
Q: What does the Amazon Kinesis Video Streams SLA guarantee?
Our Amazon Kinesis Video Streams SLA guarantees a Monthly Uptime Percentage of at least 99.9% for Amazon Kinesis Video Streams.
Q: How do I know if I qualify for an SLA Service Credit?
You are eligible for an SLA credit for Amazon Kinesis Video Streams under the Amazon Kinesis Video Streams SLA if more than one Availability Zone in which you are running a task within the same Region has a Monthly Uptime Percentage of less than 99.9% during any monthly billing cycle.