Streaming Video Alliance Projects


Current Projects

Below are the current projects underway in the various Streaming Video Alliance working groups. If you aren’t a member but want to get involved, join today!

Legend

Stage 1. Member comment window. 30 days for members to review the document and vote to progress it forward.

Stage 2. Board review. 30 days for the board to review, provide feedback, and vote to move forward.

Stage 3. Member ratification. Final member vote to approve the document, as is, for publication.

Each entry below gives the project name, its working group, a description, and the project’s current stage.

5G and Edge Cloud for Streaming Video (Working Group: Networking and Transport)

Both 5G and edge cloud have the opportunity to significantly alter the streaming video experience. 5G can provide the massive bandwidth capacity and throughput that have not been available to date on mobile networks, while edge cloud can ensure the ultra-low latencies that will drive new forms of highly personalized, immersive, interactive entertainment. As mobile video consumption continues to grow, 5G will bring entertainment out of the home and truly democratize access to content. On the network side, massive improvements in Quality-of-Service and Quality-of-Experience through dedicated network slicing will drive a true broadcast experience. This document is intended to provide a market and technology overview of 5G and edge compute, specifically as they pertain to streaming video, as well as recommendations for how both rights holders and service providers can take advantage of these new technologies.

Stage: Write
Best Practices for End-to-End Workflow Monitoring (Working Group: Measurement/QoE)

There is an increasing consensus in the streaming video industry that the design and operation of the entire video workflow need to be driven by quality assurance and quality controls at points throughout the delivery chain. In this document, we propose best practices for end-to-end workflow monitoring to provide a clear picture to stakeholders (network operation center agents, operations engineers, video delivery architects, managing executives, content owners, and content providers) about potential factors that impact a consumer’s streaming experience. The document will then offer recommendations for a framework which provides end-to-end monitoring and improved failure detection in order to enhance overall quality as well as increase customer retention and resulting market share.

Stage 1
Best Practices for Reducing Live Streaming Latency (Working Group: Live Streaming)

As the popularity of live streaming has grown, it has become increasingly important to mitigate the latency inherent in segmented HTTP streaming video. Many of the initial formats, such as HLS and MPEG-DASH, did not take into account the desire for synchronization between a traditional broadcast and its streaming counterpart, as live streaming had not yet taken hold. But as more major broadcasters and rights holders push their live content, such as sports, through streaming platforms, many of those formats (and others) have adopted low-latency modifications. Still, the format is but one piece of the entire streaming workflow, and there are numerous other technologies and components which are not specifically optimized to reduce live streaming latency. It is critical to identify all of the technologies involved in streaming and understand where remediation or optimization can occur to improve the overall latency of the streaming experience.

Stage: Review
Capacity Footprint API (Working Group: Open Caching)

Video delivery is a complex ecosystem of content and network providers. Content providers rely on one or more network providers (ISPs and global or regional CDNs) to connect them to their users at high quality levels. One obstacle to providing a high-quality experience is network capacity constraints due to high demand or temporary infrastructure failures. Historically, it has been challenging to share the necessary data at the right level of detail: too little, and the content provider can’t make traffic routing decisions; too much, and the CDN or ISP doesn’t have the flexibility needed to manage its network effectively. Oftentimes, sharing this information has been ad hoc, making it challenging to reuse with multiple partners.

Stage: Code
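One way a content provider could consume such a capacity signal can be sketched as follows. This is a hypothetical illustration: the field names ("provider", "regions", "available_gbps") are assumptions for the sketch, not the actual Capacity Footprint API schema.

```python
# Hypothetical sketch of consuming a shared capacity footprint.
# Field names are illustrative assumptions, not the real API schema.

def providers_with_headroom(footprints, region, needed_gbps):
    """Return providers that advertise enough spare capacity in a region."""
    return [
        fp["provider"]
        for fp in footprints
        if region in fp["regions"] and fp["available_gbps"] >= needed_gbps
    ]

# Example footprints a CDN and an ISP might advertise.
footprints = [
    {"provider": "cdn-a", "regions": ["us-east", "us-west"], "available_gbps": 400},
    {"provider": "isp-b", "regions": ["us-east"], "available_gbps": 80},
]
```

With this shape of data, a routing decision becomes a simple filter: `providers_with_headroom(footprints, "us-east", 100)` would steer a 100 Gbps demand only to providers that can absorb it.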
Configuration Integration API (Working Group: Open Caching)

Cache configurations must be made on a per-CDN basis. This can result in a lot of duplicative effort as well as introduce the potential for errors. For example, what if a configuration is applied to the wrong CDN? It is critical for network operations personnel to be able to ensure that the correct CDN configuration is applied to the caches of each delivery partner. To make that happen, the mechanism to push configurations must be unified and programmatic. The best way to accomplish that would be through an Application Programming Interface (API) that handles a majority of the features necessary to configure a caching solution for content providers. Enabling cache configuration deployment in this manner would enable configurations to be applied to CDNs and Open Caching Nodes simultaneously and in a manner that mitigates the potential for human error.

Stage: Define
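The unified, programmatic push described above can be sketched as a single validated configuration applied to every delivery partner. This is illustrative only: the required fields and the partner callables are hypothetical, not the schema the working group is defining.

```python
# Illustrative sketch: one validated configuration pushed to every partner,
# so the same settings reach all CDNs and Open Caching Nodes at once.
# Field names and partner callables are hypothetical.

REQUIRED_KEYS = {"origin", "cache_ttl_seconds", "token_auth"}

def validate(config):
    """Reject a config missing required fields before any partner sees it."""
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"config missing fields: {sorted(missing)}")

def push_config(config, partners):
    """Apply the same config to every partner, mitigating per-CDN drift."""
    validate(config)
    # In practice each apply function would be an HTTP call to that
    # partner's configuration endpoint.
    return {name: apply_fn(config) for name, apply_fn in partners.items()}
```

Because validation happens once, before any partner is touched, a malformed configuration can never be applied to only some of the delivery partners.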
End-to-End Ad Monitoring (Working Group: Advertising)

Ad-supported video streaming is growing. Dedicated services like PlutoTV, Xumo (purchased by member company Comcast), and Tubi have demonstrated that advertising can be an intrinsic part of the streaming experience, much like broadcast. But, unlike broadcast, delivering ads within streaming video involves multiple systems all working together, from stitching the ad into the video to delivering it as part of playback to measuring the time watched. Yet there is little interoperability between these different systems and technologies. As such, it is very difficult to provide an operational end-to-end view of the distribution and consumption of ads.

Stage: Define
Open Caching Capacity Interface (Working Group: Open Caching)

Capacity planning for streaming video delivery is a complex operation that requires a deep understanding of many variables within the delivery path. Once an understanding of capacity is achieved, various network tools, such as traffic management, can be used to shape where video segments are sent so that no one part of the delivery infrastructure receives more than it is capable of handling while providing the expected quality of experience and service. One of those variables is the cache. To understand how much capacity an individual cache has, such as available bandwidth (an aspect of the NIC), available memory, and available disk, a traffic management tool must have real-time access to the data from an individual node in a delivery infrastructure that includes OCNs.

Stage: Write
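A traffic manager consuming the per-node data described above might reason about it as sketched below. The metric names (NIC, memory, and disk figures) are assumptions for illustration; the actual interface is defined by the specification.

```python
# Hedged sketch of a traffic manager using per-node capacity data
# (bandwidth, memory, disk) to decide where new sessions may go.
# Metric names are illustrative assumptions.

def node_headroom(m):
    """Free fraction of the most constrained resource on a node."""
    return min(
        1 - m["nic_used_gbps"] / m["nic_capacity_gbps"],
        1 - m["mem_used_gb"] / m["mem_total_gb"],
        1 - m["disk_used_gb"] / m["disk_total_gb"],
    )

def eligible_nodes(nodes, min_headroom=0.2):
    """Only steer new sessions to nodes with comfortable headroom."""
    return [n["id"] for n in nodes if node_headroom(n) >= min_headroom]

# Example real-time readings from two hypothetical OCNs.
nodes = [
    {"id": "ocn-a", "nic_used_gbps": 9, "nic_capacity_gbps": 10,
     "mem_used_gb": 10, "mem_total_gb": 100,
     "disk_used_gb": 50, "disk_total_gb": 100},
    {"id": "ocn-b", "nic_used_gbps": 2, "nic_capacity_gbps": 10,
     "mem_used_gb": 40, "mem_total_gb": 100,
     "disk_used_gb": 30, "disk_total_gb": 100},
]
```

Taking the minimum across resources captures the point made in the text: a node is only as available as its most constrained resource, so a nearly saturated NIC disqualifies a node even when memory and disk are mostly free.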
Open Caching Performance Measurement Specification (Working Group: Open Caching)

Measuring delivery performance, especially at the cache, is critical to enabling operations personnel to make informed decisions about content delivery such as which delivery networks to utilize for which content and when to take caches out of service. But that requires that all caches within the delivery infrastructure provide the requisite data to determine performance. In the initial specification, the Open Caching Nodes did not include metrics to determine individual node performance. The document produced from this project identifies those metrics and a means by which they can be retrieved directly from the OCN.

Stage 1
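As a hypothetical illustration of how such metrics could feed the operational decisions mentioned above, a simple out-of-service check might look like the following. The metric names and thresholds are invented for the sketch, not values from the specification.

```python
# Invented example: using per-node performance metrics to decide whether a
# cache should be pulled from rotation. Names and thresholds are assumptions.

def should_remove(metrics, max_5xx_rate=0.01, min_hit_ratio=0.6):
    """Flag a node whose error rate or cache efficiency is out of bounds."""
    error_rate = metrics["responses_5xx"] / max(metrics["responses_total"], 1)
    return error_rate > max_5xx_rate or metrics["cache_hit_ratio"] < min_hit_ratio
```

A node that serves 5% errors would be flagged even with a healthy hit ratio, while a node with low errors but a poor hit ratio would also be flagged, reflecting the two distinct failure modes operations personnel care about.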
Open Caching Relayed Token Authentication (Working Group: Open Caching)

The security of delivering video streams, from origin to edge cache and from edge cache to player, is of critical concern to video distributors. Although DRM and other security mechanisms provide a way to restrict the playback of content to authorized viewers, these mechanisms must be employed in conjunction with other security features like URL tokenization. Prior to this project, the Open Caching specifications did not provide support for authenticating tokenized URLs (which are often used within CDN environments for the delivery of video streams and assets). By providing this functionality, Open Caching can be included in a video distributor’s ecosystem of caches and service providers.

Stage 3
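URL tokenization of the general kind described above is commonly built from an HMAC over the path plus an expiry, checked by the cache before serving. The sketch below shows that general pattern; the token format and parameter names are illustrative, not the format defined in the Open Caching specification.

```python
# General sketch of HMAC-based URL tokenization: sign a path with an expiry,
# verify at the cache before serving. Token format is illustrative only.
import hashlib
import hmac
import time

SECRET = b"shared-secret"  # provisioned out of band between distributor and cache

def sign_url(path, ttl=300, now=None):
    """Append an expiry and an HMAC token to a content URL."""
    exp = int(now if now is not None else time.time()) + ttl
    token = hmac.new(SECRET, f"{path}:{exp}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?exp={exp}&token={token}"

def verify(path, exp, token, now=None):
    """Check that the token matches and the URL has not expired."""
    now = int(now if now is not None else time.time())
    expected = hmac.new(SECRET, f"{path}:{exp}".encode(), hashlib.sha256).hexdigest()
    return now < exp and hmac.compare_digest(expected, token)
```

Because the signature covers both the path and the expiry, a token cannot be reused for a different asset or after its window closes, which is what makes relaying such tokens through intermediate caches meaningful.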
Open Caching Request Routing Functional Specification, Version 2.0 (Working Group: Open Caching)

With manifest rewrite, a video platform can change the URLs for individual segments by rewriting the manifest. This works best with HLS, which has complete URLs for each segment in its media playlists, so any segment can be pointed to any source. For live, the segment list is fetched again by the client as new segments are created, and a server can change the latest segments to point to an alternate source (such as another CDN) that earlier segments were not pointing at. This effectively moves clients to the new source in a controlled fashion. However, this requires an explicit list of URLs and therefore does not work with DASH, which uses the same URL template for all segments in a bitrate variant. Changing the manifest during the session does not work for VoD either, since the manifest or playlist is only downloaded once when the stream is initialized, which results in all segment requests being directed to the CDN, or CDNs, decided at the start of the session. Manifest rewrite can also be used to redirect part of the traffic to a separate server with diagnostic capabilities and collect server-side metrics to gain insights into the session.

Stage 2
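The HLS case described above can be sketched minimally: because each segment in a media playlist carries its own URL, the newest segments can be repointed at an alternate source as the playlist is refreshed. The hostnames and the exact rewrite policy below are illustrative assumptions.

```python
# Minimal sketch of the HLS manifest-rewrite idea: repoint the newest
# segment URLs at an alternate source. Hostnames are illustrative.

def rewrite_latest(playlist, old_host, new_host, count=2):
    """Repoint the last `count` segment URLs at a new source (e.g. another CDN)."""
    lines = playlist.splitlines()
    segment_lines = [i for i, line in enumerate(lines) if line.startswith("https://")]
    for i in segment_lines[-count:]:
        lines[i] = lines[i].replace(old_host, new_host, 1)
    return "\n".join(lines)

# A tiny live media playlist fragment with two segments on cdn-a.
playlist = "\n".join([
    "#EXTM3U",
    "#EXT-X-TARGETDURATION:4",
    "#EXTINF:4.0,",
    "https://cdn-a.example/live/seg100.ts",
    "#EXTINF:4.0,",
    "https://cdn-a.example/live/seg101.ts",
])
```

Rewriting only the newest segments is what makes the migration controlled: clients drain off the old source gradually as they fetch refreshed playlists, rather than all switching at once.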
Recommendations for Mitigating Latency in Streaming VR Video Delivery Workflows (Working Group: Virtual Reality/360-Degree Video)

VR streaming video heralds a new video experience, but its potential may never be realized if latency makes viewers physically ill. The intent of this project is to establish end-to-end VR streaming video delivery workflows and measure the latency which may impact the viewer’s Quality of Experience. The PoC will include a careful analysis of the various latencies caused by technologies and components within the workflow of high-resolution VR streaming video content. The data gathered from the PoC will serve as the basis for a report and subsequent best practices document which will provide a set of recommendations for improving VR streaming video workflows to mitigate latency. This document may also serve as input into other VR working groups should specific issues be determined to require more directed research. A second PoC is also anticipated, based on the findings, to measure whether the recommendations produce a noticeable improvement in delivery latency.

Stage: Gather
Securing Streaming Video (Working Group: Privacy and Protection)

Video piracy is on the rise. In order to curb the increase in theft of video assets, every participant in the streaming video delivery chain needs to secure the content from ingest to delivery to subscribers’ end devices. When content is not secured end-to-end, video pirates will find the weakest link to obtain content and redistribute it illegally. And as video streaming consumption continues to grow, new providers, such as OTT platforms, will join the market, necessitating that everyone secure their content and delivery systems as comprehensively as possible so the industry itself isn’t contributing to the problem.

Stage: Write
Technical Evaluation and Measurements (Working Group: Live Streaming)

It is critical to test assumptions when making recommendations. Although the initiating document for this project suggested well-informed best practices to reduce live stream latency, based on the collective experience of those involved and their companies, substantiation requires testing and validation. The first work item for this project will be to define the test bench structure, relying on lab resources made available at Liberty Global. During this first work item, the group will define the common part of the test workflow, identify the contributors for each building block, and define a baseline which can be used as an anchor for comparison with other low-latency streaming technologies defined in subsequent work items. Four work items corresponding to four different streaming technologies have then been identified and selected for the tests; each will define the tests to be carried out as well as its test bench. The resulting outcomes of the tests (i.e., measurements) will be collated into a document containing a set of measurement results and reference architectures for low-latency live streaming.

Stage: Define