06 February 2025
EVS.com
Containerization
The broadcast industry is undergoing a significant transformation, driven by the need to optimize capital expenditures (CapEx), maximize equipment utilization, and achieve greater operational resilience, flexibility, and business agility.

This shift has ushered in the progressive virtualization of traditional proprietary systems on SDI and ST 2110 networks, along with the adoption of software technologies such as microservices. At the heart of this evolution are software containerization and orchestration: game-changing technologies that are reshaping how broadcasters deploy and manage their workflows. Let’s take a closer look at how these technologies work, the benefits and challenges they bring, and how EVS is leveraging them to meet the unique demands of the broadcast industry.
 

Understanding the basics
Virtualization, orchestration and containerization

Key to software-defined broadcasting systems, microservices are task-specific applications that break down complex broadcast operations into smaller, independent components, enabling faster development, easier scaling, and streamlined maintenance. To fully leverage microservices, broadcasters need efficient deployment and management technologies:

  • Virtualization: Divides physical hardware computing platforms into virtual machines (VMs), each running its own operating system (OS) and applications. This abstraction layer optimizes hardware utilization and simplifies infrastructure management.
  • Containerization: Packages software into isolated units called containers, which include everything needed to run a microservice and operate consistently across different environments.
  • Orchestration: Platforms like Kubernetes manage containerized applications, ensuring services run efficiently, handle failures, and scale resources to meet the required load. 
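The division of labor between containers and their orchestrator can be illustrated with a toy reconciliation loop, the core pattern behind platforms like Kubernetes: the operator declares a desired state, and a control loop converges the running containers toward it. This is a minimal Python sketch under invented names (`Service`, `reconcile`), not real Kubernetes API code.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """Desired state for one containerized microservice (hypothetical model)."""
    name: str
    desired_replicas: int
    running: list = field(default_factory=list)  # container IDs currently running

def reconcile(service: Service) -> Service:
    """One pass of an orchestrator's control loop: converge actual -> desired."""
    while len(service.running) < service.desired_replicas:
        # Scale up: start a new container instance.
        service.running.append(f"{service.name}-{len(service.running)}")
    while len(service.running) > service.desired_replicas:
        # Scale down: stop surplus containers.
        service.running.pop()
    return service

replay = Service(name="replay-engine", desired_replicas=3)
reconcile(replay)
print(replay.running)  # three instances started to match the desired state

replay.desired_replicas = 1  # production winds down
reconcile(replay)
print(replay.running)  # surplus instances stopped
```

The operator never starts or stops containers directly; changing `desired_replicas` and re-running the loop is the whole interface, which is what makes failure handling and scaling automatic in a real orchestrator.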
Advantages
Adopting these technologies offers many advantages for broadcasters:

Flexibility: Orchestration platforms enable tailored deployments, where software services and systems can be spun up or shut down on demand, enabling quick adaptation to changing production needs.

Resource optimization: Containers consume fewer resources than VMs by sharing a single host OS kernel, making large-scale deployments more efficient. When creating a new production environment, a “deploy and destroy” approach can be adopted, releasing resources once the production is complete.

Uniform deployment: With containerized architectures, software can be deployed uniformly across computing resources in any environment, whether in OB vans, venues, broadcast centers, data centers, or cloud platforms.

Modular deployment: Containers facilitate modular and scalable software deployment, enabling rapid rollout and rollback of software versions. When combined with microservices and High-Availability (HA) design principles, this approach delivers scalable, resilient solutions that can grow with the broadcaster’s needs.
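The rapid rollout and rollback that containers enable can be sketched as a rolling update that reverts the whole fleet on the first failed health check. The function and the health-check predicate below are illustrative assumptions, not an EVS API.

```python
def rolling_update(instances, new_version, healthy):
    """Replace instances one at a time; roll back everything on a failed check.

    instances: list of (instance_id, version) tuples, mutated in place
    healthy:   callable(instance_id, version) -> bool  (illustrative check)
    Returns True if the update completed, False if it was rolled back.
    """
    previous = list(instances)  # snapshot for rollback
    for i, (inst_id, _old_version) in enumerate(instances):
        instances[i] = (inst_id, new_version)  # update one instance
        if not healthy(inst_id, new_version):
            instances[:] = previous  # rollback: restore prior versions
            return False
    return True

fleet = [("ingest-a", "1.4"), ("ingest-b", "1.4")]
ok = rolling_update(fleet, "1.5", healthy=lambda inst, ver: True)
print(ok, fleet)  # True, both instances now on 1.5
```

Updating one instance at a time is what keeps the service available during the transition; combined with HA design, the surviving replicas absorb the load while each peer is replaced.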

Simplified updates: By abstracting the operating system and libraries, containers streamline updates and continuous integration, making it easier to maintain system consistency and functionality.

Cloud integration: On public cloud platforms, containers integrate seamlessly with cloud-native services, while de facto standard components can be deployed on-premises with the same versions and compatibility as in the cloud.

Sustainability: By reducing compute resource needs and optimizing energy consumption, containerized workflows contribute to minimizing the environmental footprint of broadcasting operations.
 

Challenges
While containerization offers transformative benefits, its integration into broadcasting comes with challenges:

Real-time performance: Containers often lack the fine-grained control needed to ensure consistent low-latency performance, which may affect the quality and reliability of live video streams. Additionally, the container networking layer can introduce latency and impact synchronization, especially when transitioning between SDI and IP workflows, further complicating real-time applications.

Interoperability: Broadcast infrastructures often rely on specialized hardware, such as GPU or FPGA-based cards, to handle the demanding processing requirements of HD and UHD video, capabilities that standard COTS hardware without dedicated acceleration cards may lack. This incompatibility can limit the deployment of containerized applications across heterogeneous multi-vendor environments, making interoperability harder to ensure.

Scaling limitations: Scaling high-bandwidth video workflows remains complex: distributed storage solutions can introduce latency, and orchestration tools like Kubernetes aren’t optimized off the shelf for the specific needs of video broadcasting.

Security risks: Containers share a common OS kernel, which can expose vulnerabilities if one container is compromised. In workflows that handle proprietary or sensitive content, robust security measures are essential to protect against potential breaches.

Vendor dependencies: The containerized approach also raises issues around isolation and vendor-specific dependencies, as many media applications still rely on proprietary systems and codecs that complicate the deployment of a seamless, flexible, and modular workflow.

Skills gap: Many teams are experienced in configuring virtualized or VM environments but lack the expertise needed to manage orchestrated and containerized infrastructures. Bridging this skills gap requires investment in training and upskilling.
 
 

Real-world applications

At EVS, we’ve developed a pragmatic deployment strategy that is tailored to various scenarios, ensuring optimal performance, reliability, and ease of implementation. Our approach strikes a balance between innovation and practicality, making sure the transition to containerized workflows aligns with broadcasters' operational needs. 

Broadcast data centers

In large-scale environments like broadcast data centers, resource demands are high, and reliability is paramount. EVS employs Kubernetes with custom configurations to manage microservices efficiently. These services can run either on EVS’s dedicated hardware or on the customer’s existing virtualized infrastructure, such as VMware. To ensure business continuity and meet stringent security requirements, EVS clearly delineates responsibilities between teams. When the customer’s platform is used, their IT engineering department manages the VM instances (typically through VMware), while EVS uses Kubernetes to automatically manage the containers.
 

OB vans or small broadcast centers

Since these environments often lack the dynamic resource demands of larger systems, Kubernetes' auto-scaling capabilities are not typically required. Instead, EVS pre-scales systems to match specific production needs, ensuring a streamlined deployment process with minimal configuration overhead.
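Pre-scaling of this kind amounts to fixing replica counts from the known production spec rather than reacting to load at runtime. The sizing rule below is a hypothetical illustration with invented capacity figures, not EVS’s actual formula.

```python
import math

def prescale(camera_feeds: int, replay_channels: int,
             feeds_per_ingest: int = 4, channels_per_engine: int = 2) -> dict:
    """Derive fixed replica counts for a pre-scaled deployment (illustrative).

    Each ingest replica handles `feeds_per_ingest` camera feeds and each
    replay engine handles `channels_per_engine` channels; round up so
    capacity always covers the declared production spec.
    """
    return {
        "ingest": math.ceil(camera_feeds / feeds_per_ingest),
        "replay-engine": math.ceil(replay_channels / channels_per_engine),
    }

print(prescale(camera_feeds=12, replay_channels=6))
# {'ingest': 3, 'replay-engine': 3}
```

Because the counts are computed once from the production spec, the deployment needs no autoscaler, metrics pipeline, or headroom tuning, which is exactly the configuration overhead these smaller environments avoid.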

Distributed environments

As multi-site workflows and/or use of public cloud resources become more prevalent, the ability to scale containerized applications across distributed environments is increasingly critical. EVS collaborates with broadcast integrators to optimize Kubernetes configurations, delivering targeted operational efficiency while respecting the constraints of distributed environments. This ensures that even complex, multi-site workflows are both effective and manageable.

Ensuring performance and reliability

Broadcasting demands real-time performance and reliability, and EVS ensures these requirements are met through:

  1. Use of EVS hardware and configurations: EVS hardware is optimized for real-time demands, ensuring deterministic performance even in critical workflows.
  2. Continuous integration: For SaaS deployments, EVS employs rigorous continuous integration processes. For customer-controlled deployments, EVS regularly releases its full suite of solutions, including databases and messaging systems, ensuring that each deployment is thoroughly tested within a dedicated test environment according to the processes defined by the customer. This approach ensures a balanced and consistent deployment of solutions across various environments.
  3. Phased approach: Components are only containerized when guarantees of deterministic system behavior can be upheld. This phased approach ensures that broadcasters can adopt modern technologies without compromising the integrity of their workflows.

Our approach to solution containerization enables a centrally orchestrated “balanced computing” strategy, in which the different components of a solution can be deployed and activated on any platform or environment (on-premises or cloud-based) according to the level of risk, the performance/cost ratio, the bandwidth, latency and reliability of the available network connectivity, and the human resources available at the venue.

By Graham Rowe, Senior System Architect 

Graham Rowe, Senior System Architect at EVS, has over 25 years of experience in the Broadcast and Post-Production industry. Specializing in large-scale live broadcast production system design, Graham is part of EVS's Innovation team, where he works on the creation of world-class products and technologies renowned for their reliability, resilience, and scalability.

