06 May 2026

How AI is reshaping live production: what NAB 2026 actually proved
At NAB 2026, one point came through clearly across every area of the EVS showcase: AI has moved from experimentation into daily operations.
Across replay, robotics, officiating, and media management, the technology on display was actively supporting live environments, embedded into processes where speed, precision, and reliability are critical. Here’s what that looked like, and what it means for how production teams will work moving forward.
Live production
Enabling operators to focus on storytelling

Replay and highlights workflows are zero-tolerance environments. In live production, every tool needs to perform reliably under pressure, delivering consistent results in real time.

That requirement shapes how EVS is deploying AI in this field. The focus is on increasing operators' creativity while also improving their efficiency and productivity. That means embedding AI directly into existing workflows, so it enhances how operators work rather than adding friction alongside them.

In this context, adoption is driven by trust as much as capability. Operators depend on tools that feel natural to use and that deliver at speed, every time. At NAB 2026, that confidence was clearly visible across the EVS showcase.

One example is XtraMotion. Originally developed to interpolate frames and generate super slow-motion from standard cameras, it has evolved to include advanced deblurring and cinematic enhancement capabilities. Today, it is used daily across sports productions to elevate visual quality, with teams often combining slow-motion and deblurring effects across both live and archived content.

What enables this in a live setting is turnaround speed. Results are delivered in under three seconds at the push of a button, integrating directly into the replay operator’s workflow without interruption or additional steps.
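EVS has not published XtraMotion's internals, but the underlying idea of frame interpolation, synthesizing intermediate frames between captured ones to stretch time, can be sketched in a few lines. In this hypothetical sketch a plain cross-fade stands in for the learned motion estimation a real system would use, and all function names are illustrative:

```python
import numpy as np

def interpolate_midframe(frame_a: np.ndarray, frame_b: np.ndarray,
                         t: float = 0.5) -> np.ndarray:
    """Blend two frames at time t in [0, 1] to synthesize an in-between frame.

    Real super slow-motion relies on learned motion estimation; a simple
    cross-fade only illustrates where the extra frames slot in.
    """
    if frame_a.shape != frame_b.shape:
        raise ValueError("frames must share a resolution")
    blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return blended.astype(frame_a.dtype)

def to_super_slow_motion(frames: list[np.ndarray], factor: int = 2) -> list[np.ndarray]:
    """Insert factor - 1 synthesized frames between each captured pair."""
    out: list[np.ndarray] = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for k in range(1, factor):
            out.append(interpolate_midframe(a, b, k / factor))
    out.append(frames[-1])
    return out
```

Played back at the original frame rate, the padded sequence runs at half speed for `factor=2`, which is the effect the operator triggers with a single button press.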

Automatic tracking is another area where EVS’s use of AI is having a tangible impact. Operators typically invest significant attention in tracking action and maintaining composition in real time.

With LSM-VIA Zoom, once a subject is selected, whether a player, the ball, or any moving element, the system maintains framing automatically. Operators can then focus on editorial intent, deciding what is shown and how it shapes the story.

At the same time, multi-platform delivery continues to accelerate. Many broadcasters are producing content simultaneously for traditional 16:9 feeds and vertical formats designed for social platforms. 

LSM-VIA Zoom’s AI-driven reframing enables instant adaptation between formats, allowing content to be prepared for multiple outputs in parallel and significantly accelerating publishing workflows.
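As a rough illustration of what aspect-ratio reframing involves, the sketch below computes a full-height 9:16 crop window inside a 16:9 frame, centered on a tracked subject and clamped to the frame edges. It is a deliberate simplification of an AI reframing engine, and the function name and parameters are hypothetical:

```python
def vertical_crop(frame_w: int, frame_h: int, subject_x: int,
                  target_ratio: float = 9 / 16) -> tuple[int, int]:
    """Return (left, right) bounds of a full-height crop with the target
    aspect ratio, centered on the tracked subject's x position and clamped
    so the window never leaves the frame."""
    crop_w = round(frame_h * target_ratio)
    left = subject_x - crop_w // 2
    left = max(0, min(left, frame_w - crop_w))  # keep the crop inside the frame
    return left, left + crop_w
```

For a 1920x1080 feed this yields a 608-pixel-wide window that follows the subject, which is essentially what lets a horizontal replay be republished vertically without manual re-editing.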

Each capability delivers clear value on its own. Together, they point to a broader shift in production models, where operators dedicate more attention to storytelling while AI manages execution.

Robotics
Intelligence that moves with the production

NAB 2026 marked the introduction of T-Motion, EVS’s new media production robotics solution family, and highlighted how AI is reshaping both operations and safety in live production environments.

Traditional robotic systems depend on fixed paths and pre-programmed moves, which require careful coordination and leave limited room to adapt once a production is underway. However, production spaces are constantly in motion. Jibs swing, operators reposition, and crew circulate across the floor, all while cameras remain live.

T-Motion addresses this complexity with AI-driven path planning. Each movement is calculated in real time, allowing the system to adapt seamlessly to its surroundings while staying fully aware of on-air cameras and production constraints. This continuous awareness also enhances on-set safety, reducing the risk of collisions and enabling smoother interaction between robotic systems and crew.
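T-Motion's real-time planner is proprietary, but the core idea, searching for a collision-free route around occupied floor space, can be pictured with a simple grid search. This static breadth-first sketch is a stand-in under stated assumptions: a production planner would replan continuously as crew and cameras move, and everything here, including the function name, is illustrative:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a floor grid where True cells are blocked
    (crew, live cameras, set pieces). Returns a list of (row, col) waypoints
    from start to goal, or None if no free route exists."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # walk the predecessor chain back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc] and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable with current obstacles
```

Rerunning a search like this whenever the obstacle map changes is, in miniature, what "each movement is calculated in real time" implies.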

Automatic reframing is another practical application. Tracking subjects manually is an intuitive process; operators adjust instinctively, keeping compositions intact in real time. Now with AI, that same natural responsiveness is brought to robotic systems: a single input sets subject tracking in motion, using facial recognition and object detection to maintain precise framing across a variety of shot types, from single presenters to over-the-shoulder angles and dynamic two-shots.

This approach significantly reduces the need for continuous manual intervention. Multi-camera operations become more efficient, with consistent framing and movement maintained throughout the production.

Robotic systems are evolving into responsive tools that align closely with the realities of live environments, enhancing efficiency, supporting creativity, and reinforcing safety on set.

Officiating
Decision support for the moments that matter

Sports officiating has always been the domain where AI meets the harshest scrutiny. A wrong call will become a headline. That level of pressure defines the requirements for AI in this space: absolute clarity and speed while preserving the authority of the referee.

Within Xeebra, our VAR system, AI focuses on removing friction from the review process without altering the decision-making responsibility. 

Automatic pitch calibration eliminates one of the most time-consuming manual steps in offside analysis, enabling review to begin immediately from the first frame. Live player skeleton recognition takes this further by tracking each player’s body structure in real time and positioning the offside line at the correct anatomical point, reducing time lost in setup while maintaining objectivity.
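To make the skeleton-based offside idea concrete, here is a deliberately simplified sketch: given each defender's 2D keypoints in pitch coordinates, the offside line sits at the foremost body point of the second-last defender. Real systems work from calibrated poses and exclude arms and hands from the relevant body parts; everything below, including the function name, is illustrative only:

```python
def offside_line_x(defender_skeletons: list[list[tuple[float, float]]]) -> float:
    """Given each defender's 2D keypoints, with the defended goal line at
    x = 0, return the x position of the offside line: the body point of the
    second-last defender that is nearest their own goal."""
    # Each defender's relevant position is their keypoint closest to the goal line.
    nearest = sorted(min(x for x, _ in pts) for pts in defender_skeletons)
    if len(nearest) < 2:
        raise ValueError("need at least two defenders (incl. goalkeeper)")
    return nearest[1]  # second-last defender sets the line
```

The value of live skeleton recognition is that these keypoints arrive already positioned, so the review starts from an anatomically placed line instead of a manually dragged one.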

Motion blur removal is the capability that will resonate most with anyone who has seen a key decision undermined by an unclear image. A single press on the touchscreen restores clarity in blurred frames, revealing ball position, contact points, and jersey numbers with immediate precision. This is frame reconstruction rather than interpolation, meaning it enhances visibility without generating or inferring new content. In officiating contexts, that distinction is critical.

AI supports a more efficient and confident review process while preserving full human authority, enabling faster, clearer decisions with greater certainty and less delay.

Content management
From storage to storytelling

At NAB 2026, we demonstrated how AI is redefining content management by being embedded directly into the interface of VIA MAP, our media asset platform. This deployment can run on-prem, which ensures content remains close to the source, preserving data sovereignty and reducing security concerns. The approach also avoids pay-per-use pricing models, enabling broadcasters to process content at scale without the escalating operational costs often associated with AI.

Real-time transcription and translation mean content is searchable the moment it's ingested, whether it's live sports, news sessions, or any other format. Journalists and editors can quickly search for spoken phrases and commentary, enabling fast content turnaround.

AI-powered scene change detection automatically structures the content itself by identifying cuts, fades, and shot transitions. Editorial teams no longer wait for logging to catch up; they can search and access material while the event is still in progress.
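Scene change detection can be illustrated with a classic baseline: flag a cut wherever the grey-level histograms of consecutive frames diverge sharply. This is not EVS's detector, which would also need to handle fades and dissolves, but it shows the kind of per-frame signal such systems build on:

```python
import numpy as np

def detect_cuts(frames: list[np.ndarray], threshold: float = 0.5) -> list[int]:
    """Return the indices of frames that start a new shot, judged by the
    distance between normalized 32-bin histograms of consecutive frames.
    A fixed threshold is a simple baseline; production systems use learned
    detectors to also catch gradual transitions."""
    cuts = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=32, range=(0, 256))
        hist = hist / hist.sum()
        # Half the L1 distance between the distributions lies in [0, 1].
        if prev_hist is not None and 0.5 * np.abs(hist - prev_hist).sum() > threshold:
            cuts.append(i)
        prev_hist = hist
    return cuts
```

Running a detector like this during ingest is what makes material browsable by shot before any human logging happens.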

Match Moments goes further, detecting key events as they happen: goals, fouls, players entering the field. Clips are generated automatically and assembled into sequences ready for near-instant social publishing. What previously required a dedicated highlights team working against the clock now runs in parallel with the live event.

Face recognition and enrichment adds another layer of intelligence. Using a built-in database of thousands of known personalities, the system identifies individuals directly within the content. Teams can also train it with their own image sets, extending recognition to local or organization-specific profiles. Every identified face is stored as timecoded metadata, enabling fast and precise retrieval for journalists and editors.
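The timecoded-metadata idea is easy to picture as a small index: each recognized face is stored against the timecode where it appears, so every appearance can be retrieved instantly. The class below is a hypothetical stand-in for the platform's real metadata store, not its actual API:

```python
from collections import defaultdict

class FaceIndex:
    """Minimal timecoded face-metadata index: detections accumulate per
    recognized name, and editors query by name to jump to each appearance."""

    def __init__(self) -> None:
        self._hits: defaultdict[str, list[str]] = defaultdict(list)

    def add_detection(self, name: str, timecode: str) -> None:
        """Record that `name` was recognized at `timecode`."""
        self._hits[name].append(timecode)

    def appearances(self, name: str) -> list[str]:
        """Return every timecode at which `name` was recognized."""
        return self._hits.get(name, [])
```

Extending recognition with an organization's own image sets simply grows the set of names that can land in an index like this.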

By leveraging AI, EVS’s VIA MAP media asset platform interprets and organizes content, increasing its value throughout the production lifecycle.

What's next
AI that keeps evolving

What comes next builds directly on the same innovation mindset explored in our previous blog, “An insider’s look at EVS’s innovation strategy in the age of AI”: combining long-term vision with rapid, real-world validation.

The developments unveiled at NAB 2026 are not isolated breakthroughs, but part of an ongoing cycle of refinement, experimentation, and deployment. AI is becoming deeply embedded across the EVS ecosystem, evolving with user needs and continuously enriching every layer of production. This reflects EVS’s core philosophy: innovation grounded in practical value while pushing creative boundaries.

As adoption accelerates, these systems are already transforming how teams operate, driving clear gains in efficiency, speed, creativity, and safety, with an impact set to grow as both the technology and workflows continue to mature.
