
Newsroom computer system (NRCS) and editing vendors have already been integrating some AI capabilities into their products for years, with early tools like speech-to-text for automatic transcription and optical character recognition first surfacing a decade ago. And in recent years broadcasters have been pursuing generative AI, albeit within tight guardrails, to provide overworked journalists with assistance in producing and reformatting content for multiple platforms, including mundane tasks like tagging digital content for search engine optimization (SEO).
But the rapid development of agentic AI — autonomous systems which can plan and execute complex tasks without requiring step-by-step human instructions — promises a much more significant impact on day-to-day news production, both in the newsroom and the control room. And insiders say agentic AI may finally help deliver the long-chased dream of true “storycentric” workflows that unite the still-disparate systems and editorial personnel that produce stories for linear newscasts and digital platforms within most broadcast news organizations.
An AI Incubator
Unlocking the power of AI agents through interoperability is the goal of an “IBC Incubator” called “Smart Stories: The Agentic Production Ecosystem.” The project is being led by “champions” Associated Press, NBCUniversal, ITN and BBC with “co-champions” Channel 4, Al Jazeera, Washington Post, Sky and ITV and supporting vendors Shure, EVS, Cuez and Moments Lab.
The group is developing an open standard, the Story Object Module (SOM), aimed at communicating story context between different systems from newsgathering all the way to distribution. It is also developing an associated schema that will allow broadcasters to create AI “skills” that guide agents to automatically perform production functions according to their own unique workflows and editorial standards.
Just as the Media Object Server (MOS) protocol emerged in 1998 as a way for live production equipment from different vendors to communicate with each other to perform key functions like playing out a story from a rundown, SOM is seen by the Incubator team as a way for different agents to communicate with each other about changes in a story and then automate the appropriate response.
“The breakthrough that this group hopes to accomplish is that the various systems involved are getting smarter as the vendors put more AI capabilities into them,” says Brian Hopman, VP & GM of workflow solutions for AP. “What doesn’t exist is a way for those systems to share that intelligence from one to the other.

“We’ve had MOS as an industry, which helps us say, ‘Story A1 has these assets in it. Make sure you play Story A1 and play these assets when it’s on the screen,’” Hopman continues. “But this goes many steps beyond that, because we need the systems to know, ‘Well, what is Story A1? What’s it about?’ If it’s about severe weather that impacted the New York area and we just learned that there’s 50,000 homes without power, not 5,000, how do we make sure all of the systems get updated with new information? How does the newsroom system talk with the graphics platform and the digital CMS? As we become smarter about things, as we learn information, we need it to go everywhere.”
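The SOM schema itself has not been published, so purely to make Hopman’s example concrete, here is a hypothetical Python sketch of what a SOM-style story object carrying that power-outage correction might look like. Every field name here is invented for illustration and none come from the actual specification.

```python
# Hypothetical illustration only -- the real SOM schema is still in
# development, and none of these field names come from the spec.
story_update = {
    "story_id": "A1",
    "slug": "ny-severe-weather",
    "summary": "Severe weather knocks out power across the New York area",
    "facts": {
        # Corrected figure: 50,000 homes without power, not 5,000.
        "homes_without_power": 50_000,
    },
    "revision": 7,
    "updated_at": "2026-04-01T14:32:00Z",
}
```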
The SOM project follows work on successive IBC Accelerators by a core team of news technology experts including ITN CTO Jon Roberts and BBC Solution Lead for Live Production Control Technology Morag McIntosh. That work started before there were any conversations about AI, says Roberts, and initially focused on leveraging “the benefits of software-defined tooling” in live production workflows for the “Gallery-Agnostic Live Production” project, presented at IBC 2023.
That project, which tackled how a top software layer could connect various software tools from different vendors, led the group to focus on two key areas that needed improvement: integration and user interfaces.
“How do we make these tools better interact with each other, and then interfaces,” Roberts says. “How do we do it better? How do we think about the front end differently to allow users to more effectively interact with what is an ever-growing set of tools that they’re presented with?”
AI And The Control Room

The group moved on to a new accelerator project the next year, “Evolution of the Control Room, leveraging XR, Voice, AI and HTML-Based Graphic Solutions,” which “introduced AI to the equation,” Roberts says.
That 2024 project featured a simple AI implementation, integrating the ChatGPT large language model (LLM) into a software-defined live production system to allow the user to control the rundown through voice commands and also to ask questions of the rundown.
“We gave ChatGPT a whole bunch of RAG [Retrieval-Augmented Generation] data to help it to understand how we make TV, so TV concepts, TV terminology, rules of TV, rules of content making,” McIntosh says. “And what that allowed was the user to say things like, ‘When should I move the presenter?’ And the LLM could infer from the data it had in the running order, plus the RAG data we’d given it for its understanding of TV, and then return a result: ‘You should move the presenter during the longest VT [Video Tape, i.e. prerecorded clip]. And the longest VT is…’ and then it could list the VT.”
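Conceptually, that setup follows the standard RAG pattern: retrieve relevant reference material, prepend it and the live rundown data to the user’s question, and let the model reason over both. The Python sketch below is a generic illustration of the pattern, not the project’s code; search_tv_knowledge and llm are invented stand-ins for whatever retrieval store and model (ChatGPT, in this case) were used.

```python
# Generic RAG sketch, not the Accelerator's actual implementation.
# `search_tv_knowledge` and `llm` are stand-ins for whatever vector
# store and model (ChatGPT, in this case) the project used.

def answer_rundown_question(question, rundown, llm, search_tv_knowledge):
    # 1. Retrieve reference notes on how TV is made: concepts,
    #    terminology, rules of TV, rules of content making.
    tv_notes = search_tv_knowledge(question, top_k=5)

    # 2. Combine the retrieved knowledge with the live rundown data.
    prompt = (
        "You are a live-production assistant.\n"
        f"Reference notes:\n{tv_notes}\n"
        f"Current running order:\n{rundown}\n"
        f"Question: {question}"
    )

    # 3. The model can now infer, for example, that the presenter
    #    should be moved during the longest VT, and name that VT.
    return llm(prompt)
```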

There was an agent involved in that project, McIntosh says, but its use was limited to checking the graphics in the rundown for spelling errors. The role of agents dramatically increased with last year’s project, “AI Agent Assistants for Live Production,” which was demonstrated as a proof of concept at IBC 2025. It featured a full agentic system with 12 agents from different vendors and broadcasters, including rundown, graphics, audio and video processing agents and a Google orchestrator agent. Agents talked to other agents through Google’s A2A (Agent-to-Agent) protocol and performed tasks for the user in a live production environment, via an “AI Assistant Director.”
“What that allowed was for things like the user to say, ‘Are there any missing clips in my running order [rundown]?’” McIntosh says. “And the orchestrator could go and check, talk to the running order agent who could find the missing clips, and return the answer that yes, there were. But then the user could say, well, can you create me a clip for the missing item? Then the agent would pull the script from that item and send it off to, effectively, a content discovery agent with the gist of what that story was about. It could then draw a clip out of the archive and push it back into the running order, and then we could even apply a different agent to apply effects on that clip, such as blurring faces.”
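In outline, that flow is a straightforward orchestration pattern: the orchestrator fields the user’s request and delegates subtasks to specialist agents. The Python sketch below illustrates the pattern McIntosh describes; the classes and direct method calls are invented stand-ins, and the real project carried these messages over Google’s A2A protocol rather than in-process calls.

```python
# Simplified sketch of the orchestration pattern McIntosh describes.
# The real project used Google's A2A protocol for agent-to-agent
# messaging; these classes and method names are invented stand-ins.

class Orchestrator:
    def __init__(self, rundown_agent, discovery_agent, effects_agent):
        self.rundown = rundown_agent
        self.discovery = discovery_agent
        self.effects = effects_agent

    def fill_missing_clips(self):
        # Ask the running-order agent which items lack media.
        for item in self.rundown.find_missing_clips():
            # Hand the item's script to a content-discovery agent,
            # which pulls a matching clip out of the archive.
            clip = self.discovery.find_clip(gist=item.script)
            # Optionally route the clip through another agent to
            # apply effects, such as blurring faces.
            clip = self.effects.blur_faces(clip)
            # Push the finished clip back into the running order.
            self.rundown.attach_clip(item, clip)
```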
Storycentric North Star
Joining Roberts and McIntosh in working on last year’s project was NBC News VP of Innovation Alex Bassett, a former ITN colleague of Roberts. NBC has been pursuing storycentric workflows as “a North Star” for years, Roberts says, but found “fragmented systems” to be a big challenge.

To help solve that problem, Bassett’s team at NBC had already developed Sync, a “device-agnostic” app for graphics content creation and playout. The goal with Sync was to be able to mix and match assets and content, Bassett says, and not have to “think about opening seven different apps to control them and play them and deliver them.”
The development of Sync changed as NBC decided it wanted to bring more real-time data into its graphics plug-ins for the newsroom, to handle things like live election results, and then combine those with the day-to-day graphics it put in every news program. That led it to build an end-to-end system, Roberts says, “which could create content in preproduction with the newsroom and the journalists, and then obviously play that out in the live production space.”
Using AI to help manage real-time data was an obvious opportunity, and when NBC joined the accelerator, it used Sync to do proofs-of-concept of AI within the platform for the automatic creation of graphics from financial data and election results. Because NBC owned the tech stack end-to-end, integrating agents and designing the user interface was fairly easy, Roberts says.
A Lingering Problem
While the progress of the 2025 Accelerator was impressive and even earned the project a Broadcast Tech Innovation Award, the group knew there was still a fundamental problem to be solved: automatically communicating story context across the various systems used in the news production process, from newsroom to playout.

As Roberts explains, when a development occurs in a big story today, like a guilty verdict in a high-profile court case, there are about 17 different systems to which that information needs to be communicated, to make sure that the right preproduced assets, such as a “Guilty” graphic, get played and the wrong ones, like “Not Guilty,” don’t. And that process is still left up to people, with no digital audit trail of their work being created.
“I would say every newsroom in the world currently runs on a model where our people, our staff, are the only thing connecting our tools,” Roberts adds. “That’s where an awful lot of knowledge around what is happening on any given story on any given day lives: in the heads of producers and operators, in messaging apps, and siloed in different systems. And we think that for AI to deliver the value that we all think it can and should, we need to figure out how to surface that information to those tools more effectively.”
Therefore, Roberts, McIntosh and Bassett have authored the SOM specification as a way to deliver a mass notification to every system involved in a story, including the content management systems used by digital platforms. Integration could be handled by A2A or other protocols like MCP (Model Context Protocol), an open standard introduced by Anthropic.
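In effect, SOM turns story context into a publish/subscribe problem. The Python sketch below shows that fan-out pattern under those assumptions; the StoryBus class is an invented stand-in, and in practice the transport would be A2A, MCP or another protocol rather than in-process calls.

```python
# Transport-agnostic sketch of the fan-out SOM is meant to enable.
# StoryBus is an invented stand-in; in practice updates would travel
# over A2A, MCP or another protocol, not direct Python calls.

class StoryBus:
    def __init__(self):
        self.subscribers = []  # NRCS, graphics, digital CMS, playout, ...

    def subscribe(self, system):
        self.subscribers.append(system)

    def publish(self, story_update):
        # Every connected system receives the same story context at
        # the same time, leaving a digital audit trail instead of
        # relying on producers to relay changes by hand.
        for system in self.subscribers:
            system.on_story_update(story_update)
```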
Their work is currently being reviewed by the other participants in the Incubator, with Version 1.0 hopefully ready by May. After that, various “hackathons” among broadcasters and vendors are planned for the summer to prepare for a live demo of SOM in action at IBC 2026. The trio expects that the schema will be ready for broader use by the industry after that.
“The beauty of the incubator is we’re now getting real feedback from not only newsrooms, but vendors as well, to make sure the schema covers everything,” Bassett says. “Because we could have done this, any one of us could have built this in house and built a custom schema. But then the problem still remains in every newsroom and in the industry, which is, none of these systems have any context. Making it an open standard solves a problem for everybody. And then the implementation is decided by the vendors and the newsrooms, which is exactly how it should be.”
Injecting AI ‘Skills’
Alongside SOM, the group is also developing a specification to inject AI “skills,” which are reusable modules (stored in a skill.md file) containing instructions, scripts and resources that direct generic agents to perform specialized tasks within an organization. Skills are a way to customize AI agents for broadcasters, such as creating a graphic to a particular network’s specifications.
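The skills specification is still being drafted, so the sketch below is only a rough illustration of what a broadcaster-owned skill might look like, with the skill.md contents embedded as a Python string for brevity. The fields, rules and parsing are all invented for the example.

```python
# Rough illustration of a broadcaster-owned skill; the skill.md
# fields, rules and parsing below are invented, since the group's
# actual skills specification is still in development.

EXAMPLE_SKILL_MD = """\
---
name: verdict-graphic
description: Build a verdict lower-third to this network's spec.
---
## Instructions
1. Use the network's approved fonts, colors and safe margins.
2. "Guilty"/"Not Guilty" graphics require sign-off from a senior
   producer before any downstream system is notified.
3. Log every generated graphic for the editorial audit trail.
"""

def load_skill(skill_md: str) -> dict:
    """Split a skill file so a generic agent can follow it."""
    front_matter, _, instructions = skill_md.partition("## Instructions")
    return {"meta": front_matter.strip(),
            "instructions": instructions.strip()}
```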
“Skills are owned by the newsrooms, like the way NBC tells stories is different to another newsroom,” Bassett says. “And the way we want to have governance and audit or automation or editorial flags and gates, gets to be decided by our newsroom and our implementation. All those notifications are happening in real time, and we get to decide where that gets implemented and in what production methodology.
“If, for example, the court case changes and goes to guilty, that could be a notification to a system, and then a senior producer or a senior newsroom employee can then approve that, and then all downstream systems are notified,” he continues. “We can keep the human in the loop and engaged and protect our brand’s integrity with that storytelling. Again, we do it one way, and other newsrooms may do it another way.”
While the Incubator is pursuing the use of agents to automate tasks currently performed by humans, Roberts emphasizes that the work is aimed at helping, not replacing, journalists.
“What we think this vision allows for is giving them back the time they currently spend effectively being a human API between disconnected systems,” Roberts says. “Nobody went into journalism to copy and paste metadata or content from one platform to another. And that is where an awful lot of inefficient time is currently spent in newsrooms at a time when we are all trying to service more platforms with more content to more audiences than ever.”
It All Starts With A Prompt
Moments Lab has already found success with its “multimodal AI” technology helping big station groups like Sinclair and Hearst better organize and access decades of archive content. The company is now gaining traction in day-to-day production as news broadcasters look to seamlessly mix fresh and archive content to help their journalists tell better stories, says Moments Lab CEO Philippe Petitpont.

He says the company will be announcing at NAB a deal with a major U.S. “news and financial broadcaster” that is doing just that with Moments Lab’s agentic AI tools.
“Basically, we’re helping digital teams to create new stories for social media and for their CMS by mixing the fresh content, something that just happened, with something that might be older,” Petitpont says.
When you look at how news organizations are working today, he says, most of the time the work is segmented between fresh content on breaking news and a more in-depth follow-up story that might tap archive content to show what a public figure said 10 years ago. And agentic AI is making it much easier to extract older content within editing tools and also to suggest ways to create a story.
“It’s way easier for them to build new stories that were not possible to do before,” Petitpont says. “In this example about the customer we’re going to announce together, I’m able to, in five minutes, build a story about the evolution of the Apple stock after a specific set of events. And this thing was not really possible before.”
Moments Lab released a video discovery agent several months ago. While the general availability release is timed for NAB, nearly all of its customers are already using it, and Petitpont says it is changing the way they work.
“The way they are starting their experience now is no longer by searching,” Petitpont says. “It’s by prompting the agent, saying, ‘I’m working on this story about the B-52 bomber, and I would like to compare the last time it was used in a war zone to what’s happening in Iran right now.’ And because the agent has access to the internet, it’s possible to craft a narrative and to identify very quickly all the elements that might be interesting, both in the archive but also in the live feed. It’s like having an army of people that is going to help the editor do this way faster than before.”
Varying Global Appetites
Octopus Newsroom integrated an AI assistant into its platform several years ago, allowing the NRCS to act as a proxy between the newsroom and third-party AI tools. The AI assistant allows journalists to easily access popular cloud-based LLMs like Perplexity, ChatGPT, Claude and Gemini to help with mundane tasks, simply by clicking a button on the interface. The most common applications for Octopus’ AI assistant today are to perform tagging in stories and to help with script preparation, particularly in rephrasing a script for multiple versions.
The company has also worked to support local LLMs that customers are building themselves by feeding them their own data and running them on their own compute. Concerns about security and intellectual property are often a driver of the creation of local LLMs, says Octopus COO and Sales Director Gabriel Janko, as well as the costs associated with heavy use of public LLMs.
While Octopus was an early mover in adopting AI, its customers aren’t yet running any agentic workflows. The Czech Republic-based company, which does about 80% of its business in Europe and Asia and the rest in North America, sees significant differences in how customers think about AI overall between different regions, with customers in Asia being the most aggressive in using new AI tools.

“In some parts of the world, they are moving very fast, and basically don’t care about security, don’t care about the risks involved with AI,” Janko says. “In other parts of the world, there are very strict rules, and they are very careful and scared of letting AI into the newsrooms at all. Of course, most of the customers are somewhere in between. Our approach is offering different tools in the system.”
In that vein, Janko wants Octopus to be ready with an agent if customers want it in the future. He plans to use NAB to gauge customer demand.
“I think we will have quite a lot of discussions on this topic at NAB to understand if the market is there yet, if they are ready for that,” Janko says. “Because if, for example, we introduced an agent here in Octopus right now, I could just tell the agent to build the rundown for me. That would mean that I give full control to the agent, whatever it will do in Octopus to build the rundown, copy and paste text, go on-air and so on. You know, it can be potentially dangerous.”
Avid’s Big AI Push
Editing, newsroom and content management vendor Avid first incorporated AI tools like speech-to-text with Microsoft’s “cognitive services” back in 2016. The company has gradually layered in more AI capabilities as those tools continued to improve over the years but has ramped up its AI efforts since its 2024 acquisition of Norwegian storycentric workflow specialist Wolftech. That includes the September 2025 launch of Content Core, a cloud-based “content data platform” that unifies asset identity, ingest, storage, metadata, orchestration and rights information into a single intelligent layer and provides semantic search of all the media within an enterprise, regardless of where it is stored.

“What that all plays into, particularly in a news context, is trying to make it easier for people to surface content that’s relevant to what they’re actually looking for,” says Craig Wilson, principal enterprise specialist, broadcast, for Avid.
At NAB, Avid will show Content Core integrated with all of its news tools, including the MediaCentral content management system, Wolftech News, the legacy iNEWS newsroom computer system, Avid NEXIS storage and Stream IO ingest and playout, to provide an end-to-end workflow for story planning, production and publishing across both broadcast and digital platforms.
New AI capabilities that will be demonstrated include the ability to identify “trending items” in a newsroom, Wilson says, as well as recommending content based on a journalist’s usage, so that a sports reporter gets the baseball scores instead of political stories.
While Content Core is still relatively new, Avid’s eventual goal is for it to “be that single pane of glass experience which will give you access to all of these things,” Wilson says. “And so implementing Wolftech News as part of it, and other services, is part of what the plans are over the course of the next few months.”
Avid’s other big AI news is a multi-year strategic partnership with Google Cloud where it will integrate Google’s generative and agentic AI technology into its creative tools. That includes embedding Google’s Gemini models and Vertex AI platform directly into both Avid Content Core and the company’s flagship Media Composer editor, allowing production teams to query content using natural language and enabling a range of agentic AI workflows.
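Avid hasn’t published its integration code, but at its simplest, a natural-language query against a Gemini model can be a single call with Google’s google-genai Python SDK, as in the sketch below. The shot log, prompt and model choice are illustrative, not Avid’s.

```python
# Minimal sketch of a natural-language content query against Gemini
# using Google's google-genai SDK (pip install google-genai).
# This is not Avid's integration; prompt and model are illustrative.
from google import genai

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

shot_log = (
    "0001 courthouse exterior, dusk\n"
    "0002 jury foreperson reading verdict\n"
    "0003 crowd outside courthouse, night\n"
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Which clips in this shot log show the courthouse "
             "exterior?\n" + shot_log,
)
print(response.text)
```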
To make searching for content easier, Content Core is now using Google’s Vision Warehouse API to provide enhanced media metadata. The system analyzes scenes and then provides a metadata layer back.
As for Media Composer, the system has had an SDK that allows third-party extensions into the editor for several years. But at NAB, Avid will be showing a new extension panel that allows a user to access Gemini directly within Media Composer, giving an editor the ability to use Gemini to generate B-roll footage or perform automated tagging of content.
Avid has worked with Google Cloud for years as one of several hyperscalers used by customers, including AWS and Microsoft Azure. But the new deal does reflect a change in thinking as AI development continues to accelerate, Wilson says.
“Over the course of the last few months in particular, there’s been a move to look at ways of how we can integrate AI services more broadly within our product set,” Wilson says. “Not just the ones that we kind of develop and deliver ourselves, but also taking advantage of external ones.”