For news media and news agencies across Europe, artificial intelligence is already present in newsrooms, sometimes quietly embedded in tools, sometimes actively tested, and often approached with care. Through its work over the past year, TEMS has gained a clearer understanding of how news organisations are engaging with AI and what they need in order to do so responsibly.

In practice, AI is most often used to support journalistic workflows rather than replace them. News organisations rely on AI for tasks such as transcription and speech-to-text conversion, translation, metadata enrichment, archive management, and content discovery. These tools help journalists work faster and reach wider audiences, particularly across languages and formats. Yet these uses often remain fragmented, tied to external platforms, and disconnected from broader data strategies. As a result, newsrooms hold valuable content that is not always structured, traceable, or ready to be leveraged as a strategic asset.

Alongside this pragmatic adoption comes a strong sense of editorial and ethical responsibility. Concerns around source protection, intellectual property, data privacy, and the risks of hallucination and unintended bias are central to decisions about AI adoption. Many organisations have introduced internal rules or restrictions, often aligned with the GDPR and anticipating obligations under the EU AI Act. For news media, maintaining trust with sources and audiences is non-negotiable.

Levels of readiness vary widely. Some newsrooms have room to experiment, while others face constraints that limit how far they can go beyond essential, task-based uses. Yet all face similar questions: how to use AI without losing control over data, content, and editorial standards; how to ensure that decades of verified reporting, photos, and videos are not undervalued or misused in an AI-driven ecosystem; and how to participate in emerging markets without compromising independence or losing sight of the human dimension.

TEMS provides a structured, European approach that helps news organisations turn verified reporting into well-documented, high-quality assets that remain under their control. It supports transparent rights documentation, verified source attribution, and trust flags that make content more discoverable and valuable across sectors. It also strengthens collective negotiating power with AI companies and platforms, reducing the complexity individual outlets face when navigating licensing, rights, and compliance alone.

As AI reshapes how information is produced, verified, and distributed, shared frameworks grounded in trust and transparency become increasingly important. TEMS aims to support news media and agencies as they navigate this transition with the same principles that guide their work every day: accuracy, independence, and public trust.