Artificial intelligence is no longer a future trend for journalism. It is already embedded in how news is produced, distributed and consumed.
A recent webinar by the Reuters Institute explored what we currently know about AI’s impact on newsrooms, audiences and the wider information ecosystem. The picture that emerges is not one of simple disruption but of a complex transition to a new age, akin to the arrival of the World Wide Web in the 1990s, in which opportunity and risk develop side by side.
This blog post summarises key findings and conversations from the webinar. You can watch the webinar in its entirety below or by clicking here.
The use of AI is rising among the public. According to Reuters Institute research, six in ten people report having used an AI tool, and weekly usage has doubled in the space of a year (from 18% to 34%), with younger people more likely to use AI weekly. One in five people use ChatGPT on a weekly basis, making it the most widely used standalone tool. Most AI tools are also more trusted than they are distrusted.
When it comes to news, relatively few people use AI to access news directly, although this too has doubled (from 3% to 6%). Worldwide, younger people (under 25) are more likely to use AI to access news (15%). Most commonly, AI is used to search for the latest news, while younger audiences are more likely to use it to gain context on news items.
People encounter AI more often through search responses, such as Google’s AI Overviews, than through standalone chatbots. Around half trust these responses, and one-third report that they often click through to sources. For more on how AI is changing the nature of search, you can read our piece here.
There is a comfort gap in how AI use is perceived across different sectors. Overall, AI is seen as having a significant impact across many sectors, but the level of trust people have in those sectors using AI responsibly varies. When it comes to the news, attitudes are more cautious: people expect AI to make news cheaper and more up-to-date, but less trustworthy. Audiences are also more comfortable with AI being used in background tasks than in content creation. Only a minority believe that journalists check AI output, and few people report seeing or using AI features within news products.
The webinar highlighted several issues in how AI is covered in the media.
AI is often framed as something “magical”, rather than as a system that can be analysed like any other technology. Coverage also tends to rely on jargon or focus on outcomes without explaining how the technology works, making it harder for audiences to understand.
There is also a challenge in accessing data: much of the available information comes from the companies developing AI, which disclose only as much as they choose to. Independent reporting on issues such as environmental impact, bias and societal effects remains limited.
As a result, key aspects of AI, including its environmental cost and how models evolve over time, receive relatively little scrutiny.
More balanced coverage would require greater collaboration across newsrooms and disciplines, as well as more independent research. It would also mean treating AI less as a standalone topic and more as something that cuts across multiple beats, from politics and economics to climate and society.
According to Reuters Institute research, AI is already widely used within news organisations, particularly for background tasks. Journalists use it for transcription, translation, summarisation, research and idea generation. In the UK, more than half of journalists (56%) report using AI on a weekly basis.
Despite this, attitudes remain cautious. A majority of journalists see AI as a potential threat rather than an opportunity. They are concerned about AI’s impact on public trust and the originality and accuracy of content.
This cautious approach is reflected in how AI is deployed. In many organisations, AI is prohibited in content generation, while being encouraged in background processes. This distinction is important: as noted earlier, audiences are more comfortable with AI assisting journalism than replacing it.
One example of this approach is The Guardian. Rather than prioritising public-facing AI features, it has focused on internal training and editorial principles, according to Chris Moran, Editorial Lead on Generative AI. It has placed emphasis on having the right people to evaluate new tools and developments and determine what value, if any, they could bring.
At the Guardian, AI is used to support research and analysis, for example, in identifying patterns across large datasets or exploring its archive. At the same time, the organisation has avoided launching chatbots for readers, emphasising transparency, accuracy and the value of its journalism. One new public-facing feature that is currently in A/B testing is an AI-powered tag page that suggests articles from its archive based on the tagged topic, providing additional context while also promoting older material.
While much of the current conversation focuses on efficiency, one of the most significant opportunities lies in investigative journalism.
AI enables journalists to analyse large datasets, process satellite imagery and identify patterns that would otherwise be impossible to detect. Tasks that once took months or years can now be completed much faster, or have become feasible for smaller teams with limited resources.
For example, in Anthropic's study, 81,000 qualitative research interviews were conducted via a chatbot interface over the course of a week, spanning 159 countries and 70 different languages.
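To make the dataset point concrete, here is a minimal sketch of what AI-assisted triage of a large document collection might look like for a small team. It is illustrative only: it assumes the OpenAI Python SDK and an API key in the environment, and the model name, prompt, folder name and topic are hypothetical placeholders, not tools discussed in the webinar.

```python
# A minimal, illustrative sketch: using a language model to flag which
# documents in a large collection are worth a journalist's attention.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the model, prompt, folder and
# topic below are hypothetical placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()


def is_relevant(text: str, topic: str) -> bool:
    """Ask the model for a YES/NO judgement on a single document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer YES or NO only."},
            {
                "role": "user",
                # Truncate long documents to keep each request small.
                "content": f"Does this document discuss {topic}?\n\n{text[:4000]}",
            },
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")


# Scan a folder of text files and keep the ones flagged for human review.
leads = [
    path
    for path in Path("documents/").glob("*.txt")
    if is_relevant(path.read_text(errors="ignore"), "offshore accounts")
]
print(f"{len(leads)} documents flagged for review")
```

Crucially, a flagged document is a lead, not a finding: every output of a pipeline like this still needs the human verification discussed below.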
However, these capabilities come with new responsibilities. AI outputs must be verified, the code it produces must be understood, and data sources must be scrutinised. The technology expands what is possible, but it does not remove the need for human judgement and oversight.
Perhaps the most urgent challenge discussed in the webinar is the growth of AI-driven misinformation.
The ability to generate convincing text, images and audio at scale allows small groups to spread misleading content to large audiences. Tai Nalon, co-founder of the fact-checking organisation Aos Fatos, reported that claims involving AI-generated misinformation increased by 70% in a single year, and that this AI-generated content reached 32 million views and 2.1 million interactions on social media in Brazil.
The spread of AI-generated material creates additional challenges for fact-checkers. Audience questions about obviously AI-generated content consume time that could be spent on more consequential work. A further challenge is that most investigations into AI-led misinformation campaigns have been conducted in English. This is a problem because many campaigns, including those linked to actors such as Russia, are run in smaller languages such as Catalan.
The large-scale nature of the problem raises concerns about long-term trust. If audiences become uncertain about what is real, scepticism can shift into cynicism, with broader consequences for democracy and democratic institutions.
However, AI is not solely the enemy. While AI accelerates the production of false information, it also offers tools to detect and analyse misinformation on a scale beyond what even a large newsroom could handle. Fact-checkers are already using AI to monitor narratives, identify harmful claims and prioritise responses. For example, Aos Fatos is developing a tool designed to fact-check live video by providing context to audiences in real time.
Alongside these tools, broader responses are also needed. To counter the spread of misinformation, media and political literacy should be taught from an early age, by schools and the media alike. To reach people with low trust in the news media, collaborating with influencers and community figures who already have a rapport with those audiences may also be effective.
A recurring theme throughout the webinar was that AI is not just a technological issue, but a societal one. Research into how AI affects mental health or social skills is in its infancy. Understanding AI’s environmental impact currently relies heavily on disclosures from technology companies. There is also limited visibility into how models evolve over time and how those changes affect different communities. There is still much we do not know about AI, but its adoption is already well underway.
The key question is therefore not whether AI will transform the world, but who controls that transformation. At present, much of the power lies with large technology companies that control the models, data and infrastructure behind AI systems. Their business models, incentives and levels of transparency shape how the technology evolves.
At the same time, there is growing public demand for greater transparency, clearer regulation and stronger accountability. This includes expectations around labelling AI-generated content, protecting fundamental rights and ensuring that systems are used responsibly.
Yet legislation has struggled to keep pace with rapidly developing technology. Governments operate within election and budget cycles that are often out of sync with technological change. Regulation must also balance freedom of speech with the need to limit harm. However, without sufficient regulation, power is effectively left to technology companies to set the rules themselves.
For publishers and news organisations, the implications are practical rather than theoretical.
AI is already part of the workflow, whether officially integrated into the newsroom or used informally by individual members of staff. The challenge is not whether to use it, but how to use it responsibly and strategically.
This includes identifying use cases where AI adds clear value, establishing guidelines for its use, and maintaining transparency with audiences. It also requires recognising the limits of the technology. AI can support journalism, but it cannot replace the role of editorial judgement, verification and accountability.
At the same time, media organisations need to consider their position in a changing distribution landscape, where AI systems increasingly mediate access to information.
The current moment is best understood as a transition phase.
AI is already reshaping journalism, but its long-term impact remains uncertain. The technology continues to evolve, public expectations are still forming, and regulatory frameworks and research are still catching up.
What is clear is that reliable, verified information becomes more important as the volume of AI-generated content increases. The news media are well placed to meet that need, but declining trust in the news in many countries makes this more difficult.
The challenge is how to adopt new tools at scale while maintaining human oversight, verification and public trust, and without compromising journalistic values or content quality.