AI, News, and You
The news industry is on shaky ground. Local newspapers are in decline worldwide, and major outlets aren't faring much better. News companies have struggled to adapt to the digital age, and the recent surge of generative AI has only made that harder.
A clear example is the lawsuit Daily News v. Microsoft, where the Daily News is leading a group of newspapers in suing OpenAI and Microsoft. The suit alleges that AI tools are being trained on their digital content without consent, diverting traffic away from their websites.
No matter how you consume news, media companies must rapidly adapt to this changing landscape.
Lost in AI Translation
AI’s role in summarizing the news is raising red flags, and chief among these concerns is quality. The BBC ran a study in which ChatGPT, Copilot, Gemini, and Perplexity were asked to summarize 100 news stories. Journalists then reviewed the outputs and found that 51% of the AI-generated answers had significant issues.
Some errors were minor—missing nuance or misrepresenting tone. Others were more serious, with factual inaccuracies, misleading language, or outright hallucinations. In a world where most readers skim headlines or brief summaries, getting these details wrong isn’t just sloppy. It’s dangerous.
When AI Writes, Trust Takes a Hit
Meanwhile, trust between readers and AI remains low. Research from the University of Kansas found that when readers believe AI was involved in producing a news story, they rate it as less credible, even when AI’s contribution isn’t fully understood.
Its applications can vary widely—from generating an entire article, to making simple spelling corrections, to assisting with behind-the-scenes tasks like SEO optimization that readers never see. Rebuilding trust will, in part, require companies to be more transparent about how AI is being used.
This is especially critical since trust in online content has already eroded dramatically with the rise of convincing deepfakes and other AI-generated video and audio.
Hallucinations, Defamation, and the Cost of Being Wrong
Mistakes in journalism are nothing new, but AI errors can be harder to detect and more costly to correct. While not yet widespread, several defamation claims have emerged in recent years over AI hallucinations and errors that wrongly implicated individuals in crimes or controversies, or that attached the wrong image to coverage of a legal trial and damaged reputations, as seen in Ireland.
In the U.S., ChatGPT fabricated a newspaper article falsely linking a law professor to a sexual harassment case. In Australia, an elected mayor considered suing OpenAI after ChatGPT claimed he had served prison time for bribery, when in fact he was the whistleblower who exposed it.
These incidents underscore the serious risks of AI-generated news content. If we want AI to play a responsible role in journalism, we need to invest far more effort into ensuring it generates factual information, not misinformation or disinformation at scale.
The Future for AI and News
Despite the risks, not all developments are cause for alarm. Google recently signed a deal with the Associated Press to deliver up-to-date news through its generative AI chatbot. This kind of partnership—built on trusted data and proper licensing—could be a model for more responsible integration.
And with greater human oversight, better fact-checking, and strong editorial standards, AI could improve (not replace) how we consume news. Meanwhile, some AI companies are working behind the scenes to support journalists, handling tasks like SEO optimization to free up time for actual reporting.
But for that to succeed, we need strong editorial oversight. AI-generated content should be reviewed with the same rigor as any other article. Journalistic standards—accuracy, accountability, context—must apply whether a story is written by a person or assisted by a model.
What’s Next?
Still, the biggest question isn't whether AI can write the news, but whether it should. Journalism is more than assembling facts. It’s about context, judgment, and accountability—qualities that current AI models lack. Until models can reason about the real-world impact of the stories they produce, there’s a limit to how much we should rely on them.
This moment is also an opportunity to reimagine the value of human journalism. In a media landscape overwhelmed by content, trusted voices and clear editorial responsibility matter more than ever. AI can assist, but it can’t replace the role of journalists as sense-makers and storytellers.