Tech Savvyed
News

AI chatbots still struggle with news accuracy, study finds

By News Room · 14 January 2026 · 3 Mins Read

A month-long experiment has raised fresh concerns about the reliability of generative AI tools as news sources, after Google’s Gemini chatbot was found to fabricate entire news outlets and generate false reports. The findings were first reported by The Conversation, which conducted the investigation.

The experiment was led by a journalism professor specialising in computer science, who tested seven generative AI systems over a four-week period. Each day, the tools were asked to list and summarise the five most important news events in Québec, rank them by importance, and provide direct article links as sources. Among the systems tested were Google’s Gemini, OpenAI’s ChatGPT, Claude, Copilot, Grok, DeepSeek, and Aria.

The most striking failure involved Gemini inventing a fictional news outlet – examplefictif.ca – and falsely reporting a school bus drivers’ strike in Québec in September 2025. In reality, the disruption was caused by the withdrawal of Lion Electric buses due to a technical issue. This was not an isolated case. Across 839 responses collected during the experiment, AI systems regularly cited imaginary sources, provided broken or incomplete URLs, or misrepresented real reporting.

The findings matter because a growing number of people are already using AI chatbots for news

According to the Reuters Institute Digital News Report, six per cent of Canadians relied on generative AI as a news source in 2024. When these tools hallucinate facts, distort reporting, or invent conclusions, they risk spreading misinformation – particularly when their responses are presented confidently and without clear disclaimers.

For users, the risks are practical and immediate. Only 37 per cent of responses included a complete and legitimate source URL, and summaries were fully accurate in less than half of cases; many were only partially correct or subtly misleading. In some instances, AI tools added unsupported “generative conclusions,” claiming that stories had “reignited debates” or “highlighted tensions” that were never mentioned by human sources. These additions may sound insightful, but they can create narratives that simply do not exist.


Errors were not limited to fabrication

Some tools distorted real stories, such as misreporting the treatment of asylum seekers or incorrectly identifying the winners of major sporting events. Others made basic factual mistakes about polling data or personal circumstances. Collectively, these issues suggest that generative AI still struggles to distinguish between summarising news and inventing context.

Looking ahead, the concerns raised by The Conversation align with a broader industry review. A recent report by 22 public service media organisations found that nearly half of AI-generated news answers contained significant issues, from sourcing problems to major inaccuracies. As AI tools become more integrated into search and daily information habits, the findings underscore a clear warning: when it comes to news, generative AI should be treated as a starting point at best – not a trusted source of record.

© 2026 Tech Savvyed. All Rights Reserved.