Tech Savvyed

Research warns AI agents can be a self-churning propaganda machine

By News Room | 13 March 2026 | 3 min read

A new study from the University of Southern California warns that AI programs can now run propaganda campaigns without human involvement.

The study asks us to imagine a scenario where two weeks before a major election, thousands of posts flood X, Reddit, and Facebook, all pushing the same narrative and amplifying each other. It might seem like an organic movement created by humans. Instead, it’s a bunch of AI agents running the entire campaign. 

That’s not a hypothetical. It’s the central finding of a new paper accepted for publication at The Web Conference 2026, written by researchers at USC’s Information Sciences Institute.

The findings highlight serious concerns about how bad actors could weaponize AI to flood the internet with misinformation and manipulate public opinion.

How did researchers come to this conclusion?

The researchers built a simulated X-like environment populated by 50 AI agents: 10 acting as influencers and 40 as regular users. Of the 40 regular agents, 20 held views aligned with the influencers' campaign, while the other 20 opposed it. The simulation was built with the PyAutogen library and ran on the Llama 3.3 70B model.
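As a rough illustration of the population described in the study (not the authors' actual code, and the class and function names here are invented for the sketch), the 50-agent split can be set up like this:

```python
# Illustrative sketch of the study's agent population:
# 50 agents total -- 10 influencers, plus 40 regular users split
# evenly between campaign-aligned and campaign-opposing stances.
from dataclasses import dataclass

@dataclass
class Agent:
    agent_id: int
    role: str    # "influencer" or "regular"
    stance: str  # "aligned" or "opposing"

def build_population(n_influencers=10, n_aligned=20, n_opposing=20):
    agents = []
    # Influencers push the campaign, so they are aligned by definition.
    for i in range(n_influencers):
        agents.append(Agent(i, "influencer", "aligned"))
    offset = n_influencers
    for i in range(n_aligned):
        agents.append(Agent(offset + i, "regular", "aligned"))
    offset += n_aligned
    for i in range(n_opposing):
        agents.append(Agent(offset + i, "regular", "opposing"))
    return agents

population = build_population()
```

In the actual study each agent was driven by an LLM prompt rather than a fixed stance label, but the 10/20/20 structure is the same.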

The operators were then tasked with promoting a fictional candidate, with the goal of making the campaign hashtag go viral. What followed was unsettling. The bots didn’t just follow a script. They wrote their own posts, learned what worked, and copied each other’s successful content. 

One AI agent literally wrote that it wanted to retweet a teammate’s post because it had already gained engagement. The researchers later scaled the simulation to 500 agents and observed the same coordinated behaviour.

Lead scientist Luca Luceri put it bluntly: “Our paper shows that this is not a future threat. It’s already technically possible.”

What makes these bots harder to catch?

Traditional bots are predictable. They post the same content, use the same hashtags, and follow the same patterns. It’s as if they’re all following the same script, which makes them easy to spot.

AI-powered bots are different. Since these LLM-powered bots can create their own content, every post is slightly different, and the coordination happens beneath the surface, making the conversations feel genuine. The result is a disinformation campaign that can operate autonomously with minimal human input.

[Image: depiction of a swarm of AI agents]

The most alarming finding was that simply telling the bots who their teammates were produced coordination nearly as strong as when they actively planned together.

The threat doesn’t stop at elections either. Luceri warns that the same playbook could be applied to public health, immigration, and economic policy, anywhere manufactured consensus can shift public opinion.

Can we do anything to stop it?

These kinds of campaigns are difficult for individual users to detect and stop. The researchers put the onus on platforms to stop such coordinated misinformation campaigns by looking beyond individual posts and focusing on how the accounts behave together. 

According to researchers, coordinated re-sharing, rapid mutual amplification, and converging narratives are all detectable signals, even when the content looks genuine. 
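One of those signals, coordinated re-sharing, can be sketched as a simple pairwise-overlap check (a hypothetical illustration, not the researchers' detection code; the function names and threshold are invented for the example):

```python
# Hypothetical behaviour-level detection sketch: rather than scoring
# individual posts, measure how often pairs of accounts re-share the
# same items. Unusually high overlap across many pairs is one of the
# coordination signals described in the article.
from itertools import combinations

def reshare_overlap(a_shares, b_shares):
    """Jaccard overlap between the sets of items two accounts re-shared."""
    a, b = set(a_shares), set(b_shares)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

def flag_coordinated_pairs(shares_by_account, threshold=0.5):
    """Return account pairs whose re-share overlap meets the threshold."""
    flagged = []
    for u, v in combinations(shares_by_account, 2):
        if reshare_overlap(shares_by_account[u], shares_by_account[v]) >= threshold:
            flagged.append((u, v))
    return flagged

# Toy data: two accounts that re-share mostly the same posts,
# and one unrelated account.
shares = {
    "bot_a": ["p1", "p2", "p3"],
    "bot_b": ["p1", "p2", "p4"],
    "user_c": ["p9"],
}
pairs = flag_coordinated_pairs(shares)  # → [("bot_a", "bot_b")]
```

Real platform-scale detectors also weigh timing (rapid mutual amplification) and content similarity (converging narratives), but the core idea is the same: look at joint behaviour, not single posts.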

Frankly, AI has ushered us into a new world, and it’s going to get a lot darker before it gets better.

© 2026 Tech Savvyed. All Rights Reserved.