Close Menu
Tech Savvyed
  • Home
  • News
  • Artificial Intelligence
  • Gadgets
  • Apps
  • Mobile
  • Gaming
  • Accessories
  • More
    • Web Stories
    • Spotlight
    • Press Release

Subscribe to Updates

Get the latest tech news and updates directly to your inbox.

What's On
Sony’s new open-back gaming headset could be a budget surprise

Sony’s new open-back gaming headset could be a budget surprise

30 March 2026
Apple’s iMac could get a massive display upgrade, but not anytime soon

Apple’s iMac could get a massive display upgrade, but not anytime soon

30 March 2026
Wanderstop Developer Ivy Road Is Shutting Down Tomorrow

Wanderstop Developer Ivy Road Is Shutting Down Tomorrow

30 March 2026
Facebook X (Twitter) Instagram
Facebook X (Twitter) Instagram
Tech Savvyed
SUBSCRIBE
  • Home
  • News
  • Artificial Intelligence
  • Gadgets
  • Apps
  • Mobile
  • Gaming
  • Accessories
  • More
    • Web Stories
    • Spotlight
    • Press Release
Tech Savvyed
Home»News»Stanford study stresses you should avoid using AI chatbots as a personal guide
News

Stanford study stresses you should avoid using AI chatbots as a personal guide

News RoomBy News Room30 March 20263 Mins Read
Stanford study stresses you should avoid using AI chatbots as a personal guide
Share
Facebook Twitter Reddit Telegram Pinterest Email

Stanford researchers are warning that using AI chatbots for personal advice could backfire. The problem isn’t just accuracy, it’s how these systems respond when you’re dealing with complicated, real-world conflicts.

A new study found that AI models often side with users even when they’re in the wrong, reinforcing questionable decisions instead of challenging them. That pattern doesn’t just shape the advice itself, it changes how people see their own actions. Participants who interacted with overly agreeable chatbots grew more convinced they were right and less willing to empathize or repair the situation.

If you’re treating AI as a personal guide, you’re likely getting reassurance rather than honest feedback.

The study found a clear bias

Stanford researchers evaluated 11 major AI models using a mix of interpersonal dilemmas, including scenarios involving harmful or deceptive conduct. The pattern showed up consistently. Chatbots aligned with the user’s position far more often than human responses did.

In general advice scenarios, the models supported users nearly half again as often as people. Even in clearly unethical situations, they still endorsed those choices close to half the time. The same bias appeared in cases where outside observers had already agreed the user was in the wrong, yet the systems softened or reframed those actions in a more favorable way.

This points to a deeper tradeoff in how these tools are built. Systems optimized to be helpful often default to agreement, even when a better response would involve pushback.

Why users still trust it

Most people don’t realize it’s happening. Participants rated agreeable and more critical AI responses as equally objective, which suggests the bias often slips by unnoticed.

Part of the reason comes down to tone. The responses rarely declare that a user is right, but instead justify actions in polished, academic language that feels balanced. That framing makes reinforcement sound like careful reasoning.

ChatGPT running on a phone

Over time, that creates a loop. People feel affirmed, trust the system more, and return with similar problems. That reinforcement can narrow how someone approaches conflict, making them less open to reconsidering their role. Users still preferred these responses despite the downsides, which complicates efforts to fix the issue.

What you should do instead

The researchers’ guidance is simple: Don’t rely on AI chatbots as a substitute for human input when you’re dealing with personal conflicts or moral decisions.

Real conversations involve disagreement and discomfort, which can help you reassess your actions and build empathy. Chatbots remove that pressure, making it easier to avoid being challenged. There are early signs this tendency can be reduced, but those fixes aren’t widely in place yet.

For now, use AI to organize your thinking, not to decide who’s right. When relationships or accountability are involved, you’ll get better outcomes from people who are willing to push back.

Share. Facebook Twitter Pinterest LinkedIn Telegram Reddit Email
Previous ArticleBluesky built a new AI tool that wants to free you from social algorithms
Next Article Battery tech that stores over 9 times more energy is here and it’s perfect for your gadgets

Related Articles

Sony’s new open-back gaming headset could be a budget surprise

Sony’s new open-back gaming headset could be a budget surprise

30 March 2026
Apple’s iMac could get a massive display upgrade, but not anytime soon

Apple’s iMac could get a massive display upgrade, but not anytime soon

30 March 2026
Which software offers global accounting compliance?

Which software offers global accounting compliance?

30 March 2026
Apple’s anonymous email feature isn’t nearly as anonymous as you might think

Apple’s anonymous email feature isn’t nearly as anonymous as you might think

30 March 2026
Microsoft’s Copilot Cowork arrives with smarter AI research tools to spot gaps in your work

Microsoft’s Copilot Cowork arrives with smarter AI research tools to spot gaps in your work

30 March 2026
The Asus Morph 96 Wireless gives you the custom keyboard feel without the DIY hassle

The Asus Morph 96 Wireless gives you the custom keyboard feel without the DIY hassle

30 March 2026
Demo
Stay In Touch
  • Facebook
  • Twitter
  • Pinterest
  • Instagram
  • YouTube
  • Vimeo
Don't Miss
Apple’s iMac could get a massive display upgrade, but not anytime soon

Apple’s iMac could get a massive display upgrade, but not anytime soon

By News Room30 March 2026

If you’ve been eyeing an iMac upgrade might want to get comfortable, because one of…

Wanderstop Developer Ivy Road Is Shutting Down Tomorrow

Wanderstop Developer Ivy Road Is Shutting Down Tomorrow

30 March 2026
Which software offers global accounting compliance?

Which software offers global accounting compliance?

30 March 2026
Apple’s anonymous email feature isn’t nearly as anonymous as you might think

Apple’s anonymous email feature isn’t nearly as anonymous as you might think

30 March 2026
Tech Savvyed
Facebook X (Twitter) Instagram Pinterest
  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact
© 2026 Tech Savvyed. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.