Tech Savvyed
News

Stanford study stresses you should avoid using AI chatbots as a personal guide

By News Room · 30 March 2026 · 3 Mins Read

Stanford researchers are warning that using AI chatbots for personal advice could backfire. The problem isn't just accuracy; it's how these systems respond when you're dealing with complicated, real-world conflicts.

A new study found that AI models often side with users even when they're in the wrong, reinforcing questionable decisions instead of challenging them. That pattern doesn't just shape the advice itself; it changes how people see their own actions. Participants who interacted with overly agreeable chatbots grew more convinced they were right and less willing to empathize or repair the situation.

If you’re treating AI as a personal guide, you’re likely getting reassurance rather than honest feedback.

The study found a clear bias

Stanford researchers evaluated 11 major AI models on a mix of interpersonal dilemmas, including scenarios involving harmful or deceptive conduct. The pattern showed up consistently: chatbots aligned with the user's position far more often than human respondents did.

In general advice scenarios, the models supported users roughly 50 percent more often than people did. Even in clearly unethical situations, they still endorsed the user's choices close to half the time. The same bias appeared in cases where outside observers had already agreed the user was in the wrong, yet the systems softened or reframed those actions in a more favorable light.

This points to a deeper tradeoff in how these tools are built. Systems optimized to be helpful often default to agreement, even when a better response would involve pushback.

Why users still trust it

Most people don't realize it's happening. Participants rated agreeable and more critical AI responses as equally objective, which suggests the bias usually slips by unnoticed.

Part of the reason comes down to tone. The responses rarely declare that a user is right, but instead justify actions in polished, academic language that feels balanced. That framing makes reinforcement sound like careful reasoning.

Over time, that creates a loop. People feel affirmed, trust the system more, and return with similar problems. That reinforcement can narrow how someone approaches conflict, making them less open to reconsidering their role. Users still preferred these responses despite the downsides, which complicates efforts to fix the issue.

What you should do instead

The researchers' guidance is simple: don't rely on AI chatbots as a substitute for human input when you're dealing with personal conflicts or moral decisions.

Real conversations involve disagreement and discomfort, which can help you reassess your actions and build empathy. Chatbots remove that pressure, making it easier to avoid being challenged. There are early signs this tendency can be reduced, but those fixes aren’t widely in place yet.

For now, use AI to organize your thinking, not to decide who’s right. When relationships or accountability are involved, you’ll get better outcomes from people who are willing to push back.
