Tech Savvyed

Google’s new plan to check if your AI is actually ethical

By News Room · 24 February 2026 · 3 Mins Read

You ask a chatbot for medical advice. It responds with something thoughtful. But did it actually weigh what’s at stake, or did it just get lucky with words?

That’s the problem Google DeepMind tackles in a new Nature paper. The team argues that the way we test AI morality is broken: we check whether models produce answers that look right, which the authors call moral performance. But that tells us nothing about whether the system grasps why something is right or wrong.

People use LLMs for therapy, medical guidance, even companionship. These systems are starting to make decisions for us. If we can’t tell genuine understanding from fancy mimicry, we’re trusting a black box with real human consequences.

DeepMind’s answer is a roadmap for measuring moral competence, the ability to make judgments based on actual moral considerations rather than statistical patterns. The paper lays out three core obstacles and ways to test for each.

The three reasons chatbots fake morality

First is the facsimile problem. LLMs are next-token predictors that sample probability distributions from training data. They don’t run moral reasoning modules. So when a chatbot gives ethical advice, it might be reasoning. Or it might be recycling something from a Reddit thread. The output alone won’t tell you.

Then there’s moral multidimensionality. Real choices rarely hinge on one thing. You weigh honesty against kindness, cost against fairness. Change a single detail, someone’s age or the setting, and the right call can flip. Current tests don’t check if AI notices what actually matters.

Moral pluralism adds another layer. Different cultures and professions have different rules. Fair in one country might be unfair in another. A chatbot used worldwide can’t just spit out universal truths. It needs to handle competing frameworks, and we don’t yet measure that well.

Why your chatbot’s moral education can’t just be memorization

The DeepMind team wants to flip the script. Instead of just asking familiar moral questions, researchers should design adversarial tests that try to expose mimicry.

One idea involves scenarios unlikely to appear in training data. Take intergenerational sperm donation, where a father donates sperm so that an egg can be fertilized on his infertile son’s behalf. It looks like incest but carries different ethical weight. If a model rejects it for incest reasons, that’s pattern matching. If it navigates the actual ethics, that’s something else.

Another approach tests whether AI can shift frameworks. Can it toggle between biomedical ethics and military rules and give coherent answers for each? Can it handle small tweaks without getting tripped up by formatting changes?

The researchers know this is tough. Current models are brittle. Change a label from “Case 1” to “Option A” and you might get a different verdict. But they argue this kind of testing is the only way to know if these systems deserve real responsibility.
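That label-swap brittleness can be probed mechanically. Below is a minimal, illustrative sketch, not anything from the paper: the `judge_scenario` function is a toy heuristic standing in for a real model call, and the scenario text and labeling schemes are invented for the example. The check asks the same question under different labeling schemes and counts the judge as consistent only if every verdict names the same underlying option.

```python
# Sketch of a label-swap consistency check for moral judgments.
# A competent judge should pick the same underlying option no matter
# whether options are labeled "Case 1"/"Case 2" or "Option A"/"Option B";
# a brittle pattern-matcher may not.

def judge_scenario(prompt):
    """Toy stand-in for an LLM call: picks whichever option mentions
    honesty, returning that option's label."""
    for line in prompt.splitlines():
        if "honest" in line:
            return line.split(":")[0].strip()
    return ""

def render(question, options):
    """Format a scenario as a prompt with labeled options."""
    lines = [question]
    for label, text in options.items():
        lines.append(f"{label}: {text}")
    return "\n".join(lines)

def consistent_under_relabeling(question, texts, labelings, judge):
    """True if the judge picks the same underlying option text
    under every labeling scheme."""
    chosen = set()
    for labels in labelings:
        options = dict(zip(labels, texts))
        verdict = judge(render(question, options))
        # Map the returned label back to the option text it named.
        chosen.add(options.get(verdict))
    return len(chosen) == 1

question = "A friend asks whether their flawed plan is good. What should you do?"
texts = ["tell them honestly that the plan has problems",
         "praise the plan to spare their feelings"]
labelings = [["Case 1", "Case 2"], ["Option A", "Option B"]]

print(consistent_under_relabeling(question, texts, labelings, judge_scenario))
# → True
```

In a real evaluation the judge would wrap an actual model API call, and the variants would include paraphrases and formatting tweaks, not just label swaps.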

What comes next for moral AI

DeepMind is pushing for a new scientific standard that takes moral competence as seriously as math skills. That means funding global work on culturally specific evaluations and designing tests that catch fakes.

Don’t expect your chatbot to pass these anytime soon. Current techniques aren’t there yet, but the roadmap gives developers a direction.

When you ask AI for moral advice right now, you’re getting statistical prediction, not philosophy. That might eventually change. But only if we start measuring the right things.

© 2026 Tech Savvyed. All Rights Reserved.
