Meta Reportedly Planning to Replace Human Reviewers With AI for Risk Assessment

By News Room | 2 June 2025 | 3 Mins Read

Meta is reportedly planning to shift a large portion of the risk assessments for its products and features to artificial intelligence (AI). The Menlo Park-based social media giant is said to be considering letting AI handle approvals of features and product updates, a task that has so far been handled exclusively by human evaluators. The change would reportedly affect the addition of new algorithms and safety features, as well as how content is shared across its platforms, and is expected to speed up the rollout of new features, updates, and products.

According to an NPR report, Meta is planning to automate up to 90 percent of all internal risk assessments. The publication claimed to have obtained company documents that detail the possible shift in strategy.

So far, any new feature or update for Instagram, WhatsApp, Facebook, or Threads has had to go through a group of human experts who assessed how the change would affect users, whether it would violate their privacy, and whether it could harm minors. These evaluations, reportedly known as privacy and integrity reviews, also considered whether a feature could lead to a rise in misinformation or toxic content.

With AI handling the risk assessment, product teams will reportedly receive an “instant decision” after they fill out a questionnaire about the new feature. The AI system is said to either approve the feature or provide a list of requirements that need to be fulfilled before the project can go ahead. The product team then has to verify that it has met those requirements before launching the feature, the report claimed.
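
The report does not describe how such a system would actually be built; the snippet below is only a rough, hypothetical sketch of the questionnaire-to-"instant decision" flow described above. Every name in it (RiskQuestionnaire, Decision, assess) is invented for illustration and is not something Meta has disclosed.

    # Hypothetical sketch of the reported flow: a product team fills out a
    # questionnaire, and the system either approves the feature instantly or
    # returns a list of requirements to satisfy before launch.
    from dataclasses import dataclass, field

    @dataclass
    class RiskQuestionnaire:
        feature_name: str
        touches_minors: bool
        changes_content_sharing: bool
        collects_new_user_data: bool

    @dataclass
    class Decision:
        approved: bool
        requirements: list = field(default_factory=list)

    def assess(q: RiskQuestionnaire) -> Decision:
        """Return an instant approval or the requirements to meet first."""
        requirements = []
        if q.touches_minors:
            requirements.append("Complete a youth-risk review before launch")
        if q.collects_new_user_data:
            requirements.append("Document data retention and privacy safeguards")
        if q.changes_content_sharing:
            requirements.append("Run an integrity check for misinformation vectors")
        return Decision(approved=not requirements, requirements=requirements)

    # The product team then verifies each requirement is met before launching.
    decision = assess(RiskQuestionnaire("new sharing flow", False, True, False))
    print(decision.approved, decision.requirements)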

As per the report, the company believes shifting the review process to AI will significantly increase the release speed for features and app updates and allow product teams to work faster. However, some current and former Meta employees are reportedly concerned that the gain in speed could come at the cost of rigorous scrutiny.

In a statement to the publication, Meta said that human reviewers were still being used for “novel and complex issues” and AI was only allowed to handle low-risk decisions. However, based on the documents, the report claims that Meta’s planned transition includes letting AI handle potentially critical areas such as AI safety, youth risk, and integrity — an area said to handle items such as violent content and “spread of falsehood.”

An unnamed Meta employee familiar with product risk assessments told NPR that the automation process started in April and has continued throughout May. “I think it’s fairly irresponsible given the intention of why we exist. We provide the human perspective of how things can go wrong,” the employee was quoted as saying.

Notably, earlier this week, Meta released its Integrity Reports for the first quarter of 2025. In the report, the company stated, “We are beginning to see LLMs operating beyond that of human performance for select policy areas.”

The social media giant added that it has started using AI models to remove content from review queues in scenarios where it is “highly confident” that the said content does not violate its policies. Justifying the move, Meta added, “This frees up capacity for our reviewers allowing them to prioritise their expertise on content that’s more likely to violate.”
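
Meta has not disclosed how that confidence check is applied in practice; the following is a minimal, purely hypothetical sketch of the idea of dropping high-confidence non-violating items from a human review queue, with the threshold value assumed for illustration only.

    # Hypothetical illustration of confidence-based queue filtering: items the
    # classifier is highly confident do not violate policy are removed from the
    # human review queue; everything else stays for human reviewers.
    CONFIDENCE_THRESHOLD = 0.98  # assumed value, not disclosed by Meta

    def filter_review_queue(items, violation_probability):
        """Keep only items whose non-violation confidence is below the threshold."""
        keep = []
        for item in items:
            p_violation = violation_probability(item)
            if (1.0 - p_violation) < CONFIDENCE_THRESHOLD:
                keep.append(item)  # still needs a human look
        return keep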
