Tech Savvyed
Apps

Meta Reportedly Planning to Replace Human Reviewers With AI for Risk Assessment

By News Room · 2 June 2025 · 3 Mins Read

Meta is reportedly planning to shift a large portion of the risk assessments for its products and features to artificial intelligence (AI). As per the report, the Menlo Park-based social media giant is considering letting AI handle approvals of features and product updates, which have so far been handled exclusively by human evaluators. The change would reportedly affect the addition of new algorithms, new safety features, and how content is shared across its different platforms, and is expected to speed up the rollout of new features, updates, and products.

According to an NPR report, Meta is planning to automate up to 90 percent of all internal risk assessments. The publication claimed to have obtained company documents detailing the possible shift in strategy.

So far, any new feature or update for Instagram, WhatsApp, Facebook, or Threads has had to go through a group of human experts who reviewed how the change would impact users, whether it would violate their privacy, and whether it could harm minors. These evaluations, reportedly known as privacy and integrity reviews, also assessed whether a feature could lead to a rise in misinformation or toxic content.

With AI handling the risk assessment, product teams will reportedly receive an “instant decision” after they fill out a questionnaire about the new feature. The AI system is said to either approve the feature or provide a list of requirements that need to be fulfilled before the project can go ahead. The product team then has to verify that it has met those requirements before launching the feature, the report claimed.
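As described, the reported workflow amounts to a questionnaire-driven triage: the product team submits answers about the feature, and an automated system returns either an approval or a checklist of conditions to satisfy before launch. The sketch below is purely illustrative of that flow under invented assumptions (the RiskQuestionnaire fields, assess_risk rules, and requirement strings are all hypothetical); the report does not describe how Meta's actual system works.

```python
# Purely illustrative sketch of a questionnaire-driven risk triage,
# modelled on the workflow described in the report. All names, fields,
# and rules here are invented assumptions, not Meta's actual system.
from dataclasses import dataclass, field


@dataclass
class RiskQuestionnaire:
    # Hypothetical answers a product team might submit about a new feature.
    feature_name: str
    touches_minors: bool
    changes_content_ranking: bool
    collects_new_user_data: bool


@dataclass
class Decision:
    approved: bool
    requirements: list[str] = field(default_factory=list)


def assess_risk(q: RiskQuestionnaire) -> Decision:
    """Return an 'instant decision': approval, or a list of requirements
    the team must verify it has met before launching the feature."""
    requirements: list[str] = []
    if q.touches_minors:
        requirements.append("Complete a youth-safety review")
    if q.changes_content_ranking:
        requirements.append("Run an integrity/misinformation impact check")
    if q.collects_new_user_data:
        requirements.append("Document a privacy impact assessment")
    return Decision(approved=not requirements, requirements=requirements)


if __name__ == "__main__":
    answers = RiskQuestionnaire(
        feature_name="new-feed-ranking-tweak",
        touches_minors=False,
        changes_content_ranking=True,
        collects_new_user_data=False,
    )
    print(assess_risk(answers))
    # Decision(approved=False, requirements=['Run an integrity/misinformation impact check'])
```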

As per the report, the company believes that shifting the review process to AI will significantly increase the release speed of features and app updates and allow product teams to work faster. However, some current and former Meta employees are reportedly concerned that this gain in speed will come at the cost of rigorous scrutiny.

In a statement to the publication, Meta said that human reviewers were still being used for “novel and complex issues” and AI was only allowed to handle low-risk decisions. However, based on the documents, the report claims that Meta’s planned transition includes letting AI handle potentially critical areas such as AI safety, youth risk, and integrity — an area said to handle items such as violent content and “spread of falsehood.”

An unnamed Meta employee familiar with product risk assessments told NPR that the automation process started in April and has continued throughout May. “I think it’s fairly irresponsible given the intention of why we exist. We provide the human perspective of how things can go wrong,” the employee was quoted as saying.

Notably, earlier this week, Meta released its Integrity Reports for the first quarter of 2025. In the report, the company stated, “We are beginning to see LLMs operating beyond that of human performance for select policy areas.”

The social media giant added that it has started using AI models to remove content from review queues in scenarios where it is “highly confident” that the content in question does not violate its policies. Justifying the move, Meta added, “This frees up capacity for our reviewers allowing them to prioritise their expertise on content that’s more likely to violate.”
