Apps

Meta Reportedly Planning to Replace Human Reviewers With AI for Risk Assessment

By News Room | 2 June 2025 | 3 Mins Read

Meta is reportedly planning to shift a large portion of risk assessments for its products and features to artificial intelligence (AI). As per the report, the Menlo Park-based social media giant is considering letting AI handle approvals of features and product updates, which have so far been handled exclusively by human evaluators. The change would reportedly affect the addition of new algorithms and safety features, as well as how content is shared across its different platforms, and is expected to speed up the rollout of new features, updates, and products.

According to an NPR report, Meta is planning to automate up to 90 percent of all its internal risk assessments. The publication claims to have obtained internal company documents that detail the possible shift in strategy.

So far, any new feature or update for Instagram, WhatsApp, Facebook, or Threads has had to go through a group of human experts who reviewed how the change would affect users, whether it would violate their privacy, and whether it could harm minors. These evaluations, reportedly known as privacy and integrity reviews, also assessed whether a feature could lead to a rise in misinformation or toxic content.

With AI handling the risk assessment, product teams will reportedly receive an “instant decision” after they fill out a questionnaire about the new feature. The AI system is said to either approve the feature or provide a list of requirements that need to be fulfilled before the project can go ahead. The product team then has to verify that it has met those requirements before launching the feature, the report claimed.
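For illustration only, the workflow described in the report could be modelled roughly as in the short Python sketch below. Nothing here reflects Meta's actual system: the questionnaire fields, requirement strings, and function names are all assumptions based solely on the reported flow of a questionnaire going in and either an approval or a list of requirements coming out, which the team then attests to before launch.

# Purely illustrative sketch of the reported review flow; not Meta's system.
from dataclasses import dataclass, field

@dataclass
class ReviewDecision:
    approved: bool
    requirements: list[str] = field(default_factory=list)  # empty if approved outright

def automated_risk_review(questionnaire: dict) -> ReviewDecision:
    # Hypothetical stand-in for the AI system's "instant decision".
    requirements = []
    if questionnaire.get("handles_minor_data"):
        requirements.append("Complete youth-safety review before launch")
    if questionnaire.get("changes_content_ranking"):
        requirements.append("Run integrity check for misinformation amplification")
    return ReviewDecision(approved=not requirements, requirements=requirements)

def launch_allowed(decision: ReviewDecision, attested: set) -> bool:
    # The product team verifies it has met every listed requirement before shipping.
    return decision.approved or all(req in attested for req in decision.requirements)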

As per the report, the company believes shifting the review process to AI will significantly increase the release speed of features and app updates and allow product teams to work faster. However, some current and former Meta employees are reportedly concerned that the gain in speed will come at the cost of strict scrutiny.

In a statement to the publication, Meta said that human reviewers were still being used for “novel and complex issues” and AI was only allowed to handle low-risk decisions. However, based on the documents, the report claims that Meta’s planned transition includes letting AI handle potentially critical areas such as AI safety, youth risk, and integrity — an area said to handle items such as violent content and “spread of falsehood.”

An unnamed Meta employee familiar with product risk assessments told NPR that the automation process started in April and has continued throughout May. “I think it’s fairly irresponsible given the intention of why we exist. We provide the human perspective of how things can go wrong,” the employee was quoted as saying.

Notably, earlier this week, Meta released its Integrity Reports for the first quarter of 2025. In the report, the company stated, “We are beginning to see LLMs operating beyond that of human performance for select policy areas.”

The social media giant added that it has started using AI models to remove content from review queues in scenarios where it is “highly confident” that the content does not violate its policies. Justifying the move, Meta added, “This frees up capacity for our reviewers allowing them to prioritise their expertise on content that’s more likely to violate.”
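As a purely illustrative sketch of that kind of queue triage, the idea can be expressed in a few lines of Python. The threshold, function names, and scoring model are assumptions for the example, not anything Meta has disclosed.

# Illustrative only: threshold-based triage of a review queue.
CONFIDENCE_THRESHOLD = 0.98  # assumed value, purely for illustration

def triage_queue(items, classifier):
    # classifier(item) is a hypothetical model score in [0, 1]: the probability
    # that the item does NOT violate policy. Only low-confidence items are kept
    # for human review; the rest are removed from the queue.
    needs_human_review = []
    for item in items:
        if classifier(item) < CONFIDENCE_THRESHOLD:
            needs_human_review.append(item)
    return needs_human_review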
