2026-03-19 18:30 UTC

Meta will move away from human content moderators in favor of more AI

A little more than a year after ditching third-party fact-checkers and rolling back much of its proactive content moderation, Meta says it will further "transform" its approach by drastically reducing the number of human moderators in favor of AI-based systems.

The company says the change will happen "over the next few years" and will allow it to catch more issues faster than its current approach does.

Meta didn't say how much of its contract workforce might be cut as it makes this transition.

The company employs thousands of contractors around the world to review content flagged by its AI systems and by user reports, among other tasks.

The company said that as it shifts its approach, humans will "play a key role" in "critical decisions" and aid in training and other tasks.

"Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high‑impact decisions," Meta said in an update.

"For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement." The company has been testing LLM-based systems for content moderation for a while and says that early tests have had "promising" results.
