Meta has introduced AI-powered tools and user alerts to combat industrialized scamming on its platforms. The company removed 10.9 million accounts linked to criminal scam centers in 2025. These measures follow collaborations with law enforcement and legal actions against scammers.
On March 11, 2026, Meta announced several new features to disrupt organized scamming operations, which the company describes as a multibillion-dollar global crisis. The updates include AI tools that identify impersonators of brands and celebrities and detect deceptive links, enabling faster removal of fraudulent content.
In addition, Meta is rolling out early warnings of potential scams. On Facebook, notifications will flag suspicious friend requests; on WhatsApp, users will be warned about device-linking requests that appear fraudulent; and on Messenger, alerts will flag suspicious accounts.
The company is also expanding its advertiser verification process, aiming for verified advertisers to account for 90 percent of its ad revenue by the end of the year, up from the current 70 percent. Meta estimates that advertising for scams and banned products may have represented 10 percent of its 2024 revenue.
These efforts build on previous actions. In 2025, Meta removed 159 million scam ads and 10.9 million Facebook and Instagram accounts tied to criminal scam centers. Last month, it sued three entities based in Brazil and China that used images and deepfakes of popular figures to promote dubious products and investments.
A recent collaboration with Thai law enforcement led to 21 arrests and the disabling of over 150,000 accounts associated with Southeast Asian scam compounds.