Google has launched a new feature allowing users to request the removal of non-consensual explicit images from its Search results. The tool provides options for reporting deepfakes and other privacy violations, with request tracking available through the company's "Results about you" hub. The update arrives as Google discontinues its dark web monitoring service.
Google's latest update to its Search platform aims to address privacy concerns by enabling users to swiftly request the removal of harmful content. When encountering an unwanted image in search results, individuals can click the three dots next to it and select "remove result." From there, options include reporting that "It shows a sexual image of me," or that the picture depicts a person under 18 or contains personal information. For sexual images, users are prompted to specify whether it is a real photo or a deepfake. The tool also supports submitting multiple images in a single request.
Upon submission, Google immediately directs users to resources for emotional and legal support. An opt-in setting filters similar results from that user's future searches, though unreported images remain visible to others. The functionality is set to roll out in most countries over the coming days, with updates to the "Results about you" hub reaching US users soon after.
To access the hub, users must provide personal contact details and government ID numbers. The expanded hub now monitors search results for sensitive data such as Social Security numbers, driver's licenses, and passports, notifying users if such information appears and guiding them through removal steps. This builds on Google's existing personal-information tracking but goes further with proactive alerts.
The changes coincide with Google ending its dark web reports, which previously notified users when their data surfaced online, often via breaches. The company noted that those alerts did little to help users act on the exposure, a gap the new tools aim to close by emphasizing practical removal and support options.