Instagram chief advocates fingerprinting real media over AI detection

Instagram head Adam Mosseri has suggested that platforms should focus on verifying authentic content rather than chasing AI-generated fakes, as artificial intelligence becomes ubiquitous on social media. In a post outlining trends for 2026, he highlighted how AI is transforming the platform and empowering creators. Mosseri emphasized the need for camera manufacturers to cryptographically sign images at capture to establish authenticity.

Adam Mosseri, head of Instagram, shared his vision for the platform's future in a detailed post on December 31, 2025, amid the rapid rise of AI-generated content that dominated social media feeds throughout the year. He noted that AI has made elements once unique to creators—such as authenticity, connection, and an inimitable voice—accessible to anyone with the right tools. "Everything that made creators matter—the ability to be real, to connect, to have a voice that couldn’t be faked—is now suddenly accessible to anyone with the right tools," Mosseri wrote. "The feeds are starting to fill up with synthetic everything."

Despite the shift, Mosseri expressed optimism about AI's potential, describing much of the content as "amazing." However, he acknowledged growing challenges in distinguishing real from fake media. Social platforms face increasing pressure to label AI-generated content, but as AI improves at mimicking reality, detection efforts will become less effective. Mosseri proposed an alternative: "It will be more practical to fingerprint real media than fake media." He suggested camera manufacturers could implement cryptographic signing at the point of capture, creating a verifiable chain of custody for images.
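The idea of signing images at capture can be illustrated with a minimal sketch. Note this is a simplified illustration, not Mosseri's or any camera maker's actual scheme: it uses HMAC with a shared secret as a stand-in for the asymmetric signatures a real provenance system would use, and the device name, key, and metadata fields are invented for the example.

```python
import hashlib
import hmac
import json

# Hypothetical device secret. A real scheme would use an asymmetric key
# pair held in the camera's secure hardware, not a shared secret.
DEVICE_KEY = b"example-camera-secret"

def sign_at_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Produce a capture record binding the pixels to capture metadata."""
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    serialized = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, serialized, hashlib.sha256).hexdigest()
    return record

def verify(image_bytes: bytes, record: dict) -> bool:
    """Check the signature and that the image hash still matches the record."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    serialized = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, serialized, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest())

photo = b"...raw sensor data..."
rec = sign_at_capture(photo, {"device": "camera-001", "ts": "2025-12-31T12:00:00Z"})
print(verify(photo, rec))            # True: unmodified image verifies
print(verify(photo + b"edit", rec))  # False: any alteration breaks the record
```

The key property is the one Mosseri alludes to: anyone can check whether an image is the authentic original, while nothing at all needs to be detected about fakes.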

The proposal comes as Meta struggles to reliably identify manipulated content, despite investing tens of billions of dollars in AI this year; existing methods like watermarks have proven unreliable and easy to circumvent. Mosseri also addressed frustrations from photographers and creators, who complain about algorithmic biases against their posts. He argued that the era of polished, square images is over, urging creators to embrace raw, unflattering visuals to demonstrate genuineness in an AI-saturated environment. With Instagram serving roughly 3 billion users, the pivot could redefine content creation and trust on the platform.

Related articles

[Image: Tech leaders on stage at WIRED's Big Interview event in San Francisco (AI-generated image)]

Tech leaders address AI and big tech at WIRED's Big Interview event


At WIRED's Big Interview event in San Francisco, prominent tech figures discussed the future of AI, cryptocurrency, and Silicon Valley's challenges. Speakers included executives from Circle, Cloudflare, Anthropic, AMD, and others, sharing insights on innovation, regulation, and industry ethics. The event highlighted efforts to balance technological advancement with societal impacts.

In 2025, a tech writer attempted to re-engage with major social media platforms after years of avoidance, only to find them dominated by sponsored content and AI-generated material that eroded genuine human connections. This personal experience reflected a broader disillusionment, making it simpler to step away despite record user numbers on platforms like Instagram and TikTok. Alternatives like Reddit and Bluesky offered some respite amid the commercial overload.


Adam Mosseri, Instagram's head, defended the platform in a trial over claims that its decisions harmed youth mental health, as parents voiced concerns about social media's impact on children.

Google has introduced Me Meme, an AI-powered feature in its Photos app that turns user photos into personalized memes using templates. The tool allows users to upload images of pets, friends, or themselves to create shareable content. It is rolling out gradually to Android and iOS devices over the coming weeks.


Cybersecurity experts are increasingly alarmed by how artificial intelligence is reshaping cybercrime, with tools like deepfakes, AI phishing, and dark large language models enabling even novices to execute advanced scams. These developments pose significant risks to businesses in the coming year. Published insights from TechRadar underscore the scale and sophistication of these emerging threats.

Consumers increasingly rely on peer reviews from platforms like Reddit and TikTok to navigate online shopping distrust fueled by AI content. Brands such as Medicube and Alo Yoga are integrating these reviews into their strategies for growth and trust-building. Experts highlight reviews' role as human validation in an algorithm-driven market.


Under a new agreement with the Department of Information and Communications Technology, Meta has pledged to enhance its mechanisms for detecting, reporting, and removing disinformation and inappropriate content on Facebook. This includes faster flagging of child exploitation material, immediate reporting to local authorities, and its removal from the platform. The deal also targets scams such as fake investment schemes using deepfakes of officials, business leaders, and celebrities.

