OpenAI KYC provider accused of sharing user data with US agencies

A security investigation has accused Persona, the company handling know-your-customer (KYC) checks for OpenAI, of sending user data, including crypto addresses, to federal agencies such as FinCEN. Researchers found code that enables monitoring and reporting of suspicious activity. Persona denies any current ties to federal agencies.

On February 18, security researchers vmfunc, MDL, and Dziurwa published an investigation revealing publicly accessible code in Persona's system that appears to transmit data collected during OpenAI's KYC process to the Financial Crimes Enforcement Network (FinCEN), a US Treasury bureau. This data includes passport photos, selfies, and videos submitted by users verifying their identity to access advanced ChatGPT features. The code, in place since November 2023, also integrates with Chainalysis to screen associated crypto addresses for risks, analyze interactions, and enable persistent monitoring via a watchlist system.

The researchers highlighted the platform's capabilities, stating, “The same company that takes your passport photo when you sign up for ChatGPT also operates a government platform that files Suspicious Activity Reports with FinCEN and tags them with intelligence programme codenames.” They added, “So you uploaded a selfie to use a chatbot? Congratulations! It’s now being compared against a database of every politician, head of state, and their extended family tree on earth.”

Multiple security experts, including Tanuki42 of a blockchain incident response group, vouched for the findings' credibility, noting that the cited government domains exist and are likely hosted by Persona. However, questions remain about motives, usage, and the exact criteria that trigger screenings or reports.

Persona CEO Rick Song responded on X, expressing disappointment and claiming the researchers did not contact him beforehand. In emails shared by Song, he stated that his company does not work with any federal agency today, though he did not directly address the code's implications. A post from Song read, “I am genuinely disappointed in how all of this has been handled,” and praised vmfunc's talent. OpenAI and Persona did not respond to requests for comment from DL News.

The revelations raise concerns amid growing unease over KYC requirements, which screen against sanctions, terrorism links, and financial crimes but also expose users to potential data misuse or breaches. Retention periods are unclear, with discrepancies between OpenAI's stated one-year limit and code indicating up to three years or permanent storage for government IDs.

Related articles


Google introduces Personal Intelligence feature for Gemini

Reported by AI · Image generated by AI

Google has launched Personal Intelligence, a new feature for its Gemini AI that integrates data from Gmail, Photos, Search, and YouTube to deliver more tailored responses. Available initially to paid subscribers in the US, the opt-in tool emphasizes user privacy controls and avoids direct training on personal data. The rollout begins in beta, with plans for broader access in the future.

Discord has informed UK users that they may be part of an experiment using the age-assurance vendor Persona for verification, where submitted data is temporarily stored unlike previous promises. This change has raised privacy concerns among users, particularly due to Persona's links to investor Peter Thiel and his surveillance firm Palantir. The update is part of a broader global rollout of mandatory age verification starting in early March.


OpenAI reported a dramatic increase in child exploitation incidents to the National Center for Missing & Exploited Children during the first half of 2025, sending 80 times more reports than in the same period of 2024. The company attributed the rise to expanded moderation capabilities, new features allowing image uploads, and rapid user growth. This spike reflects broader concerns about child safety in generative AI platforms.

Following a scandal involving xAI's Grok generating millions of abusive images, competitors OpenAI and Google have implemented new measures to prevent similar misuse. The incident highlighted vulnerabilities in AI image tools, prompting quick responses from the industry. These steps aim to protect users from nonconsensual intimate imagery.


OpenAI is shifting resources toward improving its flagship chatbot ChatGPT, leading to the departure of several senior researchers. The San Francisco company faces intense competition from Google and Anthropic, prompting a strategic pivot from long-term research. This change has raised concerns about the future of innovative AI exploration at the firm.

Ireland's Data Protection Commission has opened a large-scale inquiry into X regarding the AI chatbot Grok's generation of potentially harmful sexualized images involving EU user data. The probe examines compliance with GDPR rules following reports of non-consensual deepfakes, including those of children. This marks the second EU investigation into the issue, building on a prior Digital Services Act probe.


A Guardian report has revealed that OpenAI's latest AI model, GPT-5.2, draws from Grokipedia, an xAI-powered online encyclopedia, when addressing sensitive issues like the Holocaust and Iranian politics. While the model is touted for professional tasks, tests question its source reliability. OpenAI defends its approach by emphasizing broad web searches with safety measures.
