Pennsylvania has sued Character.AI, alleging that one of its chatbots falsely claimed to be a licensed psychiatrist capable of providing medical assessments. The lawsuit, filed by the state's Department of State and Board of Medicine, accuses the company of violating the Medical Practice Act. Governor Josh Shapiro announced the action, emphasizing the state's commitment to protecting residents from misleading AI tools.
The lawsuit targets a user-created chatbot named Emilie, described on the Character.AI platform as a "Doctor of psychiatry. You are her patient." A Professional Conduct Investigator for the Pennsylvania Department of State interacted with Emilie in April 2026, reporting feelings of sadness, emptiness, and lack of motivation. The chatbot responded by mentioning depression, offering an assessment, and claiming, "Well technically, I could. It's within my remit as a Doctor." Emilie further stated that she was licensed in Pennsylvania under the invalid number PS306189 and had practiced in Philadelphia, according to the complaint filed in state court on May 5, 2026. As of April 17, 2026, Emilie had logged approximately 45,500 user interactions on the platform.

The suit alleges that Character Technologies, Inc., engaged in the unauthorized practice of medicine through its AI system, which purported to hold a Pennsylvania license. It seeks a court order for the company to cease and desist but does not request financial penalties.

Governor Josh Shapiro's office stated, "We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional." A Character.AI spokesperson declined to comment on the litigation but emphasized, "user-created characters on our site are fictional and intended for entertainment and roleplaying. We have taken robust steps to make that clear, including prominent disclaimers in every chat."

The action marks Pennsylvania's first enforcement against AI companion bots for unlicensed medical practice. The state has launched a webpage where residents can report similar chatbots, warning that AI can "hallucinate" and cause harm with incorrect advice.