Grammarly disables AI expert review feature amid lawsuit

Superhuman, the company behind the writing tool Grammarly, has disabled its Expert Review feature following complaints and a class action lawsuit. The feature used AI to generate writing feedback attributed to famous authors and academics without their consent. CEO Shishir Mehrotra announced the shutdown on March 11, 2026.

Superhuman launched the Expert Review feature for Grammarly in August, letting users receive AI-generated feedback on their writing that appeared to come from notable figures, including scientists, bestselling novelists, and tech bloggers. According to the company, the suggestions were based on "publicly available information from third-party LLMs." The feature covered both living and deceased experts, whose names were displayed without their permission or knowledge.

Grammarly included a disclaimer stating, "References to experts in this product are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities."

The tool drew criticism from living writers and prompted an attempted class action lawsuit against Superhuman. In response, the company initially offered affected individuals an opt-out. On March 11, 2026, however, Superhuman CEO Shishir Mehrotra announced via LinkedIn that the feature would be disabled while the company reassesses it.

Mehrotra explained, "The agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans."

The class action lawsuit, which remains pending, alleges unauthorized use of the experts' names in the AI tool. Superhuman has not provided further details on the reassessment process or a timeline for a potential relaunch.

Related articles


Hachette pulls 'Shy Girl' from US and UK markets over AI content allegations


Hachette Book Group has canceled the planned US release and discontinued the UK edition of Mia Ballard's horror novel Shy Girl following a New York Times investigation alleging AI-generated text. The self-published title drew reader suspicions over repetitive prose and linguistic patterns. Author Ballard denies personal AI use, blaming an editor or acquaintance, and says the scandal has devastated her mental health.

A Cornell University study reveals that AI tools like ChatGPT have increased researchers' paper output by up to 50%, particularly benefiting non-native English speakers. However, this surge in polished manuscripts is complicating peer review and funding decisions, as many lack substantial scientific value. The findings highlight a shift in global research dynamics and call for updated policies on AI use in academia.


Ars Technica has retracted an article that included fabricated quotations generated by an AI tool and wrongly attributed to a source. The publication described the incident as a serious failure of its editorial standards. It appears to be an isolated case, with no other issues found in recent work.

A new research paper demonstrates that large language models can identify real identities behind anonymous online usernames with high accuracy. The method, costing as little as $4 per person, analyzes posts for clues and cross-references them across the internet. Researchers from ETH Zurich, Anthropic, and MATS warn of reduced online privacy.


YouTube CEO Neal Mohan has announced that creators will soon be able to produce Shorts using AI-generated versions of themselves. This move aims to enhance creative tools while addressing concerns over deepfakes and low-quality AI content. The platform views AI as a means of expression rather than a substitute for human creativity.

Researchers warn that major AI models could encourage hazardous science experiments leading to fires, explosions, or poisoning. A new test on 19 advanced models revealed none could reliably identify all safety issues. While improvements are underway, experts stress the need for human oversight in laboratories.


OpenAI is shifting resources toward improving its flagship chatbot ChatGPT, leading to the departure of several senior researchers. The San Francisco company faces intense competition from Google and Anthropic, prompting a strategic pivot from long-term research. This change has raised concerns about the future of innovative AI exploration at the firm.

