Roblox launches AI tool to rephrase inappropriate chat

Roblox has introduced an AI-powered feature that rephrases inappropriate language in chat in real time. The tool aims to maintain conversation flow by substituting offensive words rather than blocking them with symbols. The update follows recent safety measures amid ongoing legal challenges over child protection.

Roblox, the popular online gaming platform, has rolled out a new AI-driven system to handle inappropriate language in user chats. Previously, the platform's filters replaced violating messages with a series of hash marks (####), which Roblox acknowledged could disrupt discussions and hinder communication. The updated feature now automatically substitutes problematic words or phrases with more suitable alternatives, starting with profanity.

For example, a message like “Hurry TF up” would be changed to “Hurry up!” Participants in the chat receive a notification when a message has been rephrased, while the original sender sees the edited content highlighted. Despite the rephrasing, users who repeatedly violate Roblox's community standards face penalties, as the system does not excuse policy breaches.

Rajiv Bhatia, Roblox’s chief safety officer, explained in a blog post: “As these systems scale, they create a flywheel for civility, where real-time feedback helps users learn and adopt our Community Standards.” The rephrasing tool is initially available in chats between age-verified users in similar age groups and supports all languages covered by Roblox's translation system.

This development comes after Roblox implemented mandatory age verification in January, prompted by reports describing a “pedophile problem” on the platform, where adults allegedly groomed children. Children under 13 are now barred from using in-game chat outside specific experiences, while older users are limited to interactions with peers of comparable ages.

However, these efforts have not quelled legal scrutiny. In February, Los Angeles County filed a lawsuit claiming Roblox “makes children easy prey for pedophiles.” More recently, Louisiana’s attorney general has sued, alleging that Roblox “created a public park and filled it with sex predators that are preying on… children.”

