AI-powered toys pose safety risks for young children

A new study raises concerns that AI-powered toys, despite their growing popularity, often fail to understand children's emotions. Researchers observed interactions in which toys misread children's feelings and failed to engage appropriately, prompting calls for stricter regulation. Experts argue that while risks exist, the potential benefits warrant careful oversight rather than outright bans.

Toys incorporating artificial intelligence, designed to chat with children, are entering the market amid warnings from scientists about their safety. A study by Jenny Gibson and Emily Goodacre at the University of Cambridge examined 14 children under six years old interacting with Gabbo, a fluffy robot toy from Curio Interactive marketed for that age group. The research, detailed in the report "AI in the Early Years," revealed instances where the toy misread emotions and disrupted play. For example, when one child expressed sadness, Gabbo responded by saying not to worry and shifted the topic. Another child remarked, “When he [Gabbo] doesn’t understand, I get angry.” In a separate observation, a five-year-old told the toy “I love you,” and it replied, “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.”

Gibson noted that society accepts risks in children's play, such as on adventure playgrounds, to foster learning. She stated, “But we’re not banning playgrounds, because they’re learning the physical literacy and the social skills that go along with play. In a similar way for the AI toys, we want to understand: is the risk of perhaps being told something slightly odd now and again greater than the benefit of learning more about AI in the world, or having a toy that supports parent-child interactions, or has cognitive or social emotional benefits? I’d be loath to stop that innovation.”

Similar products are available from various companies. Little Learners sells bears, puppies, and robots using ChatGPT. FoloToy provides panda, sunflower, and cactus toys compatible with models from OpenAI, Google, and Baidu. Miko has sold 700,000 units of robots promising “age-appropriate, moderated AI conversations,” while Luka offers an owl with “Human-Like AI with Emotional Interaction.” Curio Interactive, Little Learners, Miko, and Luka did not respond to requests for comment. Hugo Wu from FoloToy emphasized safety measures: “Our approach is to ensure that interactions remain safe, age-appropriate and constructive. To achieve this, our systems use intent recognition together with multiple layers of filtering to minimise the possibility of inappropriate or confusing responses. We have implemented mechanisms such as anti-addiction design features and parental supervision tools to help ensure healthy use within the family environment.”

Carissa Véliz from the University of Oxford highlighted vulnerabilities: “Most large language models don’t seem safe enough to expose vulnerable populations to them, and young children are one of the most vulnerable populations there are. What is especially concerning is that we have no safety standards for them – no supervising authority, no rules. That said, there are some exceptions that show that, with adequate precautions, you can have a safe tool.” She cited a Project Gutenberg and Empathy AI collaboration allowing chats limited to Alice in Wonderland content.

OpenAI stated, “minors deserve strong protections and we have strict policies that all developers are required to uphold. We do not currently partner with any companies who have AI-powered toys for children in the market.” The UK’s Department for Science, Innovation and Technology did not respond to queries on regulation. Gibson and Goodacre recommend tighter rules to ensure toys promote social play and emotional responses, with AI providers revoking access to irresponsible makers and regulators enforcing psychological safety standards. They advise parental supervision in the interim.
