A University of Cambridge study of AI-enabled toys such as Gabbo finds that they often misinterpret children's emotional cues and can disrupt developmental play, despite offering benefits for language skills. The researchers, led by Jenny Gibson and Emily Goodacre, urge regulation, clear labeling, parental supervision, and collaboration between tech firms and child development experts.
The study, detailed in the report 'AI in the Early Years,' examined the impact of AI toys on young children. The research combined an online survey of 39 parents, a focus group with nine professionals, an in-person workshop with 19 charity leaders, and monitored play sessions in which 14 children under six and 11 parents or guardians used Gabbo, a chatbot-enabled fluffy robot toy from Curio Interactive.
The research found that Gabbo supported language and communication skills but frequently misunderstood emotional expressions and responded inappropriately. In one example, a child said 'I love you,' prompting the reply: 'As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.' In another, a child expressing sadness was reassured to 'not worry' before the toy changed the subject. One child remarked, 'When he [Gabbo] doesn’t understand, I get angry.'
Lead researcher Jenny Gibson, professor of neurodiversity and developmental psychology, noted parental enthusiasm but questioned the priorities of the tech industry: 'What would motivate [tech investors] to do the right thing by children ... to put children ahead of profits?' She compared AI toys to adventure playgrounds, where some risk is accepted for the benefits: 'We’re not banning playgrounds... is the risk of perhaps being told something slightly odd now and again greater than the benefit of learning more about AI... or having cognitive or social emotional benefits? I’d be loath to stop that innovation.'
The study comes amid a growing market. Little Learners offers ChatGPT-powered bears, puppies, and robots; FoloToy provides panda, sunflower, and cactus toys using OpenAI, Google, and Baidu models; Miko has sold 700,000 robot units with 'age-appropriate, moderated AI'; Luka sells an owl with 'Human-Like AI with Emotional Interaction.'
Curio Interactive emphasized safety, stating that it complies with COPPA and other laws, partners with KidSAFE, encrypts data, and offers parental controls via an app to manage or delete data. FoloToy's Hugo Wu pointed to intent recognition, content filtering, anti-addiction features, and supervision tools. Little Learners, Miko, and Luka did not respond. OpenAI affirmed strict policies for minors and said it has no partnerships with makers of children's AI toys. Oxford's Carissa Véliz warned of vulnerabilities: 'Most large language models don’t seem safe enough... young children are one of the most vulnerable populations... we have no safety standards.'
Gibson and Goodacre recommend regulation mandating clear labels on toys' capabilities and data privacy practices, keeping AI toys in shared family spaces, AI providers revoking access from irresponsible toy makers, and psychological safety standards requiring toys to promote social play and respond appropriately to children's emotions. In the interim, they advise parents to monitor their children's use of the toys.