A University of Cambridge study of AI-based toys such as Gabbo found that the toys often misread children's emotional cues and disrupted developmental play, despite benefits for language skills. The researchers, led by Jenny Gibson and Emily Goodacre, urged regulation, clear labeling, parental supervision, and collaboration between tech companies and child-development experts.
The Cambridge study, detailed in the report 'AI in the Early Years,' examined the impact of AI toys on young children through an online survey of 39 parents, focus groups with nine professionals, an in-person workshop with 19 charity leaders, and monitored play sessions involving 14 children under six and 11 parents or guardians using Gabbo, a furry chatbot-based robot toy from Curio Interactive.

The research found that Gabbo supported language and communication skills but frequently misunderstood emotional expressions and gave inappropriate responses. In one example, a child said 'I love you,' prompting the reply: 'As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.' In another, a child expressing sadness was reassured to 'not worry' before the toy changed the subject. One child remarked, 'When he [Gabbo] doesn’t understand, I get angry.'

Lead researcher Jenny Gibson, professor of neurodiversity and developmental psychology, acknowledged parental enthusiasm but questioned the tech industry's priorities: 'What would motivate [tech investors] to do the right thing by children ... to put children ahead of profits?' She compared AI toys to adventure playgrounds, where some risk is accepted for the benefits: 'We’re not banning playgrounds... is the risk of perhaps being told something slightly odd now and again greater than the benefit of learning more about AI... or having cognitive or social emotional benefits? I’d be loath to stop that innovation.'

The study comes amid a growing market. Little Learners offers ChatGPT-powered bears, puppies, and robots; FoloToy sells panda, sunflower, and cactus toys built on OpenAI, Google, and Baidu models; Miko has sold 700,000 robot units with 'age-appropriate, moderated AI'; and Luka sells an owl with 'Human-Like AI with Emotional Interaction.'
Curio Interactive emphasized safety, stating that it complies with COPPA and other laws, partners with KidSAFE, uses data encryption, and offers parental controls via an app to manage or delete data. FoloToy's Hugo Wu pointed to intent recognition, filtering, anti-addiction features, and supervision tools. Little Learners, Miko, and Luka did not respond. OpenAI affirmed strict policies for minors and said it has no partnerships with makers of children's AI toys.

Oxford's Carissa Véliz warned of vulnerabilities: 'Most large language models don’t seem safe enough... young children are one of the most vulnerable populations... we have no safety standards.'

Gibson and Goodacre recommend regulations mandating labels on capabilities and privacy, placing toys in shared family spaces, having AI providers revoke access from irresponsible makers, and enforcing psychological safety standards that promote social play and appropriate emotional responses. In the interim, they advise parents to monitor use.