Intern recalls building AlphaGo on its tenth anniversary

Ten years after Google DeepMind's AlphaGo defeated Go champion Lee Sedol, Chris Maddison reflects on his role as an intern in developing the groundbreaking AI. The 2016 victory in Seoul marked a pivotal moment in artificial intelligence, demonstrating neural networks' potential to surpass human intuition in complex games. Maddison, now a professor at the University of Toronto, highlights the enduring technological principles behind AlphaGo that influence modern systems like large language models.

In March 2016, Google DeepMind's AlphaGo faced off against Lee Sedol, one of the world's top Go players, in a five-game match in Seoul, South Korea. The AI won 4-1, shocking observers with its intuitive play. As Sergey Brin noted at the time, "AlphaGo actually does have an intuition. It makes beautiful moves. It even creates more beautiful moves than most of us could think of." Lee Sedol later said he was "in shock."

Chris Maddison, then a master's student, joined the project as an intern in the summer of 2014 after Ilya Sutskever won him over with an argument: expert Go players can judge a position in roughly half a second, about the time of a single forward pass through the visual cortex, and results on ImageNet had shown that neural networks could match that kind of rapid perception. Working with Aja Huang and David Silver, Maddison built neural networks trained on expert games to predict the next move. This simple approach succeeded where others had failed; by summer's end, his networks defeated Thore Graepel, a DeepMind researcher and decent Go player.

Go's complexity, with 10^171 possible positions—far exceeding the 10^80 atoms in the observable universe—made it a formidable challenge. AlphaGo advanced by playing millions of games against itself, discovering strategies beyond human play, as Pushmeet Kohli at Google DeepMind explained: "By learning through these games, it could discover new knowledge and could go beyond human-level players."

Maddison left the team before the matches to pursue his PhD but consulted remotely. In Seoul, the atmosphere was intense; crowds lined sidewalks watching the games on large screens, with hundreds of millions in China tuning in. He recalled Aja Huang describing Lee Sedol as "one stone from God," underscoring the gap they bridged.

AlphaGo's legacy endures. Noam Brown at OpenAI stated, "AlphaGo definitively showed that neural nets can do pattern recognition better than humans. They can essentially have intuition that surpasses humans." Its method—pretraining on vast data such as Go games or internet text, followed by reinforcement learning to align the system with its goals—mirrors how large language models are built. Successors include AlphaFold, whose protein-structure prediction earned a Nobel Prize in Chemistry, and AlphaProof, which achieved gold-medal-level performance at the International Mathematical Olympiad.

Yet challenges remain: neural networks are black boxes, as seen in AlphaGo's unexplained move 37 in game two, which initially baffled spectators. Progress hinges on abundant data and clear reward signals, which favors fields like mathematics and programming. Maddison expressed sympathy for Lee Sedol, who apologized to humanity after the loss and could not sit down with his opponent for the traditional post-game review. Still, he sees AI enhancing human appreciation of games like Go and chess, preserving their cultural purpose beyond mere victory.
