Intern recalls building AlphaGo on its tenth anniversary

Ten years after Google DeepMind's AlphaGo defeated Go champion Lee Sedol, Chris Maddison reflects on his role as an intern in developing the groundbreaking AI. The 2016 victory in Seoul marked a pivotal moment in artificial intelligence, demonstrating neural networks' potential to surpass human intuition in complex games. Maddison, now a professor at the University of Toronto, highlights the enduring technological principles behind AlphaGo that influence modern systems like large language models.

In March 2016, Google DeepMind's AlphaGo faced off against Lee Sedol, one of the world's top Go players, in a five-game series in Seoul, South Korea. The AI won 4-1, shocking observers with its intuitive play. As Sergey Brin noted at the time, "AlphaGo actually does have an intuition. It makes beautiful moves. It even creates more beautiful moves than most of us could think of." Lee Sedol later said he was "in shock."

Chris Maddison, then a master's student, joined the project as an intern in the summer of 2014 after Ilya Sutskever won him over with an argument: expert Go players judge a position in about half a second, roughly the time of a single forward pass through the visual cortex, and the ImageNet results had just shown that neural networks could match that kind of rapid visual judgment. Working with Aja Huang and David Silver, Maddison built neural networks trained on expert games to predict the next move. This simple approach succeeded where others had failed; by summer's end, his networks defeated Thore Graepel, a DeepMind researcher and capable Go player.
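The supervised approach Maddison describes, training a network to predict an expert's next move from the current position, is at heart a multi-class classifier over board points. The toy sketch below assumes a 3x3 board, a linear softmax model, and randomly generated "expert" data; all of these details are stand-ins for illustration and do not reflect DeepMind's actual architecture or training data.

```python
import math
import random

random.seed(0)
POINTS = 9  # toy 3x3 board: 9 input features, 9 candidate moves (assumption)
N = 200     # number of synthetic "expert" positions (placeholder data)

# A hidden linear "expert" labels each position with its highest-scoring move.
expert = [[random.gauss(0, 1) for _ in range(POINTS)] for _ in range(POINTS)]

def scores(w, x):
    # One score per candidate move: a linear function of the board features.
    return [sum(wk[j] * x[j] for j in range(POINTS)) for wk in w]

def softmax(s):
    m = max(s)
    e = [math.exp(v - m) for v in s]
    t = sum(e)
    return [v / t for v in e]

X = [[random.gauss(0, 1) for _ in range(POINTS)] for _ in range(N)]
y = [max(range(POINTS), key=lambda k: scores(expert, x)[k]) for x in X]

def nll(w):
    # Average cross-entropy between the predicted move distribution
    # and the expert's actual move.
    return -sum(math.log(softmax(scores(w, x))[yi]) for x, yi in zip(X, y)) / N

w = [[0.0] * POINTS for _ in range(POINTS)]
initial_loss = nll(w)  # a uniform policy scores log(9) here

lr = 0.5
for _ in range(150):  # plain batch gradient descent on the cross-entropy
    grad = [[0.0] * POINTS for _ in range(POINTS)]
    for x, yi in zip(X, y):
        p = softmax(scores(w, x))
        for k in range(POINTS):
            g = p[k] - (1.0 if k == yi else 0.0)
            for j in range(POINTS):
                grad[k][j] += g * x[j]
    for k in range(POINTS):
        for j in range(POINTS):
            w[k][j] -= lr * grad[k][j] / N

final_loss = nll(w)
```

The real system replaced this linear map with a deep convolutional network over a 19x19 board and trained on human expert games, but the objective, predicting the next move by minimizing cross-entropy, is the same.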

Go's complexity, with 10^171 possible positions—far exceeding the 10^80 atoms in the observable universe—made it a formidable challenge. AlphaGo advanced by playing millions of games against itself, discovering strategies beyond human play, as Pushmeet Kohli at Google DeepMind explained: "By learning through these games, it could discover new knowledge and could go beyond human-level players."

Maddison left the team before the matches to pursue his PhD but consulted remotely. In Seoul, the atmosphere was intense; crowds lined sidewalks watching the games on large screens, with hundreds of millions in China tuning in. He recalled Aja Huang describing Lee Sedol as "one stone from God," underscoring the gap they bridged.

AlphaGo's legacy endures. Noam Brown at OpenAI stated, "AlphaGo definitively showed that neural nets can do pattern recognition better than humans. They can essentially have intuition that surpasses humans." Its method, pretraining on vast data such as Go games or internet text and then applying reinforcement learning to align the system with its goals, mirrors how large language models are built. Successors include AlphaFold, whose creators earned the Nobel Prize in Chemistry for protein-structure prediction, and AlphaProof, which achieved gold-medal-level performance at the International Mathematical Olympiad.
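The two-stage recipe described above, imitation pretraining followed by reinforcement learning toward a goal, can be illustrated with a minimal REINFORCE loop. The three-action policy, the reward function, and the learning rate below are invented purely for illustration; they have nothing to do with AlphaGo's or any LLM's actual training setup.

```python
import math
import random

random.seed(1)
ACTIONS = 3
logits = [0.0] * ACTIONS  # stand-in for a "pretrained" policy (uniform here)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    t = sum(e)
    return [v / t for v in e]

def reward(a):
    # Toy alignment goal: the reward signal prefers action 2.
    return 1.0 if a == 2 else 0.0

lr = 0.2
baseline = 1.0 / ACTIONS  # crude variance-reduction baseline
for _ in range(500):      # REINFORCE: sample an action, observe reward,
    p = softmax(logits)   # nudge logits along the score-function gradient
    a = random.choices(range(ACTIONS), weights=p)[0]
    advantage = reward(a) - baseline
    for k in range(ACTIONS):
        indicator = 1.0 if k == a else 0.0
        logits[k] += lr * advantage * (indicator - p[k])

tuned = softmax(logits)  # probability mass shifts toward the rewarded action
```

In AlphaGo the reward came from winning self-play games, and in language models it comes from a learned preference signal, but the mechanics are the same: sample from the pretrained policy, score the outcome, and push probability toward higher-reward behavior.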

Yet challenges remain: neural networks are black boxes, as AlphaGo's famous move 37 showed, a play that initially baffled spectators and that the system could not explain. Progress also hinges on abundant data and clear reward signals, conditions best met in fields like mathematics and programming. Maddison expressed sympathy for Lee Sedol, who apologized to humanity after the loss and could not hold the traditional post-game review with his opponent, since the AI could not discuss its moves. Still, he sees AI enhancing human appreciation of games like Go and chess, preserving their cultural purpose beyond mere victory.

Related articles

Former Go champion Lee Sedol will meet artificial intelligence (AI) again on Monday. Nearly ten years after his historic match against Google's AlphaGo, this encounter centers on collaboration rather than competition: Lee plans to build a Go game model together with an agentic AI.

Reported by AI

Google DeepMind's AlphaFold artificial intelligence system has marked its fifth anniversary, continuing to evolve after revolutionizing biology and chemistry. The project earned the Nobel Prize in Chemistry last year for its groundbreaking contributions. WIRED recently discussed its trajectory with DeepMind's Pushmeet Kohli.

At the AGI-Next summit in Beijing, Alibaba AI scientist Lin Junyang warned that, given resource constraints, China has less than a 20 percent chance of overtaking the United States in AI within the next three to five years. He noted that while US companies such as OpenAI pour massive computing resources into next-generation research, China is already stretched to its limits just meeting everyday demand.

Reported by AI

A study applying Chile's university entrance exam, PAES 2026, to AI models shows several systems scoring high enough for selective programs like Medicine and Civil Engineering. Google's Gemini led with averages near 950 points, outperforming rivals like ChatGPT. The experiment underscores AI progress and raises questions about standardized testing efficacy.
