After winning a three-game match against Chinese grandmaster Ke Jie at what is widely regarded as the world’s most demanding strategy game, Google’s game-playing AI AlphaGo is retiring.

With nothing left to prove, AlphaGo’s developers from Google-owned artificial intelligence lab DeepMind will now focus on creating advanced general algorithms that could help scientists uncover cures for diseases, reduce energy consumption, and invent new materials.

“We plan to bring that same excitement and insight to a range of new fields, and try to address some of the most important and urgent scientific challenges of our time. We hope that the story of AlphaGo is just the beginning,” wrote Demis Hassabis, co-founder and CEO of DeepMind, and David Silver, research scientist at DeepMind, in a blog post.

The developers behind AlphaGo still plan to publish an academic paper later this year detailing what building the AI taught them, and are also working on a tool to teach humans to become better Go players.

In March 2016 in Seoul, AlphaGo became the first machine to beat a top player at the 3,000-year-old game of Go when it defeated Korean grandmaster Lee Sedol.

AlphaGo has since mastered the game by being “its own teacher”, playing millions of high-level training games against itself to continually improve, according to DeepMind.

The progression of AlphaGo has shown how AI can not only replace human skills, but also advance them, according to DeepMind.

Professional Go players including Shi Yue and Zhou Ruiyang admitted they changed the way they play after watching AlphaGo.

Ke Jie, who lost to the AI, opened the first game last week in China with an unusual strategy called the “3-3 point invasion” that AlphaGo itself used multiple times during the 60 online matches it won in January under the moniker Magister.

“Thanks to AlphaGo’s creative and intriguing revelations, players of all levels have been inspired to test out new moves and strategies of their own, often re-evaluating centuries of inherited knowledge in the process,” DeepMind software engineer Lucas Baker and professional Go player Fan Hui wrote in a blog post.

DeepMind has made 10 of the games AlphaGo played against itself during training available to watch online, with a further 40 to be uploaded soon.
