DeepMind Technologies Limited is a leading artificial intelligence (AI) research company that was founded in London in 2010 by Demis Hassabis, Mustafa Suleyman, and Shane Legg. The company is best known for developing cutting-edge AI systems that have been used in a variety of applications, ranging from healthcare to gaming.
DeepMind’s breakthroughs in AI have been impressive, with its algorithms achieving unprecedented accuracy and speed in areas such as image and speech recognition, natural language processing, and gaming. Some of the company’s most notable achievements include the development of AlphaGo, an AI program that defeated the world champion at the ancient Chinese game of Go, and AlphaFold, an AI system that solved the 50-year-old “protein folding problem” and revolutionized the field of biochemistry.
One of the things that sets DeepMind apart from other AI companies is its focus on developing algorithms that can learn and adapt without being explicitly programmed. This approach, centred on “deep learning”, involves training multi-layer neural networks on massive amounts of data, allowing the algorithms to learn patterns and make predictions on their own.
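As a rough illustration of that idea, the sketch below trains a small neural network on example input/output pairs rather than hand-written rules. It is a minimal sketch only: the toy dataset, network shape, and hyperparameters are invented for illustration and have nothing to do with DeepMind's actual models (PyTorch is assumed as the library).

```python
# Minimal illustration of "learning from data": a small network is fit to
# example input/output pairs instead of being given explicit rules.
# The dataset, architecture, and hyperparameters are invented for this sketch.
import torch
import torch.nn as nn

# Toy dataset: 1000 examples with 20 features and a binary label.
X = torch.randn(1000, 20)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

# A small feed-forward network (a "deep" model in miniature).
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # how wrong are the current predictions?
    loss.backward()               # compute gradients of the error
    optimizer.step()              # adjust the weights to reduce the error
```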
DeepMind’s commitment to advancing the field of AI has not gone unnoticed, and the company has received numerous awards and accolades for its work. In 2018, DeepMind was awarded the Royal Society’s prestigious Royal Medal for its contributions to the field of computer science. The company has also been recognized by the European Commission for its efforts to promote ethical AI and has been named one of the most innovative companies in the world by Fast Company magazine.
Despite its successes, DeepMind has faced criticism from some who argue that its research may have unintended consequences. Among the company’s most controversial projects were its collaborations with the UK’s National Health Service (NHS), including an AI system for detecting eye diseases developed with Moorfields Eye Hospital. The handling of patient data in these collaborations sparked concerns over privacy, as well as over the potential for AI to replace human doctors.
In response to these concerns, DeepMind has taken steps to ensure that its research is conducted ethically and transparently. The company has established an independent review board to oversee its research and has developed a set of ethical principles to guide its work. In addition, DeepMind has published research papers that detail its methods and findings, making its work accessible to the broader scientific community.
DeepMind Products
According to the company’s website, DeepMind Technologies’ goal is to combine “the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms”.
Google Research released a paper in 2016 on AI safety and avoiding undesirable behaviour during the AI learning process. DeepMind has also released several publications via its website. In 2017, DeepMind released GridWorld, an open-source testbed for evaluating whether an algorithm learns to disable its kill switch or otherwise exhibits certain undesirable behaviours.
In July 2018, researchers from DeepMind trained one of its systems to play the computer game Quake III Arena.
As of 2020, DeepMind has published over a thousand papers, including thirteen papers that were accepted by Nature or Science.
DeepMind received media attention during the AlphaGo period; according to a LexisNexis search, 1842 published news stories mentioned DeepMind in 2016, declining to 1363 in 2019.
Deep reinforcement learning
As opposed to other AIs, such as IBM’s Deep Blue or Watson, which were developed for a pre-defined purpose and only function within that scope, DeepMind claims that its system is not pre-programmed: it learns from experience, using only raw pixels as data input. Technically it uses deep learning on a convolutional neural network, with a novel form of Q-learning, a form of model-free reinforcement learning. The system is tested on video games, notably early arcade games such as Space Invaders and Breakout. Without any change to its code, the AI begins to understand how to play the game and, after some time, plays a few games (most notably Breakout) more effectively than any human player.
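To make the combination of a convolutional network and Q-learning more concrete, here is a heavily compressed sketch in the spirit of that approach. It is not DeepMind's published architecture: the network shape, the 84×84 four-frame input, the action count, and the per-transition update are simplified stand-ins (a real agent also uses experience replay and a separate target network).

```python
# Compressed sketch of deep Q-learning on raw pixels. Network shape and
# hyperparameters are simplified stand-ins, not the published DQN architecture.
import random
import torch
import torch.nn as nn

N_ACTIONS = 4          # e.g. no-op / left / right / fire
GAMMA = 0.99           # discount factor for future rewards

class QNetwork(nn.Module):
    """Maps a stack of 4 grayscale 84x84 frames to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, N_ACTIONS),
        )

    def forward(self, frames):
        return self.net(frames)

q_net = QNetwork()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)

def select_action(state, epsilon=0.1):
    """Epsilon-greedy: mostly pick the best-looking action, sometimes explore."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q_net(state.unsqueeze(0)).argmax(dim=1).item()

def q_learning_update(state, action, reward, next_state, done):
    """One model-free Q-learning step on a single (s, a, r, s') transition."""
    q_value = q_net(state.unsqueeze(0))[0, action]
    with torch.no_grad():
        target = reward + (0.0 if done else GAMMA * q_net(next_state.unsqueeze(0)).max())
    loss = (q_value - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```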
In 2013, DeepMind published research on an AI system that could surpass human abilities in games such as Pong, Breakout and Enduro, while exceeding state-of-the-art performance on Seaquest, Beamrider, and Q*bert. This work reportedly led to the company’s acquisition by Google. DeepMind’s AI had been applied to video games made in the 1970s and 1980s; work was ongoing for more complex 3D games such as Quake, which first appeared in the 1990s.
In 2020, DeepMind published Agent57, an AI agent that surpassed human-level performance on all 57 games of the Atari 2600 suite.
AlphaGo and successors
In 2014, the company published research on computer systems that are able to play Go.
In October 2015, a computer Go program called AlphaGo, developed by DeepMind, beat the European Go champion Fan Hui, a 2 dan professional (out of a possible 9 dan), five to zero. This was the first time an artificial intelligence (AI) defeated a professional Go player; previously, computers were only known to have played Go at an amateur level. Go is considered much more difficult for computers than games such as chess, because its far larger number of possible positions makes traditional AI methods such as brute-force search prohibitively expensive.
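The scale difference can be illustrated with the commonly cited back-of-the-envelope figures of roughly 35 legal moves per position over about 80 plies for chess, versus roughly 250 moves over about 150 plies for Go. These are rough estimates used only to show why exhaustive search is hopeless, not exact values.

```python
# Rough game-tree sizes from commonly cited approximations:
#   chess: ~35 legal moves per position over ~80 plies
#   Go:    ~250 legal moves per position over ~150 plies
# The figures are rough estimates, used only to illustrate the scale gap.
chess_tree = 35 ** 80     # roughly 10^123 sequences of moves
go_tree = 250 ** 150      # roughly 10^359 sequences of moves

print(f"chess game tree ~ 10^{len(str(chess_tree)) - 1}")
print(f"go game tree    ~ 10^{len(str(go_tree)) - 1}")
```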
In March 2016 it beat Lee Sedol—a 9th dan Go player and one of the highest ranked players in the world—with a score of 4–1 in a five-game match.
At the 2017 Future of Go Summit, AlphaGo won a three-game match against Ke Jie, who at the time had held the world No. 1 ranking continuously for two years. AlphaGo was initially trained with a supervised learning protocol, studying large numbers of games played by humans against each other, before being refined through self-play.
In 2017, an improved version, AlphaGo Zero, defeated AlphaGo 100 games to 0. AlphaGo Zero’s strategies were self-taught. AlphaGo Zero was able to beat its predecessor after just three days with less processing power than AlphaGo; in comparison, the original AlphaGo needed months to learn how to play.
Later that year, AlphaZero, a version of AlphaGo Zero generalized to handle any two-player game of perfect information, achieved superhuman ability at chess and shogi. Like AlphaGo Zero, AlphaZero learned solely through self-play.
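The sketch below illustrates the bare idea of learning purely from self-play, using a toy tic-tac-toe agent with a tabular value function. This is a deliberate simplification: AlphaGo Zero and AlphaZero instead combine a deep neural network with Monte Carlo tree search, but the loop of "play yourself, then push value estimates toward the game outcome" is the same in spirit.

```python
# Toy self-play learner for tic-tac-toe: a tabular value function is improved
# using only games the agent plays against itself (no human data). AlphaZero
# uses a deep network plus tree search instead of this lookup table.
import random

values = {}                 # board state (as a tuple) -> estimated value for 'X'
ALPHA, EPSILON = 0.2, 0.1
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def choose(board, player):
    """Epsilon-greedy move selection using the current value estimates."""
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < EPSILON:
        return random.choice(moves)
    best = max if player == "X" else min   # 'O' tries to minimise X's value
    return best(moves, key=lambda m: values.get(
        tuple(board[:m] + [player] + board[m+1:]), 0.0))

def self_play_game():
    board, player, history = [" "] * 9, "X", []
    while winner(board) is None and " " in board:
        move = choose(board, player)
        board[move] = player
        history.append(tuple(board))
        player = "O" if player == "X" else "X"
    result = {"X": 1.0, "O": -1.0, None: 0.0}[winner(board)]
    for state in history:                  # push visited states toward the outcome
        v = values.get(state, 0.0)
        values[state] = v + ALPHA * (result - v)

for _ in range(20000):
    self_play_game()
print(f"learned value estimates for {len(values)} positions")
```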
DeepMind researchers published a new model named MuZero that mastered the domains of Go, chess, shogi, and Atari 2600 games without human data, domain knowledge, or known rules.
Researchers applied MuZero to the real-world challenge of video compression, which accounts for a large share of Internet traffic on sites such as YouTube, Twitch, and Google Meet. The goal was to compress the video optimally, so that quality is maintained while the amount of data is reduced. The final result using MuZero was a 6.28% average reduction in bitrate.
In October 2022, DeepMind unveiled a new version of AlphaZero, called AlphaTensor, in a paper published in Nature. The system used reinforcement learning to discover faster ways to perform matrix multiplication, one of the most fundamental tasks in computing. For example, AlphaTensor found how to multiply two 4×4 matrices in mod-2 arithmetic using only 47 multiplications, unexpectedly beating the record of 49 multiplications that had stood since Strassen’s 1969 algorithm.
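To see what “fewer multiplications” means in practice, here is Strassen’s classic 1969 trick for 2×2 matrices, which uses 7 scalar multiplications instead of the naive 8. AlphaTensor searched for analogous decompositions of larger blocks; the snippet below is only the well-known Strassen identity, not AlphaTensor’s discovered scheme.

```python
# Strassen's 1969 trick: multiply two 2x2 matrices with 7 multiplications
# instead of the naive 8. AlphaTensor searched for analogous decompositions
# of larger blocks (e.g. 47 multiplications for 4x4 matrices mod 2).
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# Sanity check against the ordinary definition of matrix multiplication.
A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
expected = [[1*5 + 2*7, 1*6 + 2*8], [3*5 + 4*7, 3*6 + 4*8]]
assert strassen_2x2(A, B) == expected
```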
AlphaFold
In 2016, DeepMind turned its artificial intelligence to protein folding, a long-standing problem in molecular biology. In December 2018, DeepMind’s AlphaFold won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by successfully predicting the most accurate structure for 25 out of 43 proteins. “This is a lighthouse project, our first major investment in terms of people and resources into a fundamental, very important, real-world scientific problem,” Hassabis said to The Guardian. In 2020, in the 14th CASP, AlphaFold’s predictions achieved an accuracy score regarded as comparable with lab techniques. Dr Andriy Kryshtafovych, one of the panel of scientific adjudicators, described the achievement as “truly remarkable”, and said the problem of predicting how proteins fold had been “largely solved”.
In July 2021, the open-source RoseTTAFold and AlphaFold2 were released to allow scientists to run their own versions of the tools. A week later DeepMind announced that AlphaFold had completed its prediction of nearly all human proteins as well as the entire proteomes of 20 other widely studied organisms. The structures were released on the AlphaFold Protein Structure Database. In July 2022, it was announced that the predictions of over 200 million proteins, representing virtually all known proteins, would be released on the AlphaFold database.
WaveNet and WaveRNN
In 2016, DeepMind introduced WaveNet, a text-to-speech system. It was originally too computationally intensive for use in consumer products, but in late 2017 it became ready for use in consumer applications such as Google Assistant. In 2018 Google launched a commercial text-to-speech product, Cloud Text-to-Speech, based on WaveNet.
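For readers who want to try a WaveNet voice, the sketch below uses the Google Cloud Text-to-Speech Python client, which exposes WaveNet-based voices. It assumes the google-cloud-texttospeech package and valid Google Cloud credentials are available; the specific voice name is just one illustrative choice.

```python
# Illustrative use of Google Cloud Text-to-Speech, which offers WaveNet-based
# voices. Assumes the google-cloud-texttospeech package and valid credentials;
# the voice name below is one example, not the only option.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()
response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Hello from a WaveNet voice."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US", name="en-US-Wavenet-D"
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)
with open("output.mp3", "wb") as out:
    out.write(response.audio_content)   # save the synthesized speech
```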
In 2018, DeepMind introduced a more efficient model called WaveRNN, co-developed with Google AI. In 2019, Google began rolling out WaveRNN with WaveNetEQ to Google Duo users, and in 2020 WaveNetEQ, a packet loss concealment method based on a WaveRNN architecture, was publicly presented.
AlphaStar
In 2016, Hassabis discussed the game StarCraft as a future challenge, since it requires strategic thinking and handling imperfect information.
In January 2019, DeepMind introduced AlphaStar, a program playing the real-time strategy game StarCraft II. AlphaStar used reinforcement learning based on replays from human players, and then played against itself to enhance its skills. At the time of the presentation, AlphaStar had knowledge equivalent to 200 years of playing time. It won 10 consecutive matches against two professional players, although it had the unfair advantage of being able to see the entire field, unlike a human player who has to move the camera manually. A preliminary version in which that advantage was fixed lost a subsequent match.
In July 2019, AlphaStar began playing against random humans on the public 1v1 European multiplayer ladder. Unlike the first iteration of AlphaStar, which played only Protoss v. Protoss, this one played as all of the game’s races, and had earlier unfair advantages fixed. By October 2019, AlphaStar reached Grandmaster level on the StarCraft II ladder on all three StarCraft races, becoming the first AI to reach the top league of a widely popular esport without any game restrictions.
AlphaCode
In 2022, DeepMind unveiled AlphaCode, an AI-powered coding engine that writes computer programs at a level comparable to that of an average programmer; the company tested the system on coding challenges created by Codeforces and used in human competitive programming competitions. After being trained on GitHub code and Codeforces problems and solutions, AlphaCode achieved a ranking within the top 54% of participants, roughly the level of the median competitor. Safeguards were used so that the program generated original solutions rather than duplicating existing answers.
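A key part of AlphaCode’s approach is to sample many candidate programs and keep only those that pass the problem’s example tests before submitting. The sketch below mimics that filtering stage; the hard-coded candidate strings stand in for the language model’s samples, and the toy problem and tests are invented for illustration.

```python
# Sketch of the generate-and-filter idea: sample many candidate programs,
# keep only those that pass the example tests. A hard-coded list stands in
# for the language model's samples; the toy problem is "add two numbers".
candidate_programs = [
    "def solve(a, b): return a - b",   # wrong sample
    "def solve(a, b): return a + b",   # correct sample
    "def solve(a, b): return a * b",   # wrong sample
]
example_tests = [((2, 3), 5), ((10, 4), 14)]

def passes_tests(source):
    namespace = {}
    try:
        exec(source, namespace)        # run the candidate's definitions
        return all(namespace["solve"](*args) == expected
                   for args, expected in example_tests)
    except Exception:
        return False                   # crashing candidates are filtered out

survivors = [p for p in candidate_programs if passes_tests(p)]
print(f"{len(survivors)} of {len(candidate_programs)} candidates pass the example tests")
```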
DeepMind Sparrow
Sparrow is an artificial intelligence-powered chatbot developed by DeepMind as part of its effort to build safer machine learning systems; it is trained with human feedback and supports its answers with evidence retrieved via Google Search.
Chinchilla AI
Chinchilla is a 70-billion-parameter language model developed by DeepMind, notable for showing that, for a fixed compute budget, model size and the amount of training data should be scaled up in roughly equal proportion.
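As a quick worked check of that scaling rule of thumb (roughly 20 training tokens per model parameter, a common reading of the Chinchilla result), the arithmetic below reproduces the model’s published training budget of about 1.4 trillion tokens. The ratio is an approximation, not an exact prescription.

```python
# Rough illustration of the Chinchilla scaling heuristic: training tokens
# should grow in proportion to parameters (roughly 20 tokens per parameter).
params = 70e9                 # 70 billion parameters
tokens_per_param = 20         # approximate compute-optimal ratio
print(f"suggested training tokens: {params * tokens_per_param:.2e}")  # ~1.4e12
```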
Conclusion
Overall, DeepMind’s contributions to the field of AI have been nothing short of remarkable. The company’s cutting-edge research has the potential to transform a wide range of industries and improve people’s lives in countless ways. While concerns about the ethical implications of AI are certainly valid, it is clear that DeepMind is committed to advancing the field in a responsible and transparent manner. As such, the company is sure to remain a key player in the world of AI for many years to come.
Author: Com21.com. This article is an original creation by Com21.com. If you wish to repost or share, please include an attribution to the source and provide a link to the original article. Post link: https://www.com21.com/deepmind-review.html