Google DeepMind has introduced a powerful new AI system, AlphaEvolve, designed to solve complex problems in mathematics, computer science, and engineering. Described in a white paper released on 14 May, AlphaEvolve blends the creativity of large language models (LLMs) with advanced algorithms that refine its output to produce optimized solutions.
According to Mario Krenn of the Max Planck Institute for the Science of Light, the system marks a major milestone: “AlphaEvolve is the first successful demonstration of scientific discovery using general-purpose LLMs.”
Unlike AI tools tailored for specific tasks—such as AlphaFold for protein structure prediction—AlphaEvolve is a general-purpose system. It can generate and evolve code to address a wide variety of challenges, from unsolved math problems to real-world computing issues.
Practical Impact at Google
Pushmeet Kohli, head of science at DeepMind, says AlphaEvolve has already yielded practical benefits. It contributed to the design of next-generation AI chips (tensor processing units) and helped optimize Google’s global computing infrastructure, reducing resource use by 0.7%.
How It Works
Built on DeepMind’s Gemini LLMs, AlphaEvolve takes a user-defined problem, generates a wide range of candidate solutions, and uses a separate evaluator algorithm to score them. The strongest candidates seed the next round of generation, so the system evolves better solutions over time. “We explore a broad spectrum of ways to solve the problem,” says DeepMind AI scientist Matej Balog, who co-led the research.
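The loop described above can be sketched in a few lines of Python. This is only an illustration of the general generate-and-evaluate approach, not AlphaEvolve’s actual code: propose_variants (standing in for an LLM asked to edit a program) and evaluate (a user-supplied scoring function) are hypothetical names.

```python
# Minimal sketch of a generate-evaluate-evolve loop, assuming two user-supplied
# callables: propose_variants(program) -> list of modified programs, and
# evaluate(program) -> numeric score. Not AlphaEvolve's actual API.

def evolve(seed_program, propose_variants, evaluate,
           generations=10, population_size=20):
    # Keep a pool of (score, program) pairs.
    population = [(evaluate(seed_program), seed_program)]
    for _ in range(generations):
        # Use the strongest candidates as parents for the next round.
        ranked = sorted(population, key=lambda sp: sp[0], reverse=True)
        parents = ranked[:max(1, population_size // 4)]
        children = []
        for _, parent in parents:
            for child in propose_variants(parent):   # generator proposes edits
                children.append((evaluate(child), child))
        # Survivors: the best-scoring programs seen so far.
        population = sorted(population + children,
                            key=lambda sp: sp[0],
                            reverse=True)[:population_size]
    best_score, best_program = max(population, key=lambda sp: sp[0])
    return best_program, best_score
```

In AlphaEvolve’s case, the variants are code changes proposed by Gemini models and the evaluator is an automated test of each program’s quality, but the basic shape of the loop is the same.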
AlphaEvolve builds on DeepMind’s earlier system FunSearch, which excelled in solving abstract math problems. Compared to FunSearch, AlphaEvolve can handle larger, more complex code and apply its method across various scientific domains.
One major breakthrough is in matrix multiplication, a core computation in machine learning. For certain matrix sizes, AlphaEvolve discovered a multiplication method faster than the celebrated algorithm devised by Volker Strassen in 1969. It even outperformed AlphaTensor, a 2022 DeepMind tool designed specifically for matrix operations.
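For context, Strassen’s 1969 insight was that two 2×2 matrices can be multiplied with seven scalar multiplications instead of the schoolbook eight; applied recursively to matrix blocks, this makes large multiplications asymptotically cheaper. The sketch below reproduces that classical trick only, as a worked illustration of what such an algorithm looks like; AlphaEvolve’s improvements concern related block-multiplication schemes and are not shown here.

```python
# Strassen's 2x2 scheme: 7 multiplications (m1..m7) instead of 8.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

# Check against the schoolbook product (8 multiplications).
A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == [[1*5 + 2*7, 1*6 + 2*8],
                              [3*5 + 4*7, 3*6 + 4*8]]
```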
Limitations and Caution
The system shows promise for optimization problems and could extend to designing microscopes, telescopes, or materials. However, experts urge caution. Simon Frieder of the University of Oxford notes that AlphaEvolve is likely best suited to a narrow range of problems that can be expressed through code.
Huan Sun from Ohio State University remains skeptical until the tool is tested by external researchers: “I would take the results with a grain of salt until it’s used more widely.”
Despite being less resource-hungry than AlphaTensor, AlphaEvolve is still too computationally intensive to be made publicly available, according to Kohli. However, DeepMind hopes to open it to broader use and is inviting the scientific community to suggest applications. “We’re committed to ensuring that as many scientists as possible can access and benefit from AlphaEvolve,” Kohli adds.