Google Researchers Turn to AI to Design Future Chips and Get Moore's Law Back On Track
The growing complexity of modern semiconductors is slowing progress, but Google has a potential solution: letting AI do the layout.
Researchers from Google have published a paper detailing a potential route to keeping Moore's Law up and running for a few more years — by training an artificial intelligence to design future processors.
Moore's Law is the observation by Intel co-founder Gordon Moore that the number of transistors on leading-edge semiconductor parts doubles roughly every two years. While initially simply a historical observation, Moore's Law has become a must-hit target for the industry, but as the number of transistors increases, packing them all into a single chip becomes increasingly difficult.
"Today’s chips take years to design, resulting in the need to speculate about how to optimize the next generation of chips for the machine learning (ML) models of 2-5 years from now," explain researchers Anna Goldie and Azalia Mirhoseini in a joint blog post outlining the company's approach. "Dramatically shortening the chip design cycle would allow hardware to adapt to the rapidly advancing field of ML. What if ML itself could provide the means to shorten the chip design cycle, creating a more integrated relationship between hardware and ML, with each fuelling advances in the other?
"In 'Chip Placement with Deep Reinforcement Learning,' we pose chip placement as a reinforcement learning (RL) problem, where we train an agent (i.e, an RL policy) to optimize the quality of chip placements. Unlike prior methods, our approach has the ability to learn from past experience and improve over time. In particular, as we train over a greater number of chip blocks, our method becomes better at rapidly generating optimized placements for previously unseen chip blocks. Whereas existing baselines require human experts in the loop and take several weeks to generate, our method can generate placements in under six hours that outperform or match their manually designed counterparts. While we show that we can generate optimized placements for Google accelerator chips (TPUs), our methods are applicable to any kind of chip (ASIC)."
The team claims its approach is the first to offer generalisation: what the agent learns from laying out earlier netlists carries over to new ones, so the more the technology is used, the better it gets. Combined with pre-training, this allows for substantial reductions in placement cost.
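To make the reinforcement-learning framing concrete, the toy sketch below poses placement as a sequential episode: an agent drops one block per step onto a small grid and is rewarded for minimising total wirelength. Everything in it is hypothetical and for illustration only, including the four-block netlist, the grid size, and the simple preference-table policy; the system described in the paper trains a deep policy network over real chip netlists.

```python
# Toy illustration only -- not Google's implementation. One block is placed
# per step on a small grid, and the episode reward is the negative
# half-perimeter wirelength (HPWL) of the finished layout. The "policy" is a
# crude per-block preference table updated in a policy-gradient-flavoured way.
import math
import random

GRID = 4                                  # 4x4 placement canvas (hypothetical size)
BLOCKS = ["b0", "b1", "b2", "b3"]         # hypothetical macro blocks
NETS = [("b0", "b1"), ("b1", "b2"), ("b2", "b3"), ("b0", "b3")]  # two-pin nets

# One preference value per (block, grid cell); stands in for a policy network.
preference = {b: [0.0] * (GRID * GRID) for b in BLOCKS}

def sample_cell(block, taken):
    """Sample a free cell for `block` from a softmax over its preferences."""
    options = [(c, preference[block][c]) for c in range(GRID * GRID) if c not in taken]
    weights = [math.exp(p) for _, p in options]
    r, acc = random.random() * sum(weights), 0.0
    for (cell, _), w in zip(options, weights):
        acc += w
        if r <= acc:
            return cell
    return options[-1][0]

def hpwl(placement):
    """Half-perimeter wirelength summed over all two-pin nets."""
    cost = 0
    for a, b in NETS:
        (ra, ca), (rb, cb) = divmod(placement[a], GRID), divmod(placement[b], GRID)
        cost += abs(ra - rb) + abs(ca - cb)
    return cost

baseline, lr = 0.0, 0.1
for episode in range(2000):
    placement, taken = {}, set()
    for block in BLOCKS:                     # one placement action per block
        cell = sample_cell(block, taken)
        placement[block] = cell
        taken.add(cell)
    reward = -hpwl(placement)                # shorter wiring -> higher reward
    baseline += 0.05 * (reward - baseline)   # running-average baseline
    for block, cell in placement.items():    # reinforce cells that beat the baseline
        preference[block][cell] += lr * (reward - baseline)

print("final layout:", placement, "wirelength:", hpwl(placement))
```

Because the real policy is a neural network trained over many chip blocks rather than a per-chip lookup table, the experience it accumulates transfers to previously unseen netlists, which is the generalisation property the researchers highlight.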
"The ability of our approach to learn from experience and improve over time unlocks new possibilities for chip designers," the pair write. "As the agent is exposed to a greater volume and variety of chips, it becomes both faster and better at generating optimized placements for new chip blocks. A fast, high-quality, automatic chip placement method could greatly accelerate chip design and enable co-optimization with earlier stages of the chip design process. Although we evaluate primarily on accelerator chips, our proposed method is broadly applicable to any chip placement problem. After all that hardware has done for machine learning, we believe that it is time for machine learning to return the favour."
More information is available on the Google AI blog, or from the paper published on arXiv.org under open-access terms.