The company has revealed that porting a standard cell library, a task that previously took eight engineers 10 months to complete, can now be done overnight by a single GPU.
"We are trying to use AI wherever we can in our design process," Dally told Google's Jeff Dean. "I would love to have the end-to-end stage where I could simply say, 'design me the new GPU,' but I think we are a long way from that."
Nvidia is already using AI across multiple stages of chip design, from circuit-level optimizations to system-level exploration, achieving orders-of-magnitude productivity gains and, in some cases, better-than-human results, according to Dally.

AI has already transformed standard cell development, one of the most time-consuming steps in transitioning to a new fabrication process. Porting a standard cell library of roughly 2,500–3,000 cells previously required a team of eight engineers working for about 10 months, according to Dally.
Nvidia has replaced this work with a reinforcement learning system called NVCell, which can now complete the same task overnight on a single GPU.
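Dally did not describe the system's internals, but the underlying task — searching an enormous combinatorial layout space against a wirelength-style objective — can be sketched in miniature. The toy below uses simulated annealing, a classic baseline for this kind of layout optimization, on a hypothetical six-device netlist; the device names, netlist, and cost function are all illustrative, not Nvidia's:

```python
import math
import random

def wirelength(order, nets):
    # Half-perimeter proxy: for each net, the span between the leftmost
    # and rightmost connected device in the current 1-D placement order.
    pos = {dev: i for i, dev in enumerate(order)}
    return sum(max(pos[d] for d in net) - min(pos[d] for d in net)
               for net in nets)

def anneal_placement(devices, nets, steps=5000, seed=0):
    rng = random.Random(seed)
    order = list(devices)
    best = cur = wirelength(order, nets)
    best_order = order[:]
    temp = 2.0
    for _ in range(steps):
        # Propose swapping two devices in the placement.
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        cand = wirelength(order, nets)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability so the search can escape local minima.
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cur < best:
                best, best_order = cur, order[:]
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
        temp = max(0.01, temp * 0.999)  # cool the schedule
    return best_order, best

# Toy netlist: six devices wired in a ring.
devices = ["M0", "M1", "M2", "M3", "M4", "M5"]
nets = [("M0", "M1"), ("M1", "M2"), ("M2", "M3"),
        ("M3", "M4"), ("M4", "M5"), ("M0", "M5")]
order, cost = anneal_placement(devices, nets)
```

A production system searches billions of candidate layouts per cell under real design rules; the point here is only the shape of the loop: propose, score, accept or reject, repeat.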
At a higher level, Nvidia has developed internal large language models — ChipNeMo and Bug Nemo — trained on proprietary architecture documentation covering every GPU Nvidia has ever developed. These LLMs act as engineering assistants that can explain to junior designers how complex hardware blocks work, so teams no longer need to pull senior engineers away from their own work for questions an LLM can answer.
"We had a series of LLMs that we called Chip Nemo and Bug Nemo. We took a generic LLM, and then we fine-tuned it by feeding it all of the design documents proprietary to Nvidia," Dally said.
In the long term, Nvidia's chief scientist envisions chip development shifting to a multi-agent model in which specialized AI systems handle different parts of the design, much as human teams do today. For now, AI cuts development time by assisting engineers and pushes design quality beyond what humans can achieve on their own, which in turn lets engineers explore more design options than before.
Beyond cell libraries and engineering assistance, Nvidia is applying reinforcement learning to classical circuit design problems. "It comes up with totally bizarre designs that no human would ever come up with, but they are actually 20% or 30% better than the human designs," said Dally.
In addition to using AI for place and route, Nvidia is also using AI to explore architectural designs.
In particular, Nvidia's agent-based systems run large numbers of experiments, evaluate different design directions, and narrow the field to viable configurations. This greatly accelerates decision-making in the early stages of the chip development cycle, when engineers must choose between various architectural trade-offs.
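The exploration loop described here — run many experiments, score each candidate, discard anything that violates a constraint, rank the rest — can be illustrated with a deliberately simplified sweep. The knob names and the analytical cost model below are invented for illustration and bear no relation to Nvidia's actual parameters or models:

```python
from itertools import product

# Hypothetical architectural knobs for a design-space sweep.
SM_COUNTS   = [64, 96, 128, 144]
L2_SIZES_MB = [32, 48, 64]
CLOCKS_GHZ  = [1.8, 2.1, 2.4]

def estimate(sms, l2, clk):
    """Toy analytical model: (throughput, power) for one configuration.
    Both formulas use arbitrary scaling purely for demonstration."""
    throughput = sms * clk * (1 + 0.02 * l2)
    power      = 0.9 * sms * clk**2 + 1.5 * l2
    return throughput, power

def explore(power_budget, top_k=3):
    # Sweep the full space, drop configs over the power budget,
    # then rank the survivors by estimated throughput.
    viable = []
    for sms, l2, clk in product(SM_COUNTS, L2_SIZES_MB, CLOCKS_GHZ):
        tput, power = estimate(sms, l2, clk)
        if power <= power_budget:
            viable.append((tput, {"sms": sms, "l2_mb": l2, "ghz": clk}))
    viable.sort(key=lambda v: v[0], reverse=True)
    return viable[:top_k]

shortlist = explore(power_budget=450.0)
```

In a real flow the scoring step is a simulation or a learned performance model rather than a closed-form formula, and agents decide which regions of the space deserve more experiments; the pruning structure, however, is the same.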
Nvidia is also using AI for design verification, one of the longest stages in the chip development cycle. However, AI cannot yet handle the entire verification process, so Nvidia must still emulate its designs and run real-world experiments to ensure that everything works correctly.
"We would like to collapse that space, what the really long pole is design verification," said Dally. "We are particularly looking at how we can use AI to prove that designs work more quickly."