Lisa Su, CEO of Advanced Micro Devices, testifies during the Senate Commerce, Science and Transportation Committee hearing titled "Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation," in the Hart building on Thursday, May 8, 2025.
Tom Williams | CQ-Roll Call, Inc. | Getty Images
Advanced Micro Devices on Thursday unveiled new details about its next-generation AI chips, the Instinct MI400 series, which will ship next year.
The MI400 chips can be assembled into a full server rack called Helios, AMD said, which will allow thousands of the chips to be tied together in a way that lets them be used as one "rack-scale" system.
"For the first time, we architected every part of the rack as a unified system," AMD CEO Lisa Su said at a launch event in San Jose, California, on Thursday.
OpenAI CEO Sam Altman appeared on stage with Su and said his company would use the AMD chips.
"When you first started telling me about the specs, I was like, there's no way, that just sounds totally crazy," Altman said. "It's gonna be an amazing thing."
AMD's rack-scale setup will make the chips look to a user like one system, which is important for most artificial intelligence customers, such as cloud providers and companies that develop large language models. Those customers want "hyperscale" clusters of AI computers that can span entire data centers and use massive amounts of power.
"Think of Helios as really a rack that functions like a single, massive compute engine," said Su, comparing it against Nvidia's Vera Rubin racks, which are expected to be released next year.
OpenAI CEO Sam Altman poses during the Artificial Intelligence (AI) Action Summit at the Grand Palais in Paris on February 11, 2025.
Joel Saget | Afp | Getty Images
AMD's rack-scale technology also enables its latest chips to compete with Nvidia's Blackwell chips, which already come in configurations with 72 graphics processing units stitched together. Nvidia is AMD's primary and only rival in big data center GPUs for developing and deploying AI applications.
OpenAI, a notable Nvidia customer, has been giving AMD feedback on its MI400 roadmap, the chip company said. With the MI400 chips and this year's MI355X chips, AMD is planning to compete against rival Nvidia on price, with a company executive telling reporters on Wednesday that the chips will cost less to operate thanks to lower power consumption, and that AMD is undercutting Nvidia with "aggressive" prices.
So far, Nvidia has dominated the market for data center GPUs, partly because it was the first company to develop the kind of software needed for AI developers to take advantage of chips originally designed to display graphics for 3D games. Over the previous decade, before the AI boom, AMD focused on competing against Intel in server CPUs.
Su said that AMD's MI355X can outperform Nvidia's Blackwell chips, despite Nvidia's use of its "proprietary" CUDA software.
"It says that we have really strong hardware, which we always knew, but it also shows that the open software frameworks have made tremendous progress," Su said.
AMD shares are flat so far in 2025, signaling that Wall Street doesn't yet see it as a major threat to Nvidia's dominance.
Andrew Dieckmann, AMD's general manager for data center GPUs, said Wednesday that AMD's AI chips would cost less both to operate and to acquire.
"Across the board, there is a meaningful cost of acquisition delta that we then layer on our performance competitive advantage on top of, so significant double-digit percentage savings," Dieckmann said.
Over the next few years, big cloud companies and countries alike are poised to spend hundreds of billions of dollars to build new data center clusters around GPUs in order to accelerate the development of cutting-edge AI models. That includes $300 billion this year alone in planned capital expenditures from megacap technology companies.
AMD expects the total market for AI chips to exceed $500 billion by 2028, although it hasn't said how much of that market it can claim. Nvidia currently holds over 90% of the market, according to analyst estimates.
Both companies have committed to releasing new AI chips on an annual basis, as opposed to a biannual basis, underscoring how fierce competition has become and how important bleeding-edge AI chip technology is for companies like Microsoft, Oracle and Amazon.
AMD has bought or invested in 25 AI companies in the past year, Su said, including the purchase earlier this year of ZT Systems, a server maker that developed the technology AMD needed to build its rack-sized systems.
"These AI systems are getting super complicated, and full-stack solutions are really critical," Su said.
What AMD is selling now
Currently, the most advanced AMD AI chip being installed by cloud providers is its Instinct MI355X, which the company said started shipping in production last month. AMD said it will be available for rent from cloud providers starting in the third quarter.
Companies building massive data center clusters for AI want alternatives to Nvidia, not only to keep costs down and provide flexibility, but also to fill a growing need for "inference," the computing power needed to actually deploy a chatbot or generative AI application, which can use much more processing power than traditional server applications.
"What has really changed is the demand for inference has grown significantly," Su said.
AMD officials said Thursday that they believe their new chips are superior to Nvidia's for inference. That's because AMD's chips are equipped with more high-speed memory, which allows bigger AI models to run on a single GPU.
The MI355X has seven times the computing power of its predecessor, AMD said. Those chips will be able to compete with Nvidia's B100 and B200 chips, which have been shipping since late last year.
AMD said its Instinct chips have been adopted by seven of the 10 largest AI customers, including OpenAI, Tesla, xAI and Cohere.
Oracle plans to offer clusters with over 131,000 MI355X chips to its customers, AMD said.
Officials from Meta said Thursday that the company is using clusters of AMD's CPUs and GPUs to run inference for its Llama model, and that it plans to buy AMD's next-generation servers.
A Microsoft representative said the company uses AMD chips to serve its Copilot AI features.
Competing on price
AMD declined to say how much its chips cost, as it doesn't sell chips by themselves and end users typically buy them through a hardware company like Dell or Super Micro Computer, but the company is planning for the MI400 chips to compete on price.
The Santa Clara company is pairing its GPUs with its CPUs and with networking chips from its 2022 acquisition of Pensando to build its Helios racks. That means broader adoption of its AI chips should also benefit the rest of AMD's business. It's also using an open-source networking technology called UALink to closely integrate its rack systems, as opposed to Nvidia's proprietary NVLink.
AMD claims its MI355X can deliver 40% more tokens, a measure of AI output, per dollar than Nvidia's chips, because its chips use less power than its rival's.
Data center GPUs can cost tens of thousands of dollars per chip, and cloud companies typically buy them in large quantities.
AMD's AI chip business is still much smaller than Nvidia's. The company said it had $5 billion in AI sales in its fiscal 2024, but JPMorgan analysts expect 60% growth in the category this year.