Advanced Micro Devices (AMD) Q1 2024 Earnings Call

Precise News

Advanced Micro Devices (AMD -1.14%) Q1 2024 Earnings Call, 5:00 p.m. ET



Contents:

Prepared Remarks

Questions and Answers

Call Participants

Prepared Remarks:

Operator.

Thank you for joining us for the AMD first-quarter 2024 conference call. [Operator instructions] Please note that this conference is being recorded. It is now my pleasure to introduce your host, Mitch Haws, vice president of investor relations. Thank you, Mitch. You may begin.

Mitch Haws — Vice President, Investor Relations.

Thank you, and welcome to AMD's first-quarter 2024 financial results conference call. By now, you should have had the opportunity to review a copy of our earnings press release and the accompanying slides. If you have not, you can find these materials on the Investor Relations page of AMD.com. We will refer primarily to non-GAAP financial measures during today's call. The full non-GAAP to GAAP reconciliations are available in today's press release and in the slides posted on our website.

Joining me on today's call are Dr. Lisa Su, our chair and chief executive officer; and Jean Hu, our executive vice president, chief financial officer, and treasurer. This is a live call that will be replayed via webcast on our website. Before we begin, let me mention that Mark Papermaster, our executive vice president and chief technology officer, will attend the TD Cowen Technology, Media and Telecom Conference on May 29; and Jean Hu will attend the J.P. Morgan Global Technology, Media and Communications Conference on Tuesday, May 21; the Bank of America Global Technology Conference on Wednesday, June 5; and the Jefferies Nasdaq Investor Conference on Tuesday, June 11.

Today's discussion contains forward-looking statements based on current beliefs, assumptions, and expectations that speak only as of today. These statements involve risks and uncertainties that could cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release for additional information on the factors that could cause actual results to differ materially.

With that, I'll turn the call over to Lisa.

Lisa Su — President and Chief Executive Officer.

Thank you, Mitch, and good afternoon to all of you joining us today. This is an incredibly exciting time for the industry, as widespread deployment of AI is driving demand for significantly more compute across a broad range of markets. We are executing very well against this opportunity as we expand our data center business and integrate AI capabilities across our product portfolio. First-quarter revenue grew to $5.5 billion.

Data center and client segment sales each grew by more than 80% year over year, enabling us to expand gross margin by more than two percentage points and increase profitability. Data center segment revenue reached a record $2.3 billion, up 80% year over year and 2% sequentially. The strong ramp of AMD Instinct MI300X GPU shipments and a double-digit percentage increase in server CPU sales drove the significant year-over-year growth. We believe we gained server CPU revenue share in the seasonally weak first quarter, driven by increased enterprise adoption and expanded cloud deployments.

In the cloud, while the overall demand environment remained mixed, hyperscalers continued deploying fourth-generation EPYC processors to power more of their internal workloads and public instances. Amazon, Microsoft, and Google expanded their fourth-generation EPYC processor offerings with new instances and regional deployments, bringing the total number of AMD-powered public instances available globally to nearly 900. In the enterprise, demand has improved as CIOs need to add general-purpose and AI compute capacity while working within the physical footprint and power constraints of their existing infrastructure. This scenario aligns directly with the value proposition of our EPYC processors.

Thanks to our high core counts and energy efficiency, we can deliver the same amount of compute with 45% fewer servers than the competition, reducing initial capex by up to half and annual opex by more than 40%. As a result, enterprise adoption of EPYC CPUs is growing, as seen in deployments with large companies including American Airlines, DBS, Emirates Bank, Shell, and STMicro. We are also gaining traction with AMD-powered solutions for the most widely used database and ERP software. To illustrate, the latest version of Oracle Exadata, the industry-leading database system used by 76 of the Fortune 100, is powered exclusively by fourth-generation EPYC processors.

We are very excited about our upcoming Zen 5 core EPYC processors, part of the next-generation Turin family. Turin silicon is looking fantastic, and we are sampling it broadly. Given Turin's significant performance and efficiency gains, we are well positioned to capture an even larger share of first- and third-party workloads in the cloud. In addition, our server partners are developing 30% more Turin platforms than they did for fourth-generation EPYC, which will expand our enterprise SAM with new solutions optimized for additional workloads.

Turin remains on track to launch later this year. Turning to our broader data center portfolio, we delivered record data center GPU revenue for the second consecutive quarter, with MI300 becoming the fastest-ramping product in AMD history, surpassing $1 billion in total sales in less than two quarters. MI300X production deployments expanded at Microsoft, Meta, and Oracle, powering generative AI training and inferencing for a wide range of public offerings as well as internal workloads. In the enterprise, we are working closely with Dell, HP, Lenovo, Supermicro, and other partners, with multiple MI300 platforms entering volume production this quarter.

In addition, more than 100 AI and enterprise customers are actively developing or deploying MI300X. On the AI software front, we made significant progress by incorporating AMD hardware support into the OpenAI Triton compiler, which will make it easier to develop highly performant AI applications for AMD platforms. We also released a major update to our ROCm software stack that significantly improves generative AI performance by adding advanced attention algorithms and support for sparsity and FP8, along with new features like video decode. Other improvements include expanded support for open-source libraries like vLLM and frameworks like JAX. AI workloads are running very well with our partners.

As we jointly optimize for their models, MI300X GPUs are delivering leadership inferencing performance and substantial TCO advantages compared to H100. For example, several of our partners are seeing significant increases in tokens per second when running their flagship LLMs on MI300X versus H100. In addition, as a founding member of the Ultra Ethernet Consortium, we are working to optimize the widely used Ethernet protocol to support AI workloads at data center scale, and we are committed to fostering the broad ecosystem needed to power the next generation of AI systems. Demand for MI300 continues to grow.

Based on our expanding customer engagements, we now expect data center GPU revenue to exceed $4 billion in 2024, up from the $3.5 billion we guided in January. Looking ahead, we are working even more closely with our cloud and enterprise customers as we expand the footprint of our data center GPUs and accelerate and broaden our AI hardware and software roadmaps. Turning to our client segment, revenue was $1.4 billion, up 85% year over year, driven by strong demand from OEMs and the channel for our latest-generation Ryzen mobile and desktop processors. Sequentially, client segment revenue declined 6%.

Demand for our latest generation of Ryzen CPUs was strong in the first quarter. Ryzen mobile CPU sales nearly doubled year over year as new Ryzen 8040 notebook designs from Acer, Asus, HP, Lenovo, and others ramped, and Ryzen desktop CPU sales grew by a strong double-digit percentage year over year. Earlier this month, we expanded our lineup of industry-leading enterprise PC offerings with our Ryzen Pro 8000 processors. Our Ryzen Pro 8000 series desktop CPUs are the first to offer dedicated on-chip AI accelerators in commercial desktop PCs, and our Ryzen Pro 8040 mobile CPUs deliver industry-leading performance and battery life for business notebooks.

We see clear opportunities to grow our commercial PC share based on the performance and efficiency advantages of our Ryzen Pro portfolio and an expanded set of AMD-powered commercial PCs from our OEM partners. Looking at the broader PC market, we believe it is on track to return to annual growth in 2024, driven by the start of an enterprise refresh cycle and the adoption of AI PCs. AI represents the biggest inflection point in the PC industry since the internet, with the potential to deliver unprecedented gains in productivity and usability. We are working closely with Microsoft and a broad ecosystem of partners to enable the next generation of AI experiences powered by Ryzen processors, with more than 150 ISVs on track to be developing for AMD AI PCs by the end of the year.

Later this year, we will take the next major step in our AI PC roadmap with the launch of our next-generation Ryzen mobile processors, codenamed Strix. Customer interest in Strix is very high, driven by the significant performance and energy-efficiency improvements it delivers. Because Strix enables next-generation AI experiences in notebooks that are thinner, lighter, and faster than ever before, design win momentum for premium notebooks is outpacing prior generations. We are excited about the growth prospects for the PC market.

We also expect to grow our revenue share this year on the strength of our Ryzen CPU portfolio. Now turning to our gaming segment. Revenue declined 48% year over year and 33% sequentially to $922 million. First-quarter semi-custom SoC sales declined in line with our projections, as we are now in the fifth year of the console cycle.

Gaming graphics revenue declined year over year and sequentially. We expanded our 7000 series graphics card lineup with the global launch of the Radeon RX 7900 GRE, and we introduced AMD Fluid Motion Frames, a driver-based technology that can deliver large performance gains in thousands of games. Now turning to our embedded segment. Revenue declined 46% year over year and 20% sequentially to $846 million, as customers continued to focus on normalizing their inventory levels.

We launched the Spartan UltraScale+ FPGA family with high I/O counts, power efficiency, and state-of-the-art security features, and we are seeing a strong pipeline of growth for our cost-optimized embedded portfolio across multiple markets. Given current embedded market conditions, we now project second-quarter embedded segment revenue to be flat sequentially, with a gradual recovery in the second half of the year. Longer term, we see edge AI as a significant growth opportunity that will drive increased demand for compute across a wide range of devices. To address this demand, we announced our second generation of Versal adaptive SoCs, which deliver 10x greater scalar compute performance and 3x more AI TOPS per watt than our prior generation of industry-leading adaptive SoCs.

Versal Gen 2 adaptive SoCs are the only technology that combines multiple compute engines to handle AI pre-processing, inferencing, and post-processing on a single chip, enabling customers to rapidly add highly performant and efficient AI capabilities to a broad range of products. We were pleased to have Subaru join our launch to announce that the next generation of its EyeSight ADAS system will be powered by Versal AI Edge Series Gen 2 devices. Embedded design win momentum remains very strong as more customers adopt our full portfolio of FPGAs, CPUs, GPUs, and adaptive SoCs to address a larger portion of their compute needs. In summary, we delivered a strong first quarter, positioning us for robust annual revenue growth and expanded gross margin, driven by the growing traction of our Instinct, EPYC, and Ryzen product lines.

Our priorities for 2024 are very clear: accelerate our data center growth by ramping Instinct GPU production and gaining share with our EPYC processors, launch our next-generation Zen 5 PC and server processors to extend our performance leadership, and expand our portfolio of differentiated adaptive computing solutions. Looking further ahead, AI has created an unprecedented set of opportunities for AMD. While AI infrastructure build-outs have already grown significantly, we believe we are still in the early stages of what we expect to be a period of sustained growth, driven by insatiable demand for both high-performance AI and general-purpose compute. To capture this significant growth opportunity, we have increased our investments across the company. We have accelerated our AI hardware roadmaps, partnered closely with the largest AI companies to co-optimize solutions for their most critical workloads, and rapidly expanded our AI software stack.

We are very excited about the trajectory of the business and the significant growth opportunities ahead. Now I'd like to turn the call over to Jean to provide some additional color on our first-quarter results. Jean?

Jean Hu — Executive Vice President, Chief Financial Officer, and Treasurer.

Thank you, Lisa, and good afternoon, everyone. I'll start with a review of our financial results and then provide our current outlook for the second quarter of fiscal 2024. In the first quarter, we delivered strong year-over-year revenue growth in our data center and client segments and expanded gross margin by 230 basis points.

Revenue for the first quarter of 2024 was $5.5 billion, up 2% year over year, as growth in the data center and client segments was largely offset by lower revenue in our gaming and embedded segments. Revenue declined 11% sequentially, as higher data center revenue from the ramp of our AMD Instinct GPUs was offset by lower gaming and embedded segment revenue. Gross margin was 52%, up 230 basis points year over year, primarily driven by higher revenue contribution from the data center and client segments, partially offset by lower revenue contribution from the embedded and gaming segments. Operating expenses were $1.7 billion, up 10% year over year, as we invest in R&D and marketing to capture the enormous AI growth opportunities ahead of us.

Operating income was $1.1 billion, representing a 21% operating margin. Taxes, interest expense, and other items were $120 million. Diluted earnings per share for the first quarter of 2024 was $0.62, up 3% year over year. Now turning to our reportable segments, starting with the data center.

Data center segment revenue was a record $2.3 billion, up 80%, or more than $1 billion, year over year, driven by the ramp of AMD Instinct GPUs with cloud and enterprise customers and strong double-digit percentage growth in our server processor revenue on the adoption of our Zen 4 products. Data center accounted for over 40% of total revenue. On a sequential basis, revenue increased 2%, as the ramp of AMD Instinct GPUs more than offset a seasonal decline in server CPU sales. Data center segment operating income was $541 million, or 23% of revenue, compared to $148 million, or 11%, a year ago.

Operating income grew 266% year over year due to operating leverage, even with significantly higher R&D investment. Client segment revenue was $1.4 billion, up 85% year over year, driven primarily by Ryzen 8000 series processors. Sequentially, client segment revenue declined 6%. On higher revenue, client segment operating income improved to $86 million, or 6% of revenue, from an operating loss of $172 million a year ago.

Gaming segment revenue was $922 million, down 48% year over year and 33% sequentially, due to a decrease in semi-custom revenue and lower Radeon GPU sales. Gaming segment operating income declined to $151 million, or 16% of revenue, from $314 million, or 18%, a year ago. Embedded segment revenue was $846 million, down 46% year over year and 20% sequentially, as customers continued to manage their inventory levels. Embedded segment operating income declined to $342 million, or 41% of revenue, from $798 million, or 51%, a year ago.

Turning to the balance sheet and cash flow. We generated $521 million of cash from operations in the quarter, and free cash flow was $379 million. Inventory increased sequentially by $301 million to $4.7 billion, primarily to support the continued ramp of data center and client products in advanced process nodes. At the end of the quarter, cash, cash equivalents, and short-term investments were $6 billion.

As a reminder, we have $750 million of debt maturing this June. Given our strong liquidity, we plan to repay this debt with existing cash. Now turning to our outlook for the second quarter of 2024. We expect revenue to be approximately $5.7 billion, plus or minus $300 million.

Sequentially, we expect data center segment revenue to increase by a double-digit percentage, driven primarily by the data center GPU ramp; client segment revenue to increase; embedded segment revenue to be flat; and, based on current demand signals, gaming segment revenue to decline by a significant double-digit percentage. Year over year, we expect data center and client segment revenue to increase significantly on the strength of our product portfolio, and embedded and gaming segment revenue to decline by significant double-digit percentages. In addition, for the second quarter we expect non-GAAP gross margin of approximately 53%, non-GAAP operating expenses of approximately $1.8 billion, a non-GAAP effective tax rate of 13%, and a diluted share count of approximately 1.64 billion shares. In closing, we had a strong start to the year.

We delivered year-over-year revenue growth in our data center and client segments, expanded gross margin, and made significant progress on our strategic priorities. We believe our investments position us very well to capture the significant AI opportunities ahead. With that, I'll turn it back to Mitch for the Q&A session.

Mitch Haws — Vice President, Investor Relations.

Thank you, Jean. Paul, we're ready to take questions from the audience.

Questions and Answers:

Operator.

[Operator instructions] Our first question comes from Toshiya Hari with Goldman Sachs. Please proceed with your question.

Toshiya Hari — Goldman Sachs — Analyst.

Hi, thank you so much for taking the question. Lisa, my first question is on the MI300. You're raising the full-year outlook from $3.5 billion to $4 billion. Any color you can provide there would be greatly appreciated. What's driving the incremental $500 million in revenue? Is it more cloud or enterprise customers? Is it new customers?

And then on the supply side, there have been reports or discussions suggesting that CoWoS and HBM could be significant constraints for you. Any color on how you're approaching the supply side of the business would also be helpful. And I have a quick follow-up after that.

Lisa Su — President and Chief Executive Officer.

Great. Thank you, Toshiya, for the question. Look, the MI300 ramp is going really well. If we look at just what's happened over the last 90 days, we've been working very closely with our customers to qualify MI300 in their production data centers, from both a hardware and a software standpoint.

So far, things are going quite well, and what we see now is just greater visibility, with both current customers as well as new customers committing to MI300. So that gives us the confidence to go from $3.5 billion to $4 billion. And I view this very much as a very dynamic market, and there are lots of customers. As we said in the prepared remarks, we have over 100 customers that we're engaged with in both development as well as deployment.

So overall the ramp is going really well. As it relates to the supply chain, actually I would say I’m very pleased with how supply has ramped. It is absolutely the fastest product ramp that we have done. It’s a very complex product.

Chiplets, CoWoS, 3D integration, HBM. And so far it's gone extremely well. We've gotten great support from our partners. And I would say even in the quarter that we just finished, we actually did a little bit better than expected compared to where we started the quarter.

I think Q2 will be another significant ramp, and we're going to ramp supply every quarter this year. So I think the supply chain is going well. We are tight on supply, so there's no question in the near term that if we had more supply, we have demand for that product.

And we’re going to continue to work on those elements as we go through the year. But I think both on the demand side and the supply side, I’m very pleased with how the ramp is going.

Toshiya Hari — Goldman Sachs — Analyst.

Thank you for all the details. And then as my follow-up, I was hoping you could speak to your data center GPU roadmap beyond the MI300. The other concern that we hear is your nearest competitor has been pretty transparent with their roadmap, and that extends into ’25 and oftentimes ’26. So maybe this isn’t the right venue for you to give too much.

But beyond the MI300, how should we think about your roadmap and your ability to compete in data center? Thank you.

Lisa Su — President and Chief Executive Officer.

Yeah, sure. So look, Toshiya, when we start with the roadmap, we always think about it as a multi-year, multi-generational roadmap. So we have the follow-ons to MI300 as well as the next-next generations well in development. I think what is true is we're getting much closer to our top AI customers.

They’re actually giving us significant feedback on the roadmap and what we need to meet their needs. Our chiplet architecture is actually very flexible, and so that allows us to actually make changes to the roadmap as necessary. So we’re very confident in our ability to continue to be very competitive. Frankly, I think we’re going to get more competitive.

Right now, I think MI300X is in a sweet spot for inference, very, very strong inference performance. I see as we bring in additional products later this year into 2025, that will continue to be a strong spot for us, and then we’re also enhancing our training performance and our software roadmap to go along with it. So more details to come in the coming months. But we have a strong roadmap that goes through the next couple of years, and it is informed by just a lot of learning in working with our top customers.

Toshiya Hari — Goldman Sachs — Analyst.

Appreciate it. Thank you.

Lisa Su — President and Chief Executive Officer.

Sure.

Operator.

Our next question is from Ross Seymore with Deutsche Bank. Please proceed with your question.

Ross Seymore — Deutsche Bank — Analyst.

Hi, thanks for letting me ask a question. Lisa, on the non-AI side of the data center business, it sounds like the enterprise side has some good traction, even with the seasonal sequential decline. But I was just wondering, what's implied in your second-quarter guidance for the data center CPU side of things? And generally speaking, how are you seeing that whole GPU-versus-CPU crowding-out dynamic playing out for the rest of 2024?

Lisa Su — President and Chief Executive Officer.

Yeah, sure. Thanks for the question, Ross. Look, I think our EPYC business has actually performed quite well. There is some variation in the market.

I think some of the cloud customers are still working through how they optimize their compute, so it depends on the customer. That said, we saw some very positive early indicators in the enterprise during the first quarter, with some major customers initiating refresh programs, and we see Genoa's strong value proposition resonating across the enterprise.

We expect overall data center revenue to grow by a double-digit percentage sequentially in the second quarter, and within that, we expect the server business to grow as well. As we get into the second half of the year, I think there are a few drivers for us. We do expect some improvement in overall market conditions for servers, and we will also launch our Turin family in the second half.

We believe that will also extend our leadership position in the server market. So overall, I think the business is performing well, and we believe we're very well positioned to continue gaining share throughout the year.

Ross Seymore — Deutsche Bank — Analyst.

Thanks for that. And I suppose as a follow-up, flipping to the client side, I saw that you guided it up sequentially. Could you give any sort of magnitude on that for the coming quarter? And on the AI PC side more generally, are you thinking of it primarily as an ASP driver or as a unit driver, or will it be both?

Lisa Su — President and Chief Executive Officer.

Yeah, sure. So again, I think we're very excited about the AI PC, in the near term and even more so over the medium term. I think the client business is performing well, on both the OEM side and in the channel. In the second quarter, we expect client revenue to grow sequentially. And to your question on units versus ASPs, as we go into the second half of the year, I think we expect some lift in both.

Looking at Strix and our AI PC lineup, they are exceptionally well-suited to the premium segments of the market, and I think that's also where you'll see the strongest AI PC content first. Then, as we go into 2025, you'll see it proliferate across the rest of the portfolio.

Ross Seymore — Deutsche Bank — Analyst.

Thank you.

Lisa Su — President and Chief Executive Officer.

Thank you. Thanks, Ross.

Operator.

Thank you. Our next question is from Matt Ramsay with TD Cowen. Please proceed with your question.

Matt Ramsay — TD Cowen — Analyst.

Yes. Thank you very much. Good afternoon, everybody. Lisa, I have sort of a longer-term question and then a shorter-term follow-up.

One, I guess, is — one of the questions that I've been getting from folks a lot is, obviously, your primary competitor has announced, I guess, a multiyear roadmap. And we continue to hear more and more from other folks about internal ASIC programs at some of your primary customers, whether they be for inference, or training, or both. I guess it would be really helpful if you could talk to us about how your conversations go with those customers, how committed they are to your long-term, multi-generation roadmap, as you described it, how they weigh investments in their internal silicon versus using a merchant supplier like yourselves, and maybe what advantages the experience across a large footprint of customers can give your company that those guys doing internal ASICs might not get. Thanks.

Lisa Su — President and Chief Executive Officer.

Yeah, sure, Matt. Thanks for the question. So look, I think one of the things that we see, and we've said, is that the TAM for AI compute is growing extremely quickly, and we see that continuing to be the case in all of our conversations. We had indicated a 2027 TAM of, let's say, $400 billion.

At the time, that might have seemed aggressive to some people. But based on our conversations with customers, the overall demand for AI compute is very high. And as you have seen, some of the largest cloud service providers have just made announcements that demonstrate this. I think there are a few aspects to it.

First off, we have excellent working relationships with most of the leading AI companies, and the idea there is that we want to innovate together. When you look at these large language models and everything you need for training and inferencing, I don't believe there is a single one-size-fits-all solution; there will be numerous solutions. The GPU is still the preferred architecture, especially as the algorithms and the models continue to evolve over time, and that favors our architecture as well as our ability to really optimize CPU together with GPU.

So from my standpoint, I think we're very happy with the partnerships that we have. I think this is a huge opportunity for all of us to really innovate together, and we see a very strong commitment to working together over multiple years going forward. And that's, I think, a testament to some of the work that we've done in the past, and that very much is what happened with the EPYC roadmap as well.

Matt Ramsay — TD Cowen — Analyst.

Thank you, Lisa. As a slightly abbreviated follow-up: having watched the company closely for a long time, I believe there has always been noise in the system, whether the stock was as low as $2 or as high as $200. Either way, there has been a kind of constant noise.

But the last month and a half has been extreme in that sense. And so I wanted to ask — I've gotten random reports in my inbox about changes in demand from some of your MI300 customers, or in their planned demand for consuming your product. I think you answered earlier about the supply situation and how you're working with your partners there. But has there been any change from the customers that you're ramping with now, or soon will be, in their intentions for demand? Or has that in fact strengthened rather than weakened in recent periods? Because I keep getting questions about it.

Thanks.

Lisa Su — President and Chief Executive Officer.

Sure, Matt. Look, I think I might have said it earlier, but maybe I’ll repeat it again. I think the demand side is actually really strong. And what we see with our customers, and what we are tracking very closely, is customers moving from, let’s call it, initial POCs to pilots, to full-scale production, to deployment across multiple workloads.

And we’re moving through that sequence very well. I feel very good about the deployments and ramps that we have ongoing right now, and I also feel very good about new customers who are sort of earlier on in that process. So from a demand standpoint, we continue to build backlog as well as build engagements going forward. And similarly on the supply standpoint, we’re continuing to build supply momentum.

But from a speed-of-ramp standpoint, I’m actually really pleased with the progress.

Matt Ramsay — TD Cowen — Analyst.

All right. Thank you very much.

Operator.

Thank you. Our next question is from Aaron Rakers with Wells Fargo. Please proceed with your question.

Aaron Rakers — Wells Fargo Securities — Analyst.

Yeah. Thanks for taking the question, and I apologize if I missed this earlier. But I know last quarter, you talked about having a — securing enough capacity to support significant upside to the ramp of the MI300. I know that you upped your guide now to $4 billion.

I’m curious to how you would characterize the supply relative to that context offered last quarter as we think about that new kind of target of $4 billion. Would you characterize it as still having supply capacity upside potential? Thank you.

Lisa Su — President and Chief Executive Officer.

Yes, Aaron. So we’ve said before that our goal is to ensure that we have supply that exceeds the current guidance, and that is true. Even though we have increased our guidance from $3.5 billion to $4 billion, we still have significant supply visibility beyond that.

Aaron Rakers — Wells Fargo Securities — Analyst.

Fair enough. OK. Thank you. And then as a quick follow-up, going back to the earlier question on server demand, more traditional server.

As you see the ramp of maybe share opportunities in more traditional enterprise, I’m curious to how you would characterize the growth that you expect to see in more traditional server CPU market as we move through ’24 or even longer term, how you characterize that growth trend.

Lisa Su — President and Chief Executive Officer.

Yeah, I think, Aaron, what I would say is the need for refresh of, let’s call it, older equipment is certainly there, so we see a refresh cycle coming. We also see AI head nodes as another place where we see growth in, let’s call it, the more traditional market. Our sweet spot is really in the highest-performance, high-core-count, energy-efficiency space, and that is playing out well. And we’ve traditionally been very strong in, let’s call it, cloud first-party workloads.

And that is now extending to cloud third-party workloads where we see enterprises who are, let’s call it, in more of a hybrid environment adopting AMD both in the cloud and on-prem. So I think overall, we see it as a continued good progression for us with the server business going through 2024 and beyond.

Aaron Rakers — Wells Fargo Securities — Analyst.

Yeah. Thank you.

Lisa Su — President and Chief Executive Officer.

Thanks.

Operator.

Thank you. Our next question is from Vivek Arya with Bank of America Securities. Please proceed with your question.

Vivek Arya — Bank of America Merrill Lynch — Analyst.

Thanks for taking my question. Lisa, I just wanted to go back to the supply question and the $4 billion outlook for this year. I think at some point, there was a suggestion that the $4 billion number, right, that there are still supply constraints. But I think at a different point, you said that you have supply visibility significantly beyond that.

Given that we are almost at the middle of the year, I would have thought that you would have much better visibility about the back half. So is the $4 billion number a supply constraint number? Or is it a demand constraint number? Or alternatively, if you could give us some sense of what the exit rate of your GPU sales could be. I think on the last call, $1.5 billion was suggested. Could it be a lot more than that in terms of your exit rate of MI for this year?

Lisa Su — President and Chief Executive Officer.

Yeah, Vivek. Let me try to make sure we answer this question clearly. When you look at the full year, the $4 billion number is not supply capped; supply is not the limiter there.

We are capable of supplying more than that. It is more back-half weighted. So if you’re looking at the near term, for example in the second quarter, we do have more demand than we have supply right now, and we’re continuing to work on pulling in some of that supply. By the way, I think that’s true across the industry.

It’s not unique to AMD. I think overall, AI demand has exceeded everyone’s expectations in 2024. You’ve heard that from the memory guys, and you’ve heard it from the foundry guys.

We’re all ramping capacity as we go through the year. And we do have excellent visibility into what is going on. We continue to have fantastic customer engagements, as I mentioned. As we ramp products, my goal is to make sure we meet all of the milestones.

And as we pass those milestones, we incorporate that into the overall guidance for AI for the full year. That being said, things are actually moving along quite nicely in terms of customer progression. We keep adding new customers and expanding the workloads with our existing customers. Hopefully, Vivek, that makes things more clear.

Vivek Arya — Bank of America Merrill Lynch — Analyst.

Alright. Thank you, Lisa. Maybe one more on MI, but perhaps more on the embedded business. Regarding Q2 and the second-half rebound, you seem to be speaking a little more cautiously, which is consistent with what we have heard from many of your peers with auto exposure.

But where are you in the inventory clearing cycle? What impact does embedded’s somewhat more measured rebound in the back half have on gross margin expansion? Can we continue to expect, say, 100 basis points a quarter of gross margin expansion from the data center mix? Just any puts and takes on embedded, and what that means for gross margins in the back half. Thank you.

Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer.

Hey, Vivek. Thanks for the question. I think the embedded business declined a little more than anticipated primarily because of weaker demand in certain markets, especially communications.

And as you pointed out, in certain industrial and automotive areas, it’s actually pretty comparable with peers. We do think the first half is the bottom for the embedded business, and we’ll start to see gradual recovery in the second half. Going back to your gross margin question: when you look at our gross margin expansion in both Q1 and guided Q2, the primary driver is the strong performance on the data center side. Data center will continue to ramp in the second half.

I think that will continue to be the major driver of gross margin expansion in the second half. Of course, if embedded does better, we’ll have more of a tailwind in the second half.

Vivek Arya — Bank of America Merrill Lynch — Analyst.

Thank you.

Operator.

Thank you. Our next question is from Timothy Arcuri with UBS. Please proceed with your question.

Timothy Arcuri — UBS — Analyst.

Thanks very much. I also wanted to ask about your data center GPU roadmap. The customers that we talk to say that they’re engaged, not just because of MI300 but really because of what’s coming, and it seems like there’s a big demand shift to rack-scale systems that try to optimize performance per square foot given some of the data center and power constraints. So can you just talk about how important systems are going to be in your roadmap? And do you have all the pieces you need as the market shifts to rack-scale systems?

Lisa Su — President and Chief Executive Officer.

Yeah, sure. Timothy, thanks for the question. For sure, look, our customers are engaged in multi-generational conversations. Those conversations definitely go out over the next couple of years.

And as it relates to the overall system integration, it is quite important. It is something that we’re working very closely with our customers and partners on. That’s a significant investment in networking, working with a number of networking partners as well to make sure that the scale-out capability is there. And to your question of do we have the pieces, we do absolutely have the pieces.

I think the work that we’ve always done with our Infinity Fabric as well as with our Pensando acquisition, that’s brought in a lot of networking expertise. And then we’re working across the networking ecosystem with key partners like Broadcom, and Cisco, and Arista who are with us at our AI data center event in December. So our work right now in future generations is not just specifying a GPU. It is specifying, let’s call it, full system reference designs, and that’s something that will be quite important going forward.

Timothy Arcuri — UBS — Analyst.

Thanks a lot. And then just as a quick follow-up, I know this year looks like it’s going to be pretty back-half loaded in your server CPU business, just like it was last year. And I know you kind of held our hands at about this time last year on what the full year could look like and how back-end loaded it could be. So I kind of wonder, could you give us some milestones in terms of how much server CPU could grow this year and how back-end loaded it could be?

Is it like up 30 percent this year for your server CPU business year over year? Is that a reasonable bogey? I just wonder if you can kind of give us any guidance on that piece of the business. Thanks.

Lisa Su — President and Chief Executive Officer.

Yeah, I mean, Tim, I think the best way to say it is our data center segment is on a very, very strong ramp as we go through the back half of the year. Server CPUs, certainly. Data center GPUs, for sure. So I don’t know that we’re going to get into specifics.

But I could say, in general, you should expect overall at the segment level to be very strong double digits.

Timothy Arcuri — UBS — Analyst.

Thank you, Lisa.

Lisa Su — President and Chief Executive Officer.

Thank you.

Operator.

Thank you. Our next question is from Joe Moore with Morgan Stanley. Please proceed with your question.

Joe Moore — Morgan Stanley — Analyst.

Great. Thank you. I wonder if you could address the profitability of MI300. I know you said a couple of quarters ago that it would eventually be above corporate average, but it would take you a few quarters to get there.

Can you talk about where you are in that?

Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer.

Yeah. Thank you, Joe. The team has done a fantastic job ramping MI300. As you probably know, it’s a very complex product, and we are still early in the ramp, improving both the testing time and the process.

Those improvements are still underway. We do think that over time, the gross margin should be accretive to the corporate average.

Joe Moore — Morgan Stanley — Analyst.

Great, thank you. And then a separate follow-up on the Turin transition on server. I know when you transitioned into Genoa, you said it could take a little while because there were significant platform shifts and things like that. Turin seems to be much more ecosystem-compatible.

How soon do you think your server portfolio will see that product ramp?

Lisa Su — President and Chief Executive Officer.

Yeah, Joe, I think from what we see, look, I think Turin is the same platform, so that does make it an easier ramp. I do think that Genoa and Turin will coexist for some amount of time because customers are deciding when they’re going to bring out their new platforms. We expect Turin to give us access to a broader set of workloads. So our SAM actually expands with Turin both in enterprise and cloud.

And from our experience, I think you’ll see a faster transition than, for example, when we went from Milan to Genoa.

Joe Moore — Morgan Stanley — Analyst.

Great. Thank you.

Operator.

Thank you. Our next question is from Stacy Rasgon with Bernstein Research. Please proceed with your question.

Stacy Rasgon — AllianceBernstein — Analyst.

Hi, guys. Thanks for taking my questions. For my first one, I wanted to ask about the MI300 ramp into Q2. You said that your cumulative sales have reached $1 billion, give or take, which translates to perhaps $600 million in Q1.

You’re guiding total revenues up about $225 million into Q2. But you’ve got client up. You’ve got traditional data center up. You’ve got embedded flat.

Gaming is going to be down, but I’d hazard a guess that client and traditional data center at least offset it, if not more. Is the MI300 ramp into Q2 more or less than the total corporate ramp that you’ve got built into guidance right now? Is that what you’re expecting?

Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer.

Hi, Stacy. Thanks for the question. You always ask a math question. So I think in general, yes, the data center GPU ramp will be more than the overall company’s $207 million ramp.

Stacy Rasgon — AllianceBernstein — Analyst.

OK. So that means gaming must be down a lot, right, if client and traditional data center are all up. OK.

Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer.

Yeah, you’re right. Gaming is down in a similar zip code to Q1.

Stacy Rasgon — AllianceBernstein — Analyst.

Got it. Got it. And that’s helpful.

Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer.

So maybe — yeah, maybe let me give you some color on the gaming business, right. If you look at gaming, the demand has been quite weak. That’s very well known, as is the inventory level. So based on the visibility we have, for the first half, both Q1 and Q2, we guided down sequentially more than 30 percent.

We actually think the second half will be lower than first half. That’s basically how we’re looking at this year for the gaming business. And at the same time, gaming’s gross margin is lower than our company average. So overall, it will help the mix on the gross margin side.

That’s just some color on the gaming side. But you’re right, Q2 gaming is down a lot.

Stacy Rasgon — AllianceBernstein — Analyst.

Got it. That’s helpful. Thank you. For my second question, I wanted to look at the near-term data center profitability.

So operating profit was down 19 percent sequentially on 2 percent revenue growth. Is that just the margins of the GPUs filtering in relative to the CPUs? And I know you said GPUs would eventually be above corporate average. Are they below the CPU average in the near term? I mean, they clearly are, I guess. But are they going to stay that way?

Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer.

Yeah. I think you’re right, the GPU gross margin right now is below the data center gross margin level. I think there are two reasons. Actually, the major reason is that we increased our investment quite significantly to, as Lisa mentioned, expand and accelerate our roadmap on the AI side.

That’s one of the major drivers of the operating income coming down slightly. On the gross margin side, going back to your question: we said in the past, and we continue to believe, that data center GPU gross margin will be accretive to the corporate average over time, but it will take a while to get to the server level of gross margin.

Stacy Rasgon — AllianceBernstein — Analyst.

Got it. That’s helpful. Thank you.

Operator.

Thank you. Our next question is from Harlan Sur with J. P. Morgan. Please proceed with your question.

Harlan Sur — JPMorgan Chase and Company — Analyst.

Good afternoon. Thanks for taking my question. On your data center GPU segment and the faster time to production shipments, given you just upped your full-year GPU outlook: how much of it is faster bring-up of your customers’ frameworks, driven by your latest ROCm software platform and maybe stronger collaboration with your customers’ engineers to get them to qual faster? And how much of it is just a more aggressive build-out plan by customers versus their prior expectations, given what appears to be pretty strong urgency for them to move forward with their important AI initiatives?

Lisa Su — President and Chief Executive Officer.

Yeah, Harlan, thank you for the question. What it really is, is both us and our customers feeling confident in broadening the ramp. Because if you think about it, first of all, the ROCm stack has done really well. And the work that we’re doing is hand in hand with our customers to optimize their key models.

And it was important to get sort of verification and validation that everything would run well, and we’ve now passed some important milestones in that area. And then I think the other thing is, as you said, there’s a huge demand for more AI compute and so our ability to participate in that, and help customers get that up and running is great. So I think overall as we look at it, this ramp has been very, very aggressive if you think about where we were just a quarter ago. Each of these are pretty complex bring-ups, and I’m very happy with how they’ve gone.

And by the way, we’re only sitting here in April, so there’s still a lot of 2024 to go, and there’s great customer momentum in the process.

Harlan Sur — JPMorgan Chase and Company — Analyst.

Yeah, absolutely. Just kind of rewinding back to the March quarter. Similar to the PC client business, which declined at the low end of the seasonal range, if I make certain assumptions around your data center GPU business and back that out of data center, it looks like your server CPU business was also down at the lower end of the seasonal range. By my math, it was down like 5 percent, 6 percent sequentially.

Is that right? And that’s less than half the decline of your competitor. And if so, like what drove the less than seasonal declines? I assume some of it was share gains, sounds like enterprise was also better. Looks like you guys did drive a little bit more cloud instance adoption. But anything else that drove the slightly better seasonal pattern in March for data center server?

Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer.

Yeah, yeah. Harlan, this is Jean. I think the server business has been performing really well year over year. It actually increased by a very strong double-digit percentage.

I think sequentially it is more seasonal, but we feel pretty good about continuing to gain share there.

Harlan Sur — JPMorgan Chase and Company — Analyst.

Thank you.

Lisa Su — President and Chief Executive Officer.

And if I just add, Harlan, to your question, we did see strength in enterprise in the first quarter, and I think that has offset perhaps some of the normal seasonality.

Mitch Haws — Head of Investor Relations.

Yeah. Well, we have time for two more questions.

Operator.

Thank you. Our next question is from Tom O’Malley with Barclays. Please proceed with your question.

Tom O’Malley — Barclays — Analyst.

Hey, thanks for taking my question. I just wanted to ask about the competitive environment. Obviously on the CPU side, you had a competitor talk about launching a high-core-count product in the coming quarter, kind of ramping now and more so into Q3. You’ve seen really good pricing tailwinds as a function of the higher core counts.

Can you talk about what you’re seeing in that market? Do you think that there’s any risk for more aggressive pricing, which would impact your ASP ramp for the rest of the year?

Lisa Su — President and Chief Executive Officer.

Yeah. When we look at our server CPU ASPs, they’re actually very stable. Again, we tend to be indexed toward the higher core counts. Overall, I would say the pricing environment is stable.

This is really about TCO for the customer’s environment. Our performance and performance-per-watt leadership usually translates into a TCO advantage for our customers.

Tom O’Malley — Barclays — Analyst.

Helpful. And then just a broader question to follow up here. So I think it got asked earlier about the importance of systems. But on your end, how important is the Open Ethernet consortium to you being able to move more into systems? I know that today you obviously have some internal assets and then you can partner with others.

But is there a way that you can be competitive before there is an industry standard on the Ethernet side? And can you talk about when you think the timing of that kind of consortium comes to market and enables you to maybe accelerate that roadmap? Thanks a lot.

Lisa Su — President and Chief Executive Officer.

Yeah, I think it’s very important to say we are very supportive of the open ecosystem. We’re very supportive of the Ultra Ethernet Consortium. But I don’t believe that is a limiter to our ability to build large-scale systems. I think Ethernet is something that many in the industry feel will be the long-term answer for networking in these systems.

And we have a lot of work that we’re doing internally, as well as with our customers and partners, to enable that.

Mitch Haws — Head of Investor Relations.

Paul, we’re ready for our final question.

Operator.

Thank you. Our last question is from Harsh Kumar with Piper Sandler. Please proceed with your questions.

Harsh Kumar — Piper Sandler — Analyst.

Hey, thank you for letting me ask a question. Lisa, I had two. One is for you and one perhaps for Jean. So we recently hosted a very large custom GPU company for a call, and they talked about kind of mega data centers coming up in the near to mid-term, talking about nodes potentially in the 100,000-plus range and maybe up to a million.

So as we look out at these kinds of data centers, from an architectural standpoint, is that a situation where winner takes all, where if somebody gets in, they kind of get all the sockets? Or will there be lines where your chip or your board can be placed right next to somebody else’s board, maybe on a separate line? Just help us understand how something like that would play out, and whether there’s a chance for more than one competitor to play in such a large data center.

Lisa Su — President and Chief Executive Officer.

Yeah, so I’ll talk maybe a little bit more at the strategic level. I think as we look at sort of how AI shapes up over the next few years, there are customers who would be looking at very large training environments. And perhaps, that’s what you’re talking about. I think our view of that is number one, we view that as a very attractive area for AMD.

It’s an area where we believe we have the technology to be very competitive. And I think the desire would be to have optionality in terms of how you build those out. So obviously a lot has to happen between here and there. But to your overarching question of whether it is winner takes all: I don’t think so.

That being the case, we believe that AMD is very well positioned to play in those, let’s call it, very large-scale systems.

Harsh Kumar — Piper Sandler — Analyst.

Wonderful. Thank you. And then maybe a quick one for Jean. So Jean, I put everything into the model that you talked about for June.

I get more or less a $400 million rise in the June quarter over March. You mentioned that both MI300 and EPYC will grow. Curious if you could help us think about the relative sizing of those two within the quarter. The point I’m trying to make is I’m getting roughly a $900 million number for MI300 for June.

Am I in the ballpark, or am I way off here?

Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer.

Harsh, we’re not going to guide to specific products below the segment level. I think the most important thing is we did say data center is going to grow double digits sequentially. I’ll leave it there. At the subsegment level, there are a lot of moving pieces.

Harsh Kumar — Piper Sandler — Analyst.

Fair enough, I had to try. Thank you, guys. Thank you so much.

Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer.

Yeah, yeah, yeah. Thank you.

Operator.

Thank you. There are no further questions at this time. I’d like to hand the floor back over to management for any closing comments.

Mitch Haws — Head of Investor Relations.

Great. That concludes today’s call. Thanks to all of you for joining us today.

Operator.

[Operator signoff].

Call participants:

Mitch Haws — Head of Investor Relations.

Lisa Su — President and Chief Executive Officer.

Jean Hu — Executive Vice President, Chief Financial Officer and Treasurer.

Toshiya Hari — Goldman Sachs — Analyst.

Ross Seymore — Deutsche Bank — Analyst.

Matt Ramsay — TD Cowen — Analyst.

Aaron Rakers — Wells Fargo Securities — Analyst.

Vivek Arya — Bank of America Merrill Lynch — Analyst.

Timothy Arcuri — UBS — Analyst.

Joe Moore — Morgan Stanley — Analyst.

Stacy Rasgon — AllianceBernstein — Analyst.

Harlan Sur — JPMorgan Chase and Company — Analyst.

Tom O’Malley — Barclays — Analyst.

Harsh Kumar — Piper Sandler — Analyst.
