Advanced Micro Devices, Inc. (AMD.DE) Q4 2023 Earnings Call Transcript
Published at 2024-01-30 20:31:03
Greetings and welcome to the AMD Fourth Quarter and Full Year 2023 Conference Call. At this time, all participants are in a listen-only mode. A brief question-and-answer session will follow the formal presentation. [Operator Instructions] And as a reminder, this conference is being recorded. It is now my pleasure to introduce to you, Mitch Haws, Vice President, Investor Relations. Thank you, Mitch. You may begin.
Thank you, John, and welcome to AMD's Fourth Quarter and Full Year 2023 Financial Results Conference Call. By now you should have had the opportunity to review a copy of our earnings press release and the accompanying slides. If you've not had the chance to review these materials, they can be found on the Investor Relations page of amd.com. We will refer primarily to non-GAAP financial measures during today's call. The full non-GAAP to GAAP reconciliations are available in today's press release and the slides posted on our website. Participants on today's call are Dr. Lisa Su, our Chair and Chief Executive Officer; and Jean Hu, our Executive Vice President, Chief Financial Officer and Treasurer. This is a live call and will be replayed via webcast on our website. Before we begin, I would like to note that Mark Papermaster, Executive Vice President and Chief Technology Officer, will attend the Bernstein Tech, Media, Telecom & Consumer One-on-One Forum on Tuesday, February 28th; and Jean Hu, Executive Vice President, Chief Financial Officer and Treasurer, will attend the Wolfe Research Semiconductor Conference on Tuesday, February 15th and the Morgan Stanley Technology, Media & Telecom Conference on March 5th. Today's discussion contains forward-looking statements based on current beliefs, assumptions and expectations, which speak only as of today and, as such, involve risks and uncertainties that could cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release for more information on factors that could cause actual results to differ materially. With that, I'll hand the call over to Lisa. Lisa?
Thank you, Mitch, and good afternoon to all those listening in today. We finished 2023 strong as Data Center sales accelerated significantly throughout the year, despite the mixed demand environment. As a result, we delivered record Data Center segment annual revenue and strong top-line and bottom-line growth in the fourth quarter, driven by the ramp of Instinct AI accelerators and robust demand for EPYC server CPUs across cloud, enterprise and AI customers. Looking at our financial results. Fourth quarter revenue increased 10% year-over-year to $6.2 billion, driven by significant double-digit percentage growth in our Data Center and Client segments. On a full year basis, annual revenue declined 4% to $22.7 billion as record Data Center and Embedded segment annual revenue was offset by lower Client and Gaming segment revenue. Importantly, Data Center and Embedded segment annual revenue grew by $1.2 billion and accounted for more than 50% of revenue in 2023 as we gained server share, launched our next-generation Instinct AI accelerators and maintained our position as the industry's largest provider of adaptive computing solutions. Turning to the fourth quarter business results. Data Center segment revenue grew 38% year-over-year and 43% sequentially to a record $2.3 billion. Server CPU and Data Center GPU sales both set quarterly and annual revenue records as sales of our Data Center products accelerated throughout the year. We gained server CPU revenue share in the quarter, driven by significant double-digit percentage growth in 4th Gen EPYC Processor revenue and demand for our 3rd Gen EPYC Processor portfolio. In Cloud, while the overall demand environment remained soft, server CPU revenue increased year-over-year and sequentially as North American hyperscalers expanded 4th Gen EPYC Processor deployments to power their internal workloads and public instances. Amazon, Alibaba, Google, Microsoft and Oracle brought more than 55 AMD-powered AI, HPC and general-purpose cloud instances into preview or general availability in the fourth quarter. Exiting 2023, there were more than 800 EPYC CPU based public cloud instances available. We expect this number to grow in 2024 based on the leadership performance, efficiency and features of our EPYC CPU portfolio. In Enterprise, sales accelerated by a significant double-digit percentage in the quarter as we built momentum with Forbes 2000 customers. We closed multiple wins with large financial, energy, automotive, retail, technology and pharmaceutical companies, positioning us well for continued growth, based on expanded production deployments planned for 2024. A growing number of customers are adopting EPYC CPUs for inferencing workloads, where our leadership throughput performance delivers significant advantages on smaller models like Llama 7B, as well as powering head nodes in large training and inference clusters. Looking ahead, customer excitement for our upcoming Turin family of EPYC Processors is very strong. Turin is a drop-in replacement for existing 4th Gen EPYC platforms that extends our performance, efficiency and TCO leadership with the addition of our next-gen Zen 5 core, new memory expansion capabilities and higher core counts. Internal and end customer validation work is progressing to plan with Turin on track to deliver overall performance leadership, as well as leadership on a per core or per watt basis across a wide range of workloads when it launches later this year. Turning to our broader Data Center portfolio.
Our Data Center GPU business accelerated significantly in the quarter, with revenue exceeding our $400 million expectations, driven by a faster ramp for MI300X with AI customers. We launched our MI300 accelerator family in December with strong partner and ecosystem support from multiple large cloud providers, all the major OEMs and many leading AI developers. MI300X GPUs deliver leadership generative AI performance by combining our high-performance CDNA 3 architecture with industry-leading memory bandwidth and capacity. Customer response to MI300 has been overwhelmingly positive. And we are aggressively ramping production to support the dozens of cloud, enterprise and supercomputing customers deploying Instinct accelerators. In Cloud, we are working closely with Microsoft, Oracle, Meta and other large cloud customers on Instinct GPU deployments, powering both their internal AI workloads and external offerings. For Enterprise customers, HPE, Dell, Lenovo, Supermicro and other server vendors are on track to launch differentiated MI300 platforms later this quarter with strong demand from multiple enterprise customers. In HPC Supercomputing, we shipped the majority of AMD Instinct MI300A accelerators for the El Capitan supercomputer in the fourth quarter and expect to complete shipments this quarter for what is expected to be the world's fastest supercomputer when it comes online later this year. We also closed new Instinct GPU wins in the quarter, including the flagship system at the German High Performance Computing Center, HLRS, as well as what is expected to be one of the world's most powerful enterprise supercomputers for energy company Eni. On AI software development, we made significant progress expanding the ecosystem of AI developers working on AMD platforms with the release of our ROCm 6 software suite. The ROCm 6 stack significantly increases performance in key generative AI workloads, adds expanded support and optimizations for additional frameworks and libraries and simplifies the overall developer experience. The additional functionality and optimizations of ROCm 6 and the growing volume of contributions from the open source AI software community are enabling multiple large hyperscale and enterprise customers to rapidly bring up their most advanced large language models on AMD Instinct accelerators. For example, we are very pleased to see how quickly Microsoft was able to bring up GPT-4 on MI300X in their production environment and roll out Azure private previews of new MI300 instances aligned with the MI300X launch. At the same time, our partnership with Hugging Face, the leading open platform for the AI community, now enables hundreds of thousands of AI models to run out of the box on AMD GPUs, and we are extending that collaboration to our other platforms. Looking ahead, our prior guidance was for Data Center GPU revenue to be flattish from Q4 to Q1 and exceed $2 billion for 2024. Based on the strong customer pull and expanded engagements, we now expect Data Center GPU revenue to grow sequentially in the first quarter and exceed $3.5 billion in 2024. We have also made significant progress with our supply chain partners and have secured additional capacity to support upside demand. Turning to our Client segment. Revenue was $1.5 billion, an increase of 62% year-over-year and flat sequentially.
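To make the "out of the box" claim above concrete, the following is a minimal, hedged sketch of what that developer experience typically looks like. It assumes a ROCm build of PyTorch (where AMD Instinct GPUs are exposed through the standard torch.cuda device API) and the Hugging Face transformers library; the checkpoint named below is just an illustrative example, not one referenced on the call.

```python
# Minimal sketch, not from the call. Assumptions: a ROCm build of PyTorch is
# installed (AMD Instinct GPUs are then exposed through the standard torch.cuda
# device API), and the Hugging Face transformers library is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-1.3b"  # example checkpoint; any causal LM on the Hub works the same way
device = "cuda" if torch.cuda.is_available() else "cpu"  # on ROCm, Instinct GPUs report as "cuda" devices
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Download the model and move it to the accelerator (or CPU fallback).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype).to(device)

# Run a short generation to confirm the unmodified script works end to end.
inputs = tokenizer("Generative AI accelerators are", return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point of the sketch is that no AMD-specific code path is needed; the same script runs unchanged on a ROCm-enabled system.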
We launched our latest generation Ryzen 8000 series notebook and desktop processors in January, including our Ryzen 8040 Mobile series that combine leadership compute performance and energy efficiency with an updated NPU that delivers up to 60% more AI performance compared to our prior generation that was already industry-leading. Acer, ASUS, HP, Lenovo, MSI and other large PC OEMs will all offer notebooks powered by our Ryzen 8000 series processors with the first systems expected to go on sale in February. To further our leadership in AI PCs, we launched our Ryzen 8000 G-series processors earlier this month, which are the industry's first desktop CPUs with an integrated AI engine. Millions of AI PCs powered by Ryzen processors have shipped to date and Ryzen CPUs power more than 90% of AI-enabled PCs currently in market. Our work with Microsoft and our PC ecosystem partners to enable the next generation of AI PCs expanded significantly in the quarter. We are aggressively driving our Ryzen AI CPU roadmap to extend our AI leadership, including our next-gen Strix processors that are expected to deliver more than three times the AI performance of our Ryzen 7040 series processors. Strix combines our next-gen Zen 5 core with enhanced RDNA graphics and an updated Ryzen AI engine to significantly increase the performance, energy efficiency and AI capabilities of PCs. Customer momentum for Strix is strong with the first notebooks on track to launch later this year. Looking at 2024, we are planning for the PC TAM to grow modestly year-on-year, weighted towards the second half as AI PCs ramp. We continue to see strong growth opportunities for our client business as we ramp our current products, extend our AI PC leadership and launch our next wave of Zen 5 CPUs. Now turning to our Gaming segment. Revenue declined 17% year-over-year and 9% sequentially to $1.4 billion as lower semi-custom revenue was partially offset by increased sales of Radeon GPUs. Semi-custom SoC sales declined in line with our projections in the quarter. Going forward, we now expect annual revenue to decline by a significant double-digit percentage year-over-year as supply caught up with demand in 2023 and we enter the fifth year of what has been a very strong console cycle. In Gaming Graphics, revenue grew both year-over-year and sequentially, driven by strong demand in the channel for both our Radeon 6000 and Radeon 7000 series GPUs. We expanded our Radeon 7000 GPU series with the launch of new RX 7600 XT Series enthusiast desktop GPUs earlier this month that offer leadership price performance for 1080p gaming. We also launched new open source FidelityFX Super Resolution 3 software that can deliver significantly higher gaming frame rates on both GPUs and APUs. Turning to our Embedded segment. Revenue decreased 24% year-over-year and 15% sequentially to $1.1 billion as customers focused on reducing their inventory levels. We expanded our embedded portfolio in the quarter with new leadership solutions for key markets. We launched new Versal Prime adaptive SoCs for the aerospace, test and measurement, health care and communications markets that deliver industry-first support for DDR5 memory and increased DSP capability compared to our prior generation. In automotive, we launched new Versal SoC solutions that bring industry-leading AI compute capabilities and advanced safety and security features to next-generation vehicles.
We also launched Ryzen Embedded processors with unmatched performance and features for industrial automation, machine vision, robotics and edge server applications. Looking at 2024, we expect overall embedded demand will remain soft through the first half of the year as customers continue to focus on normalizing their inventory levels. Longer term, we're very confident in the growth trajectory of our Embedded business as our expanded product portfolio drove more than $10 billion of design wins in 2023, an increase of more than 25% compared to 2022. In summary, I'm very pleased with our fourth quarter and full year results. For 2024, we expect the demand environment to remain mixed, with strong growth in our Data Center and Client segments, offset by declines in our Embedded and Gaming segments. Against this backdrop, we believe we will deliver strong annual revenue growth and expand gross margin, driven by the strength of our Instinct, EPYC and Ryzen product portfolios. Taking a step back, we believe AI is a once-in-a-generation transition that will reshape virtually every portion of the computing market, starting in the Data Center and then expanding into PCs and across multiple embedded markets. We have built excellent customer traction based on the strength of our multiyear AI hardware and software roadmaps, and we see clear opportunities to drive our next wave of growth as we deliver leadership AI solutions across our portfolio. In the Data Center, we see 2024 as the start of a multiyear AI adoption cycle, with the market for Data Center AI accelerators growing to approximately $400 billion in 2027. Customer deployments of our Instinct GPUs continue accelerating, with MI300 now tracking to be the fastest revenue ramp of any product in our history, positioning us well to capture significant share over the coming years based on the strength of our multi-generation Instinct GPU roadmap and open source ROCm software strategy. In PCs, we are focused on delivering our long-term roadmaps with leadership Ryzen AI NPU capabilities to enable differentiated experiences as Microsoft and our other software partners bring new AI capabilities to PCs starting later this year. At the same time, we are rapidly driving leadership AI compute capabilities across the full breadth of our embedded product portfolio. This is an incredibly exciting time for the industry and an even more exciting time for AMD as our leadership IP, broad product portfolio and deep customer relationships position us well to deliver significant revenue growth and earnings expansion over the next several years. Now I'd like to turn the call over to Jean to provide some additional color on our fourth quarter and full year financial results. Jean?
Thank you, Lisa, and good afternoon, everyone. I'll start with a review of our financial results and then provide our current outlook for the first quarter of fiscal 2024. AMD executed well in 2023 despite a mixed market demand environment, delivering revenue of $22.7 billion and earnings per share of $2.65. We drove year-over-year revenue growth in our Embedded and Data Center segments. In addition, we successfully launched our AMD Instinct MI300 GPUs, positioning us for a strong ramp in 2024 in the AI market. For the fourth quarter of 2023, revenue was $6.2 billion, growing 10% year-over-year as revenue growth in the Data Center and the Client segments was partially offset by lower revenue in our Embedded and Gaming segments. Revenue was up 6% sequentially, primarily driven by the ramp of AMD Instinct GPUs across several leading customers and higher revenue from EPYC server processors, partially offset by the decline in Embedded and Gaming segment revenues. Gross margin was 51%, flat year-over-year, with higher revenue contribution from the Data Center and the Client segments offset by lower Embedded segment revenue. Operating expenses were $1.7 billion, an increase of 8% year-over-year as we invested in R&D and marketing activities to support our significant AI growth opportunities. Operating income was $1.4 billion, representing a 23% operating margin. Taxes, interest expense and other was $163 million. For the fourth quarter of 2023, diluted earnings per share was $0.77, an increase of 12% year-over-year. Now turning to our reportable segments. Starting with the Data Center segment, revenue was $2.3 billion, up 38% year-over-year and 43% sequentially, driven by strong growth of both AMD Instinct GPU and Fourth Generation AMD EPYC CPU sales. Data Center segment operating income was $666 million or 29% of revenue compared to $444 million or 27% a year ago. Higher operating income was primarily due to operating leverage driven by higher revenue. Client segment revenue was $1.5 billion, up 62% year-over-year, driven by Ryzen 7000 Series CPU sales. Client segment operating income was $55 million or 4% of revenue compared to an operating loss of $152 million a year ago, driven by higher revenue. Gaming segment revenue was $1.4 billion, down 17% year-over-year and 9% sequentially due to a decrease in semi-custom revenue, partially offset by an increase in Radeon GPU sales. Gaming segment operating income was $224 million or 16% of revenue compared to $266 million or 16% a year ago. Embedded segment revenue was $1.1 billion, down 24% year-over-year and 15% sequentially as customers continue to work down their inventory levels. Embedded segment operating income was $461 million or 44% of revenue compared to $699 million or 50% a year ago. Turning to the balance sheet and cash flow. During the quarter, we generated $381 million in cash from operations and free cash flow was $242 million. Inventory decreased sequentially by $94 million to $4.4 billion. At the end of the quarter, cash, cash equivalents and short-term investments were strong at $5.8 billion. In the fourth quarter, we repurchased 2 million shares and returned $233 million to shareholders. For the year, we repurchased 10 million shares and returned $985 million to shareholders. We have $5.6 billion in remaining share repurchase authorization. Now turning to our first quarter of 2024 outlook. We expect revenue to be approximately $5.4 billion, plus or minus $300 million.
Sequentially, we expect Data Center segment revenue to be flat, with the seasonal decline in server sales offset by a strong Data Center GPU ramp; Embedded segment revenue to decline as customers continue to work down their inventory levels; and Client segment revenue to decline seasonally. And in the Gaming segment, as we enter the fifth year of what has been a very strong gaming cycle and given current customer inventory levels, we expect revenue to decline by a significant double-digit percentage. Year-over-year, we expect Data Center and Client segment revenues to increase by a strong double-digit percentage given the strength of our product portfolio and the share gain opportunities, Embedded segment revenue to decline, and Gaming segment revenue to decline by a significant double-digit percentage. In addition, we expect first quarter non-GAAP gross margin to be approximately 52%, non-GAAP operating expenses to be approximately $1.73 billion, non-GAAP effective tax rate to be 13%, and the diluted share count is expected to be approximately 1.63 billion shares. While we are not providing specific full year guidance for 2024, let me provide some color. Directionally, for the year, we expect 2024 Data Center and Client segment revenue to increase, driven by the strength of our product portfolio and the share gain opportunities, Embedded segment revenue to decline, and Gaming segment revenue to decline by a significant double-digit percentage. We expect to expand gross margin in 2024 and continue to invest to address the large AI opportunities while driving operating model leverage to deliver earnings per share growth faster than top-line revenue growth. In closing, we delivered solid financial results in 2023, further strengthening our product portfolio and establishing ourselves as a leading provider of Data Center GPUs for AI. We are very well positioned to build on this momentum and deliver strong financial performance in 2024 and beyond. With that, I'll turn it back to Mitch for the Q&A session.
Thank you, Jean. John, we're happy to poll the audience for questions.
Thank you, Mitch. We will now be conducting a question-and-answer session. [Operator Instructions] And the first question comes from the line of Aaron Rakers from Wells Fargo. Please proceed with your question.
Yeah. Thanks for taking the question. Just kind of framing the outlook and the guidance for this calendar first quarter. I guess the first question is, can you help us, on a relative basis, with the $400 million of Data Center GPU revenue that you expected in Q4? What did that ultimately kind of fall out to be? And then, on the guidance into 1Q, can you help us appreciate what seasonal is defined as, as we think about the server business into the 1Q guide?
Sure, Aaron. Let me start and then see if Jean has something to add. So, relative to the Data Center GPU business, we were very pleased with the performance that we saw in the fourth quarter. It was always going to be very sort of back-end weighted in the quarter as we were ramping the product, and we saw MI300A, our HPC product, actually ramp very well. And then we saw MI300X, the AI product, actually exceed our expectations based on strong customer demand, the way the qualifications went and then the ramp -- manufacturing ramp. So, we were over $400 million for that business in the fourth quarter. And then, going into the first quarter, as we look at the business, server seasonality, call it, something around, let's call it, high-single-digit, low-double-digit. There are also some other pieces of the Data Center business. I think the key piece of it is we had originally expected the ramp of our MI300X to be a little bit more shallow, and what we're seeing now is the supply chain is operating really well and the customer demand is strong. And so, we will see MI300X increase as we go into the first quarter, and things are going relatively well.
Yeah, Aaron, I'll give you some color about Client seasonality and others. So, Client is very similar to server, typically Q1 is high-single-digit to low-double-digit. That's consistent with the past. On the Embedded side, it's very consistent with what we said in the past and consistent with what you see in the industry; the Embedded business is going through a bottoming process, and we think in Q1 it will have a low-double-digit sequential decline. That's Embedded. On the Gaming side, Lisa mentioned during her prepared remarks that we are in the later stage of the product cycle, in year five of the gaming console cycle. But at the same time, we also have inventory at the customers. So, with the combination of those impacts, we expect the Q1 Gaming sequential decline to be probably more than 30%. So hopefully that helps you a little bit.
Yeah. Very helpful, Jean. And as a quick follow-up, I'm just curious. On the traditional server demand that you see, I know when we look at server CPUs, shipments are down north of 20% year-over-year. Are you seeing any signals, or how are you thinking about a recovery in that traditional, call it, non-AI general-purpose server market as we move through '24?
Sure, Aaron. So look, I think I agree with your characterization of the 2023 demand, although we did see some strong progress in the second half of the year, especially as customers in Cloud and Enterprise adopted our Genoa and our Zen 4 family. So going into 2024, I would say the traditional server market is probably still mixed, especially into the first half of the year. There's still some cloud optimization going on, as well as sort of enterprise being a little bit cautious. That being the case though, we also see opportunities for us to continue to grow share in the traditional server business. I think our portfolio is extremely strong. The adoption of Genoa and Bergamo, as well as our new Siena product line, is getting a lot of traction. And then, we also see Turin, our Zen 5 product, coming in the second half of the year. So, even in a mixed demand environment, I think we're bullish on what traditional server CPUs can do in 2024.
And the next question comes from the line of Timothy Arcuri with UBS. Please proceed with your question.
Thanks a lot. Lisa, I'm wondering if you can give us a little bit of sense in terms of the milepost that you're kind of marching toward on this $400 billion TAM that you have for 2027. For example, do you think you can gain share at a rate that's kind of similar to the rate that you gained share for server CPU or I guess maybe asked a different way, is it reasonable to kind of look at your consumer GPU share of 20 plus percent, is that a reasonable bogey, or do you have aspirations higher than that, perhaps?
Yeah. Thanks, Tim, for the question. I would say a couple of things. First of all, we're really pleased with the progress that we've made in our Data Center GPU business. I think the ramp that we've seen, the customer traction that we've seen even in the last few months, I think has been great. And that gives us a lot of confidence in the ramp of this business. I think the beauty of the AI market here is, it's growing so quickly that I think we have both the market dynamic as well as our ability to gain share in that framework. The point I will make is our customer engagements right now are all quite strategic, dozens of customers with multi-generational conversations. So, as excited as we are about the ramp of MI300 and, frankly, there's a lot to do in 2024. We are also very excited about the opportunities to extend that into the next couple of years out into that '25, '26, '27 timeframe. So, I think, we see a lot of growth. I think it's a little early to make market share projections, but I would say it's a significant growth driver given the market demand, as well as our own product capabilities.
Thanks a lot. Jean, I guess as a follow-up. I know that you don't want to guide the full year. But I'm wondering if I can pin you down just a touch on maybe a milepost that you're kind of marching to for 2024, say growth of up 20% for the whole company. Is that a reasonable target? And then I guess within Data Center, if you just add the incremental Data Center GPU revenue and you assume that the server business grows a little bit, it seems like that should maybe double year-over-year, but I'm kind of wondering if you can give us any ranges on those numbers? Thanks.
Hi, Tim. Thank you for the question. Yeah, we're not guiding a year. It's very early in the year, literally January. I think the way to think about it is, as Lisa mentioned during her prepared remarks, we feel pretty good about both our Data Center and Client businesses growing in 2024. Of course, the largest incremental revenue opportunities are going to come from Data Center, between the server side, gaining more share, and the Data Center GPU side, with the significant ramp of our MI300. I think that's how we think about it. We do have a headwind from the Gaming segment. We do think year-over-year we'll see a very significant double-digit decline in the Gaming segment. And at the same time, Embedded is going through the bottoming process. We do think in the second half we will see a recovery. So those are the puts and takes I can talk about.
And the next question comes from the line of Matt Ramsay with TD Cowen. Please proceed with your question.
Thank you very much. Good afternoon. Lisa, I wanted to ask, I mean there's been so much focus and scrutiny, as there should be, on the really exciting progression with MI300, and I mean we've progressed over the last six months from, I think, some doubts in the investment community on the software and your ability to ramp the product, and now you've proven that you're ramping it with, I think you said, dozens of customers across different end markets. So, what I'm interested in hearing a little bit more about is, you guys have been open about what some of the forward programs in your traditional server business look like from a roadmap perspective. I'd be interested to hear how you're thinking about the roadmap in your MI accelerator family. Are there going to continue to be parts that are CPU and GPU together? Or is that primarily a GPU-only roadmap? What kind of cadence are you thinking about? Any kind of color you can give us on some of the forward roadmap trajectory for that program would be really helpful. Thanks.
Yeah, sure, Matt. So, I appreciate the comments. I think the traction that we're getting with the MI300 family is really strong. I think what's benefited us here is our use of chiplet technologies, which has given us the ability to have sort of both the APU version, as well as the GPU version and we continue to use that to differentiate ourselves and that's how we get our memory bandwidth and memory capacity advantages. As we go forward, you can imagine, like we did in the EPYC timeframe, we planned multiple generations in sequence. That's the way we're planning the roadmap. One of the things I will note about the AI accelerator market is the demand for compute is so high that we are seeing sort of an acceleration of the roadmap generations here and we are similarly planning acceleration of our roadmap. I would say that we'll talk more about the overall roadmap beyond MI300 as we get into later this year. But you can be assured that we're working very closely with our customers to have a very competitive roadmap for both training and inference that will come out over the next couple of years.
Thank you for that, Lisa. Just as a follow-up, I guess one of the questions that I've been getting a lot in different forms is with respect to the $400 billion TAM that you guys have laid out for 2027. Maybe you could give us a little look under the hood. I guess I've got 100 versions of the same question, which is, how the heck did you come up with that number. So, if you could give us a little bit more in terms of, are we talking about systems and accelerator cards? Are we talking about just the silicon? Are we talking about full servers? And what kind of unit assumptions? Any kind of thing that you can give us on market sizing, or what gives you the visibility so early into this generative AI trend to give a precise number three years out? That would be really, really helpful. Thank you.
Sure. Well, Matt, I don't know how precise it is, but I think we said approximately $400 billion. But I think what we need to look at is growth rate and how do we get to those growth rates. I think we expect units to grow sort of substantial double-digit percentage. But you should also expect that content is going to grow. So, if you think about how important memory and memory capacity is as we go forward, you can imagine that we'll see acceleration there and just the overall content as we go to more advanced technology nodes. So, there's some ASP uplift in there. And then, what we also do is, we are planning longer-term roadmaps with our customers in terms of how they're thinking about sort of the size of training clusters, the number of training clusters. And then, the fact that we believe inference is actually going to exceed training as we go into the next couple of years, just given as more enterprises adopt. So, I think as we look at all those pieces, I think we feel good that the growth rate is going to be significant and sustained over the next few years. In terms of what's in that TAM, it really is accelerator TAM. So, within accelerators, there are certainly GPUs and there will also be some ASICs and other accelerators in that TAM. As we think about sort of the different types of models, from smaller models to fine-tuning of models, to the largest large language models, I think you're going to need different silicon for those different use cases. But from our standpoint, GPUs are still going to be the sort of compute element of choice when you're talking about training and inferencing on the largest language models.
And the next question comes from the line of Joe Moore with Morgan Stanley. Please proceed with your question.
Great. Thank you. I think you talked about the MI300 cloud workloads being kind of split between the more customer-facing workloads versus internal. Can you talk about how you see the breakdown of that and how your ecosystem is progressing? This is a brand-new chip. It seems impressive they are able to support kind of a broad range of customer-facing workloads in the cloud.
Yeah, sure, Joe. So, yes, look, we are really happy with how the MI300 has come up, and we've now deployed it and are working with a number of customers. What we have seen is certainly ROCm 6 has been very important, as well as the direct optimization work with our top cloud customers. We always said that the best way of optimizing the software is working directly on the most important workloads. And we've seen performance come up nicely, which is what we expected, frankly, with the GPU capabilities, that we would have to do some level of optimization, but the optimization has gone well. I think to your broader question, the way I look at this is, there are lots of opportunities for us to work directly with large customers, both on the Cloud side as well as on the Enterprise side, who have specific training and inferencing workloads. Our job is to make it as easy as possible for them, and so our entire tool chain, sort of the overall ROCm suite, has really made significant progress over the last six to nine months. And then, we're also getting some nice support from the open source community. So, the work that Hugging Face is doing is tremendous. In terms of just real-time optimization on our hardware, our partnership with OpenAI on Triton and our work across a number of these open source models have helped us actually make very rapid progress.
Great. And for my follow-up, I guess a lot of the forecasting of your business that I'm hearing is coming from the supply chain, and we're sort of hearing AMD is building X in Asia. I guess, how would you ask us to think about that? Are you looking at being kind of sold out for the year, and so the supply chain would be close to revenue? Are you building for the best-case scenario? I just worry sometimes about expectations when people hear the supply chain numbers. And I'm just curious how you bridge the gap.
Yeah. So, I mean, Joe, I think we updated our revenue expectations this quarter from our original number of $2 billion to $3.5 billion to try to give some bounding on some of the discussion out there. The way to think about the $3.5 billion is these are customers that we're working with, who have given us firm commitments on what they need. As you know, the lead times on these products are quite long. So, it's important to have those forecasts in early, and we have a strong order book. So, that gives us good confidence to exceed the $3.5 billion. From a supply chain standpoint, our goal is always to build more supply, and so, from that standpoint, we have also worked with our supply chain partners and secured significant capacity. Think about it as first half capacity is tight and more comes on in the second half of the year, but we've certainly made more progress there. So, we do have more supply, and we're going to continue to work with our customers on their deployments, and we'll update that number as we go through the year.
And the next question comes from the line of Toshiya Hari with Goldman Sachs. Please proceed with your question.
Hi. Thank you for taking the question. I had one on the MI300 as well, Lisa. I guess, first of all, how should we think about the quarterly trajectory beyond Q1? You talked about Q1 being up sequentially. Is it fair to assume kind of a straight line as we progress through the balance of the year? Or is it more second half skewed? How should we think about that? And I guess more importantly, some of the potential cloud customers have yet to officially sign up for or sign off on the MI300. I guess what's the sticking point? Is it just a function of time, and you just need a little bit more time to go back and forth and tweak things, or is there a software kind of concern? I guess what's holding them back at this point?
Yeah, Toshiya, thanks for the question. So, first on the MI300 trajectory. I think you would expect that revenue should increase every quarter from now through sort of the end of the year, but it will be a bit more second half weighted, and part of that is just customers as they're finishing up their qualifications and their lines, as well as sort of how our supply chain is ramping. So, yes, it should increase each quarter, but be a bit more second half weighted. And then, to your comment about customers, look, we are engaged with all of sort of the large customers. These are all folks that know us really well, given our deep relationships in EPYC. I think people just have different adoption cycles as they consider what they're trying to do in their roadmap. But I view this as still very, very early innings for us in this space. And I think the question was asked earlier. I think the key is this is not just an MI300 conversation, but it really is about sort of our long-term multi-generational roadmap. And so, that's the context in which we're working with our largest customers, as well as, as you know, there's a lot of demand coming from folks that are more AI-centric and not necessarily typical cloud customers, but more enterprise or, let's call it, AI-specific companies that we're also very well engaged with.
Got it. That's super helpful. And then as my follow-up, maybe one for Jean on the gross margin side. You're guiding Q1 to 52%. I'm curious, again, I'm sure you're not going to give quantitative guidance beyond Q1, but how should we think about the trajectory for Q2 and beyond? I'm pretty sure you're working through some kinks as it pertains to the Instinct ramp. Hopefully, that improves over time. So that should be a tailwind. FPGAs perhaps turn for the better in the second half. And you've got server CPU volume growth throughout the year. So, it feels like you've got multiple tailwinds as we think about gross margin progression on a sequential basis. But what are the potential headwinds as we move throughout 2024? Thank you.
Yeah, Toshiya, thank you for the question. Yeah, you're absolutely right. We have some puts and takes that impact our gross margin. We guided Q1 120 basis points higher than Q4 sequentially, primarily because the higher Data Center contribution more than offsets the decline of the Embedded business in Q1. Going forward, the way to think about it is, as you said, the major driver is going to be the Data Center business, which is going to grow much faster than the other segments. That mix change will help us to expand the gross margin nicely. I think you are also spot on that Embedded coming back in the second half will be a tailwind. With the Data Center GPU, we are at the very early stage of the ramp. We are improving testing time and yield and continuing to expand gross margin, and we expect it to be accretive to the corporate average. So, those are all the tailwinds coming in the second half. I would say the headwind continues to be in the first half, where we see the Embedded business decline sequentially not only in Q1; Q2 is probably going to be sequentially flattish versus Q1. That is a headwind for us, because it does have a very nice gross margin. But overall, we feel pretty good about the trajectory of the gross margin improvement, especially in the second half.
And the next question comes from the line of Ross Seymore with Deutsche Bank. Please proceed with your question.
Thanks for letting me ask the question. I wanted to get into the competitive environment. First, on the Instinct side of things, how is that going? It doesn't seem to be slowing down your ramp whatsoever. But then also on the straight server CPU side of things, Lisa, you said you're gaining share in that area. But as we think about future roadmaps, pricing incentives, those sorts of things, is there any meaningful change in the competitive environment that you're seeing throughout 2024?
Sure, Ross. So look, I think the environment for us is always competitive. So, I think that has not changed. If I look at the Instinct side, I think we have -- I think we've shown that MI300 and our roadmap are actually very competitive. There are some places where, let's call it, it's more even, like in the training environment. But as we look at the inferencing environment, we think we have significant advantages. And that's showing through in some of our customer work. So we think for both training and inference, we will continue to be very competitive. And then, as you go into the CPU side, again, from our view, with each generation of EPYC, we've continued to gain share. I think we exited the fourth quarter at record share for AMD. And we're still quite underrepresented in Enterprise. So I think there is an opportunity for us to continue to gain share as we go through 2024. From a competitive standpoint, what we see is Zen 4 is extremely competitive right now with Genoa, Bergamo, Siena. And as we go into Turin, we're deep in the design-in cycle for Zen 5 and Turin, and we feel very good about how we're positioned.
Thanks for that. I guess as my follow-up, on the Data Center side, another theme that's been pervasive throughout 2023, at least, was the GPU side driving out the CPU side. You mentioned that there is still a little bit of cloud digestion going on within your EPYC business. But where do you see that standing? I know you're going to gain share, et cetera, and you guys fully benefit from the Instinct side on the Data Center GPU side, but what about on the CPU side of things? Is that headwind now behind us, or is it still an issue?
I think we expect the CPU business, from a market standpoint, to grow, Ross, as we go into 2024. I think the rate and pace of growth will depend a little bit on the macro and just overall CapEx trends. But from our standpoint, we are starting to see some of our larger customers plan their refresh cycles. There's a lot of, let's call it, older equipment that has yet to be refreshed, and the value proposition for refresh is so strong, because the energy efficiency and sort of the footprint of the newer generations are so much better than sort of the four or five year old infrastructure, that we do see that refresh cycle happening as we get into 2024. I think the exact timing we will have to understand more as the market evolves as we go through the year.
And the next question comes from the line of Vivek Arya with Bank of America Securities. Please proceed with your question.
Thanks for taking my questions. So, first one, Lisa: you gave us the $2 billion-plus number for MI before, and now you have raised it to over $3.5 billion. And I'm curious what drove the change. Was it incremental demand signals, was it supply? And can you supply more if, let's say, demand is $4 billion or $5 billion or $6 billion, right? What is the limitation? And sort of related to that, on the competition side, your competitor will launch their B100 later in the year. Do you think that will change the competitive landscape in any way?
Yes. Sure, Vivek. So, I think what we said is, as we went from $2 billion to $3.5 billion, it really is mostly customer demand signals. So as orders have come on the books and as we've seen programs move from, let's call it, pilot programs into full manufacturing programs, we have updated the revenue forecast. As I said earlier, from a supply standpoint, we are planning for success. And so, we worked closely with our supply chain partners to ensure that we can ship more than $3.5 billion, substantially more, depending on what customer demand is as we go into the second half of the year. And then, in terms of, again, roadmaps, as I said, we are very focused on a competitive roadmap, sort of what the next generations are beyond MI300. So, I do believe that we have a strong roadmap in place and continue to work with our customers to sort of adopt our roadmap as quickly as possible.
Got it. And a longer-term question, Lisa. If I look at the success that AMD has enjoyed, there are many factors, but a few of them include your early adoption of chiplets and the strong partnerships you have had with TSMC. But now we are seeing your x86 competitor Intel also adopt chiplet, or tile, technology as they call it. And then I think recently, in the manufacturing update they gave, they said they are two years ahead in terms of incorporating gate-all-around and backside power delivery. So, let's assume they are right and they have either caught up to TSMC or maybe they are ahead. What impact does that have on AMD in kind of the medium to longer term?
Yeah. Sure, Vivek. Look, we're always looking at what's next, right? So, on the chiplet technology, I mean, we're sort of on the fourth generation of the chiplet technologies. I think we've learned a lot about how to optimize performance there. We are very aggressive with our adoption of leading-edge technology as it's needed. But I think those are only a few of the pieces. We're also focused on continuing to innovate on architecture and design. So, on the longer-term question that you ask, I think we're expecting that the competition is going to be on a similar process technology, and even in that case, I think we feel like we have a very strong roadmap going forward, and we will continue to drive both the CPU and the GPU roadmap very aggressively.
And the next question comes from the line of Harsh Kumar with Piper Sandler. Please proceed with your question.
Yes, hey, thanks for letting me ask the question, guys. I have two questions. Let me start off with the accelerator side. The question we get a lot from our customers is they want to understand the value proposition of the MI300. So, Lisa, I was hoping you could give us some understanding of the price versus power or compute power comparison. And then today, are you seeing the customers that are buying the MI300 primarily buying it for inferencing, or are they using it primarily for training? And maybe one for Jean. Jean, do you think it is possible for MI300 to finish the year at a run rate of about $1.5 billion?
Okay, Harsh. So, let me start with your question about the value proposition for MI300. Again, customers are using it for different reasons, but presume that there is a performance per dollar benefit to using AMD. So that's one piece of it. The other piece of it, though, is we intrinsically have more bandwidth and memory capacity on MI300X compared to the competition. And what that means is, for large language models that are many tens of billions of parameters, you could potentially do the workload in fewer GPUs. So, it's a substantial system savings and allows you to do much more work within the same system. In terms of what customers are using MI300 for today, I would say there are a number of customers using it for large language model inferencing, and there are also customers that are using it for training. So I think the whole point is being a strong partner. When you put these AI systems in place, they are sometimes mixed-use systems. So they would be used for both training and inference.
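As a rough illustration of the memory argument Lisa is making here (the parameter count and the 80 GB comparison point are assumptions chosen for illustration, not figures from the call; 192 GB is MI300X's publicly stated HBM3 capacity): a model with roughly 70 billion parameters stored in FP16 needs about

$$ 70 \times 10^{9}\ \text{parameters} \times 2\ \tfrac{\text{bytes}}{\text{parameter}} \approx 140\ \text{GB} $$

of weight memory, so it can fit on a single 192 GB MI300X, whereas an accelerator with 80 GB of memory would need at least two devices before any activation or KV-cache memory is counted. Fewer devices per model is the system-level saving referred to above.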
John, we have time for two more questions.
Yeah, Harsh. Let me answer your question about the MI300. Exiting Q4 2024, is it possible to get to $1.5 billion? It is possible, right, because, as Lisa mentioned earlier, we'll see sequential increases each quarter, more back-end loaded in the second half, and we do have supply for more than $3.5 billion. And of course, we will continue to make progress with our customers. So the math, yeah, it's possible, but right now we are really focused on the execution of the current $3.5 billion-plus.
And the next question comes from the line of Stacy Rasgon with Bernstein Research. Please proceed with your question.
Hi, guys. Thanks for taking my questions. For the first one, you talked about how you had expected a more shallow ramp of the MI300, and it's clearly doing better than that. So, the upside, I guess, in the near term, is that being pulled forward from the second half, or is it more of a step-up in demand in both the first half and the second half relative to what you were seeing before? Like, how do I interpret that more shallow comment that you made?
Sure, Stacy. I don't think it's a pull-forward of demand. I think what it is, is we wanted to see how long it would take for customers to fully qualify and get their workloads performing. So, yeah, that depends a lot on the actual engineering work that's done, and now that we're, let's call it, a quarter later, we've seen that it's gone really well. So, it's actually gone a bit better than our original forecast. And as a result, we've seen stronger demand signals, and customers are gaining confidence in their ability to deploy a significant number of MI300s this year.
Got it. Thank you. For my follow-up, I wanted to ask Matt's question a little more directly; you didn't quite answer it. The $400 billion number that you've got out there, is that just silicon and chips, or is there hardware and servers and stuff like that in that number as well? Like, what's in that number?
Yeah, I thought I had answered it, but yes, I'll answer it again. It is accelerator chips. It is not systems. So, think of it as GPUs and ASICs that will be there, those types of things. But it obviously includes memory and other things that are packaged together with the GPUs.
Yeah, memory will be quite significant, right? So, memory is a big portion of that, too.
And the final question comes from the line of Chris Danely with Citi. Please proceed with your question.
Hey, thanks for squeezing me in, team. I guess a question for Lisa: as MI300 revenue ramps, how do you see the customer concentration, let's say, a year or two from now? Do you think you'll have one or two customers that are in the double digits, or one or two that are half the revenue, or do you think it will be totally fragmented?
I don't think it will be one or two that are half the revenue, Chris. I think we are building this as -- really, we're happy to see sort of the broad adoption, as always, with sort of the large cloud partners. We might see sort of one or two that are higher than others, but I don't think you'll see the type of concentration that you mentioned.
Great. And then just a follow-up on somebody else's question on sort of Intel's roadmap versus TSMC's. So, I'm sure you're intimately familiar with TSMC's manufacturing roadmap, and we've all seen Intel open up the kimono on what they expect to happen over the next couple of years. I mean, do you think Intel is going to close the gap somewhat with what you've found here over the next couple of years, or do you think they'll be able to maintain the lead?
Look, I feel very good about our partnership with TSMC. They continue to execute extremely well. We'll see what happens over the next few years. But I'd like to kind of reemphasize what I said earlier: even in the case of process parity, we feel very good about our architectural roadmap and all the other things that we add as we look at our entire portfolio of CPUs, GPUs, DPUs and adaptive SoCs and kind of put them together to solve problems. I think we feel really good about what we can do with our customers. So, we're always going to be paying attention to sort of the process race, but I think we feel very good about sort of our strategy and how we continue to sort of push the envelope on the computing roadmaps.
And that is the end of the question-and-answer session. I would like to turn the floor back over to the AMD team for any closing comments.
Great, John. That concludes today's call. Thank you to everyone for joining us today.
And this concludes today's teleconference. You may disconnect your lines at this time. Thank you for your participation.