Cadence Design Systems, Inc. (CDNS) Q1 2024 Earnings Call Transcript

Published at 2024-04-22 20:54:09
Operator
Good afternoon. My name is Regina and I will be your conference operator today. At this time, I would like to welcome everyone to the Cadence First Quarter 2024 Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After the speakers’ remarks, there will be a question-and-answer session. [Operator Instructions] Thank you. I will now turn the call over to Richard Gu, Vice President of Investor Relations for Cadence. Please go ahead.
Richard Gu
Thank you, operator. I'd like to welcome everyone to our First Quarter of 2024 Earnings Conference Call. I'm joined today by Anirudh Devgan, President and Chief Executive Officer, and John Wall, Senior Vice President and Chief Financial Officer. The webcast of this call and a copy of today's prepared remarks will be available on our website, cadence.com. Today's discussion will contain forward-looking statements, including our outlook on future business and operating results. Due to risks and uncertainties, actual results may differ materially from those projected or implied in today's discussion. For information on factors that could cause actual results to differ, please refer to our SEC filings, including our most recent Forms 10-K and 10-Q, CFO Commentary, and today's earnings release. All forward-looking statements during this call are based on estimates and information available to us as of today, and we disclaim any obligation to update them. In addition, we'll present certain non-GAAP measures, which should not be considered in isolation from or as a substitute for GAAP results. Reconciliations of GAAP to non-GAAP measures are included in today's earnings release. For the Q&A session today, we would ask that you observe a limit of one question and one follow-up. Now I'll turn the call over to Anirudh.
Anirudh Devgan
Thank you, Richard. Good afternoon, everyone, and thank you for joining us today. I'm pleased to report that Cadence had a strong start to the year, delivering solid results for the first quarter of 2024. We came in at the upper end of our guidance range on all key financial metrics and are raising our financial outlook for the year. We exited Q1 with a better-than-expected record backlog of $6 billion, which sets us up nicely for the year and beyond. John will provide more details in a moment. Long-term trends of hyperscale computing, autonomous driving, and 5G, all turbocharged by the AI super-cycle, are fueling strong, broad-based design activity. We continue to execute our long-standing Intelligent System Design strategy as we systematically build out our portfolio to deliver differentiated end-to-end solutions to our growing customer base. Technology leadership is foundational to Cadence, and we are excited by the momentum of our product advancements over the last few years and the promise of our newly unveiled products.

Generative AI is reshaping the entire chip and system development process, and our Cadence.AI portfolio provides customers with the most comprehensive and impactful solutions for chip-to-systems intelligent design acceleration. Built upon AI-enhanced core design engines, our GenAI solutions, boosted by foundational LLM copilots, are delivering unparalleled productivity, quality-of-results and time-to-market benefits for our customers. Last week at CadenceLIVE Silicon Valley, several customers, including Intel, Broadcom, Qualcomm, Juniper, and Arm, shared their remarkable successes with solutions in our Cadence.AI portfolio. Last week, we launched our third-generation dynamic duo, the Palladium Z3 emulation and Protium X3 prototyping platforms, to address the insatiable demand for higher-performance and increased-capacity hardware-accelerated verification solutions. Building upon the successes of the industry-leading Z2 and X2 systems, these new platforms set a new standard of excellence, delivering more than twice the capacity and 50% higher performance per rack than the previous generation. Palladium Z3 is powered by our next-generation custom processor and was designed with Cadence AI tools and IP. The Z3 system is future-proof with its massive 48 billion gate capacity, enabling emulation of the industry's largest designs for the next several generations. The Z3 and X3 systems have been deployed at select customers and were endorsed by Nvidia, Arm, and AMD at launch.

We also introduced the Cadence Reality Digital Twin Platform, which virtualizes the entire data center and uses AI, high-performance computing, and physics-based simulation to significantly improve data center energy efficiency by up to 30%. Additionally, Cadence's cloud-native molecular design platform, Orion, will be supercharged with Nvidia's BioNeMo and Nvidia microservices for drug discovery to broaden therapeutic design capabilities and shorten time to trusted results. In Q1, we expanded our footprint at several top-tier customers and furthered our relationships with key ecosystem partners. We deepened our partnership with IBM across our core EDA and systems portfolio, including a broad proliferation of our digital, analog and verification software and expansion of our 3D-IC packaging and system analysis solutions.
We strengthened our collaboration with GlobalFoundries through a significant expansion of our EDA and system solutions that will enable GF to develop key digital, analog, RF/mmWave and silicon photonics designs for the aerospace and defense, IoT and automotive end markets. We announced a collaboration with Arm to develop a chiplet-based reference design and software development platform to accelerate software-defined vehicle innovation. We also further extended our strategic partnership with Dassault Systèmes, integrating our AI-driven PCB solution with Dassault's 3DEXPERIENCE Works portfolio, enabling up to a 5x reduction in design turnaround time for SOLIDWORKS customers.

Now let's talk about our key highlights for Q1. Increasing system complexity and growing hyperconvergence between the electrical, mechanical, and physical domains are driving the need for tightly integrated co-design and analysis solutions. Our System Design and Analysis business delivered steady growth as our AI-driven design optimization platforms, integrated with our physics-based analysis solutions, continued delivering superior results across multiple end markets. Over the past six years, we have methodically built out our system analysis portfolio, and with the signing of the definitive agreement to acquire BETA CAE, we are now extending it to structural analysis, thereby unlocking a multi-billion-dollar TAM opportunity. BETA CAE's leading solutions have a particularly strong footprint in the automotive and aerospace verticals, including at customers such as Stellantis, General Motors, Renault, and Lockheed Martin. Our Millennium supercomputing platform, delivering phenomenal performance and scalability for high-fidelity simulation, is ramping up nicely. In Q1, a leading automaker expanded its production deployment of Millennium to multiple groups after a successful early access program in which it realized tremendous performance benefits. Allegro X continued its momentum and is now deployed at well over 300 customers, while Allegro X AI, the industry's first fully automated PCB design engine, is enabling customers to realize significant 4x to 10x productivity gains. Samsung used Celsius Studio to uncover early design and analysis insights through precise and rapid thermal simulation for 2.5D and 3D packages, attaining up to a 30% improvement in product development time. And a leading Asian mobile chip company used Optimality Intelligent System Explorer AI technology and the Clarity 3D Solver, obtaining more than a 20x design productivity improvement.

Ever-increasing complexities in system verification and software bring-up continue to propel demand for our functional verification products, with hardware-accelerated verification now a must-have part of the customer design flow. On the heels of a record year, our hardware products continue to proliferate at existing customers, while also gaining some notable competitive wins, including at a leading networking company and at a major automotive semiconductor supplier. Demand for hardware was broad-based, with particular strength seen at hyperscalers, and over 85% of the orders during the quarter included both platforms. Our Verisium platform, which leverages big data and AI to optimize verification workloads, boost coverage, and accelerate root cause analysis of bugs, saw accelerating customer adoption.
At CadenceLIVE Silicon Valley, Qualcomm said that they used Verisium [Stem AI] (ph) to increase total design coverage automatically while getting up to a 20x reduction in verification workload runtime. Our Digital IC business had another solid quarter as our digital full flow continued to proliferate at the most advanced nodes. We had strong growth at hyperscalers, and over 50 customers have deployed our digital solutions on three-nanometer and below designs. Cadence Cerebrus, which leverages Gen.AI to intelligently optimize the digital full flow in a fully automatic manner, has now been used in well over 350 tapeouts. Delivering best-in-class PPA and productivity benefits, it's fast becoming an integral part of the design flow at marquee customers, as well as in DTCO flows for new process nodes at multiple foundries. In our custom IC business, Virtuoso Studio, delivering AI-powered layout automation and optimization, continued ramping strongly, and 18 of the top 20 semiconductor companies have migrated to this new release in its first year.

Our IP business continued to benefit from market opportunities offered by AI and multi-chiplet-based architectures. We are seeing strong momentum in interface IPs that are essential to AI use cases, especially HBM, DDR, UCIe, and PCIe at leading-edge nodes. In Q1, we partnered with Intel Foundry to provide design software and leading IP solutions at multiple Intel advanced nodes. Our Tensilica business reached a major milestone of 200 software partners in the HiFi ecosystem, the de facto standard for automotive infotainment and home entertainment. And we extended our partnership with one of the top hyperscalers in its custom silicon SoC design with our Xtensa NX controller.

In summary, I'm pleased with our Q1 results and the continuing momentum of our business. [Piling] (ph) chip and system design complexity and the tremendous potential of AI-driven automation offer massive opportunities for our computational software to help customers realize these benefits. In addition to our strong business results, I'm proud of our high-performance, inclusive culture and thrilled that Cadence was named by Fortune and Great Place to Work as one of 2024's 100 Best Companies to Work For, ranking number 9. Now I will turn it over to John to provide more details on the Q1 results and our updated 2024 outlook.
John Wall
Thanks, Anirudh, and good afternoon, everyone. I am pleased to report that Cadence delivered strong results for the first quarter of 2024. First quarter bookings were a record for Q1, and we achieved a record Q1 backlog of approximately $6 billion. A good start to the year, coupled with some impressive new product launches, sets us up for strong growth momentum in the second half of 2024.

Here are some of the financial highlights from the first quarter, starting with the P&L. Total revenue was $1.009 billion. GAAP operating margin was 24.8% and non-GAAP operating margin was 37.8%. GAAP EPS was $0.91 and non-GAAP EPS was $1.17. Next, turning to the balance sheet and cash flow, cash balance at quarter end was [$1.012 billion] (ph), while the principal value of debt outstanding was $650 million. Operating cash flow was $253 million. DSOs were 36 days, and we used $125 million to repurchase Cadence shares in Q1.

Before I provide our updated outlook, I'd like to share some assumptions that are embedded in our outlook. Given the recent launch of our new hardware systems, we expect the shape of hardware revenue in 2024 to weigh more toward the second half, as our team works to build inventory of the new systems. Our updated outlook does not include the impact of our [pending] (ph) BETA CAE acquisition, and it contains the usual assumption that export control regulations that exist today remain substantially similar for the remainder of the year.

Our updated outlook for fiscal 2024 is revenue in the range of $4.56 billion to $4.62 billion. GAAP operating margin in the range of 31% to 32%. Non-GAAP operating margin in the range of 42% to 43%. GAAP EPS in the range of $4.04 to $4.14. Non-GAAP EPS in the range of $5.88 to $5.98. Operating cash flow in the range of $1.35 billion to $1.45 billion. And we expect to use at least 50% of our annual free cash flow to repurchase Cadence shares. With that in mind, for Q2 we expect revenue in the range of $1,030 million to $1,050 million. GAAP operating margin in the range of 26.5% to 27.5%. Non-GAAP operating margin in the range of 38.5% to 39.5%. GAAP EPS in the range of $0.73 to $0.77. Non-GAAP EPS in the range of $1.20 to $1.24.

And as usual, we've published a CFO commentary document on our investor relations website, which includes our outlook for additional items, as well as further analysis and GAAP to non-GAAP reconciliations. In summary, Cadence continues to lead with innovation and is on track for a strong 2024 as we execute on our Intelligent System Design strategy. I'd like to close by thanking our customers, partners, and our employees for their continued support. And with that, operator, we will now take questions.
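(For reference, a minimal sketch in Python of the arithmetic implied by this guidance; the inputs below are the midpoints quoted above, and the derived first-half/second-half split is an illustrative reader calculation, not additional company guidance.)

    # Reader arithmetic from the guidance midpoints quoted on the call (not company-provided figures).
    q1_actual = 1.009                    # Q1 2024 revenue, $B (reported)
    q2_guide_mid = (1.030 + 1.050) / 2   # Q2 revenue guide midpoint, $B
    fy_guide_mid = (4.56 + 4.62) / 2     # full-year revenue guide midpoint, $B

    first_half = q1_actual + q2_guide_mid             # ~2.05
    implied_second_half = fy_guide_mid - first_half   # ~2.54, i.e. second-half weighted
    print(round(first_half, 2), round(implied_second_half, 2))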
Operator
[Operator Instructions] Your first question comes from the line of Joe Vruwink with Baird. Please go ahead.
Joe Vruwink
Great. Hi everyone. Thanks for taking my questions. Maybe just to start with your outlook for the year. Can you perhaps provide your second-half assumption before this quarter versus where it stands today, in terms of just recalibrating around delivery schedules? And maybe a good way to frame it: I think in the past you gave a share of this year's revenue that was going to come from upfront products. Is that still the right range? But if it is the right range, you can obviously see more is going to end up landing in the second half. And so how does that compare to your original views, or how is that, I guess, skewed relative to what might have been the expectation a quarter ago?
John Wall
That's a great question, Joe. And I think you've hit on the main point there, that upfront revenue is driving a lot of the quarter-over-quarter trends this year. When I look at last year, you recall that we had a large backlog of hardware orders, and we dedicated 100% of the hardware production in Q1 to deliver that hardware in Q1 2023. As a result, 20% of our Q1 '23 revenue was from upfront revenue sources. In contrast, this Q1 only 10% of total revenue is coming from upfront revenue. To reflect on where we thought we were this time last quarter, we still expect that upfront revenue will probably be 15% to 20% this year. I mean, around the midpoint there is an expectation of 17.5% for upfront revenue this year, and a midpoint of say 82.5% for recurring revenue. That's still the same as what we thought this time last quarter. That contrasts with last year, when I think 16% of our revenue was upfront. And to put dollar terms on it, last year $650 million of our revenue was upfront. This year, we're expecting roughly $800 million to be upfront. But first half versus first half, last year we had $350 million in the first half and $300 million in the second half, because we had prioritized all those hardware shipments and it skewed the numbers toward the first half last year. So $350 million and $300 million, ending with the $650 million of upfront revenue last year. This year it looks more like $250 million in the first half and $550 million at the back end. That's largely a result of the fact that we had a record bookings quarter in Q1 and a record backlog. We've got a substantial backlog in IP that we're scaling up to deliver, and a lot of that revenue falls into the second half. And also, we launched these new hardware systems last week. Hardware revenue is expected to be more second-half weighted now because, based on what we've heard -- and I'll let Anirudh chime in here on the technical aspects of the new hardware systems -- we expect them to be so popular that a lot of demand will shift to those new hardware systems, and we'll have to ramp up production to be able to deliver that demand. So it shifts some of the upfront revenue to the second half. So I think upfront revenue is really driving a lot of the skewed metrics. Anirudh, do you want to talk about Z3?
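(For reference, a minimal sketch in Python of the upfront-revenue arithmetic John walks through here; the inputs are the figures quoted on the call, and the implied total is an illustrative cross-check, not an additional company figure.)

    # Upfront revenue shape, $M, as quoted on the call (reader arithmetic).
    upfront_2023 = 350 + 300     # $650M total in 2023, weighted toward the first half
    upfront_2024 = 250 + 550     # ~$800M expected in 2024, weighted toward the second half

    # A ~17.5% upfront midpoint implies total revenue of roughly 800 / 0.175, about $4.57B,
    # consistent with the full-year revenue guide of $4.56B to $4.62B.
    implied_total = upfront_2024 / 0.175
    print(upfront_2023, upfront_2024, round(implied_total))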
Anirudh Devgan
Yeah, absolutely. So we are very proud of the new systems we launched. As you know, we are a leader in hardware-based emulation with Z2 and X2, and the last time we launched them was in 2021. So that was like a six-year cycle: Z1 and X1 were 2015, and then Z2 and X2 were 2021. So what I'm particularly pleased about is, we have a major, major refresh, you know, it's a game-changing product, but it was also developed in only three years. So in 2024, we have a new refresh and it's a significant leap in terms of capacity. And even last week at our CadenceLIVE conference, Nvidia and Jensen talked about how they use Z2 to design their latest chips like Blackwell. And it's also used by all the major silicon companies and system companies to design their chips. But what is truly exciting about Z3 and X3 is that this is a big leap; it's like Z3 is 4 times or 5 times more capacity than Z2, and it's much higher performance. So it sets us up nicely for the next several years to be able to design the next several generations of the world's largest chips. So that's the right thing to do. And the reason we can do it in three years versus six years is, we did our own design internally at Cadence on an advanced TSMC node. So we're using all our latest tools, all the latest AI tools, and we are using all our IP. It's a very good validation of our own capabilities that we can accelerate our design process, but it really sets up hardware verification and the overall verification flow around the new systems. Now, as a result, normally there is a transition period when you have a new system, and we went through that twice already in the last 10 years. The customers naturally will go to the new systems, and then we build them over the next one or two quarters. But that is the right thing to do for the business long-term. It's good to accelerate this, because these AI chips are getting bigger and bigger, right? So the demand for emulation is getting bigger and bigger, and I can give you more stats later. So we felt that it was important to accelerate the development of the next-generation system, to get ready for this coming AI wave for the next several years, and we are very well positioned. As a result, it does have some impact quarter-to-quarter, but that's well worth it in the long run.
Joe Vruwink
That's all very helpful, thank you. Second question, I wanted to ask how some of the things you just spoke of, but also AI, start to change the frequency of customers engaging with you and how they approach renewals. So you just brought up how the velocity of the [Harbor] (ph) platforms has improved: the first generation to the next was six years, and now we're down to a three-year new product cycle. When I listened to your customers last week talk about AI, they're not just generating ML models that can be reused, but then, of course, each run becomes better if you're incorporating prior feedback. So it would just seem like AI itself not only creates stickiness, but there would be an incentive to deploy it maybe more broadly than a customer traditionally would think about deploying new products. Does that mean the average run rate of a renewal ends up becoming much bigger, and we'll start to see that flow into the backlog?
Anirudh Devgan
Yeah, that's the correct observation. You know, as we have said before, AI has a profound impact on Cadence and a lot of benefit to our customers. So there are three main areas. One is, you know, the build-out of the AI infrastructure, whether it's Nvidia or AMD or all the hyperscalers. And we are fortunate to be working with all the leading AI companies. So that's the first part. And in that part, they design bigger and bigger chips, because the big thing in AI systems is they are parallel, so they need bigger and bigger chips. So the tools have to be more efficient, and the hardware platforms have to support that. And that's why the new systems. Now, the second part of AI is applying AI to our own products, which is the Cadence.AI portfolio. And like you mentioned, last week we had several customers talking about success, you know, with that portfolio, including, like I mentioned, Intel, Broadcom, Qualcomm, Juniper, Arm, and the results are significant. So we are no longer in kind of a trial phase of whether these things will work. Now we're getting pretty significant improvements. Like we mentioned, MediaTek got like a 6% power improvement, and one of the hyperscale companies got an 8% to 10% power improvement. These are significant numbers. So it is leading to deployment of our AI portfolio. And I think we mentioned the AI run rate on a trailing 12-month basis is up 3x. And I think the design process already was well automated. EDA has a history of automating design over the last 30 years. So AI is in a unique position, because you need the base process to be somewhat automated to apply AI. So we were already well automated, and now AI can take it to the next level of automation. So that's the second part of AI, which I'm pretty pleased about, applying it to our own products. And then the third part of AI proliferation is new markets that open up, things like data center design with Cadence Reality that we announced, or Millennium, which is designing systems with acceleration, or digital biology. Those take a little longer to ramp up, but we have these three kinds of impact of AI: the first being direct design of AI chips and systems, the second applying AI to our own products, and the third being new applications of AI.
Joe Vruwink
That's great. Thank you very much.
Operator
Your next question will come from the line of Charles Shi with Needham & Company. Please go ahead.
Charles Shi
Thanks. Good afternoon. I just wanted to ask about the China revenue in Q1. It looks pretty light. I just wonder whether that's part of the reason that's weighing on your Q2. I understand you mentioned that you're going through that second-gen to third-gen hardware transition right now. Maybe that's another factor, but from a geographical standpoint, what's the outlook for China for the rest of the year, and specifically Q2? Thanks.
John Wall
Hi Charles, that's a great observation. If you recall, this time last year we were talking about a very strong Q1 for China, for functional verification, and for upfront revenue. I think those three things are often linked. Contrast that with this year: China is down at 12%, upfront revenue is lower at 10% compared to 20%, and functional verification, of course, is lapping those really tough comps from when we dedicated 100% of production to deliveries. I think when you look at China, we're blessed that we have the geographical diversification that we have across our business. But what we're seeing in China is strong design activity. And while the percentage of revenue dropped to 12%, that pretty much goes in line with the mix: a lower hardware, lower functional verification, lower upfront revenue quarter would generally lead to a lower China percentage quarter. But we have good diversification. While China is coming down, we can see other Asia increasing, and our customer base is really mobile. That geographical mix of revenue is based on consumption and where the products are used. But as we do more upfront revenue in the second half, we'd expect the China percentage to increase.
Charles Shi
Thanks. I want to ask another question about the upcoming ramp of the third-generation hardware. What exactly is the nature of the demand? Is it replacement demand, like your customers replacing Z2 X2 with Z3 X3, or do you expect a great deal more customers adopting Z3 X3? And more importantly, I think you mentioned about a 4 times to 5 times capacity increase; they can design much larger chips with a lot more transistors. How much of an ASP uplift are you expecting from the Z3 X3 versus Z2 X2?
Anirudh Devgan
Charles, all good observations. So let me try to answer that one by one. In terms of your last point, normally if the system has more capacity, like this one has, it can do more. So it gives more value to our customers, and we are able to get more value back. So typically newer systems are better that way for us and better for the customer. And to give you an example, I mean, these things are pretty complicated. We'll just take Z3, for example. For Z3 itself, we designed this advanced TSMC chip by ourselves, and this is one of the biggest chips that TSMC makes. One rack will have, like, more than a hundred of these chips, and then we can connect up to 16 racks together. So if you do that, you have thousands of full-reticle chips emulating, and these are all liquid-cooled, connected by optical and InfiniBand interconnect. So this is truly a multi-rack supercomputer. And what it can do is emulate very, very large systems very, very efficiently. So even with Z2, like Nvidia talked about last week, even Blackwell, which is the biggest chip in the world right now with 200 billion transistors, was emulated on a few racks of Z2. Okay, so now with 16 racks of Z3, we can emulate chips which are like 5 times bigger than Blackwell, which is already the biggest chip in the world, right? So that gives a lot of runway for our customers, because with AI the key thing is that the capacity of the chips needs to keep going up, and not just for a single chip. Look at Blackwell: they have two full-reticle chips on a package. So as you know, you will see more and more, not just big chips on a single node, but multiple chips in a package for this AI workload, and also 3D stacking of those chips. So what this allows is not just emulating a single large chip, but multiple chips, which is super critical for AI. So I feel that this puts us in a very good position for all this AI boom that is happening, not just with our partners like Nvidia and AMD, but also all the hyperscaler companies. And so the primary demand will be that more-capacity chips require more hardware. And then X3 will go along with that for the software prototyping, which is done on FPGAs. And then we have some unique workload capabilities: apart from the capacity and performance of these big systems being much better, there are new features for low power and for analog emulation that help in the mobile market. So we talked about Samsung working with us, especially on this four-state emulation, which is a new capability in emulation over the last 10 years. So it's a combination of new customers, a combination of competitive wins, but also continuing to lead in terms of the biggest chips in the world, which are required for AI processing now and, you know, years from now. I think the size of these chips, as you know, is only going to get bigger in the next few years, and we feel that Z3 X3 is already set up for that.
Charles Shi
Thanks.
Operator
Your next question will come from the line of Lee Simpson with Morgan Stanley. Please go ahead.
Lee Simpson
Great, thanks. And thanks very much for squeezing me in. Just wanted to go back to what you said last quarter, if I could. It did seem as though you were saying that there was an element of exclusivity around your partnership with Arm, your EDA partnership around Arm Total Design. I wondered how that was developing, if indeed you're collaborating to accelerate the development of custom SoCs using Neoverse. It looks as though it's pulled in quite a lot of work, or continues to pull in quite a lot of work, around functional verification. And I guess as we look now at third-generation tool sets for Palladium and Protium, leaving aside some of the rack-scale development that we're seeing out there, I wonder whether or not Arm Total Design development work is pulling in, or is likely to pull in, some of that second-half business. I mean not just hyperscalers, but perhaps in AI PCs and beyond. Thanks.
Anirudh Devgan
Yeah, thank you for the question. I mean, we are proud to have a very strong partnership with Arm and with our joint customers, Arm and Cadence customers. I think we have had a very strong partnership over the last 10 years, I would like to say, and it's getting better and better. You know, and yes, we talked about our new partnership on Total Compute. Also, I think this quarter we talked about our partnership with HARMAN Automotive. Because what is interesting to see, which of course you know this already, but Arm continues to do well in mobile, but also now in kind of HPC server and automotive end markets. So we are pleased with that partnership, you know, and they are also doing more subsystems and higher order development and that requires more partnership with Cadence in terms of the backend, Innovus and Digital Flow and also verification with hardware platforms and other verification tools.
Lee Simpson
Great, maybe just a quick follow-up. You know, we've seen quite a bit of M&A activity from yourselves of late, you know, including the IP house acquisition of Invecas. You've bought the Rambus assets, and you've now acquired BETA in the computer-aided engineering space for the car. There's been quite a lot of speculation in the market about the possibility of a transformative deal being done. I guess, given that we have you on the mic here, maybe we could get a sense from yourself: what would be the sort of thing that a business like Cadence could look for? Would you look for a high-value vertical contiguous to what you've already addressed, let's say in automotive, or would it be something more waterfront, a business that spans several verticals, maybe being more relevant across the industrial software space? Could that be the sort of ambition that Cadence would have, given the silicon-to-systems opportunities that are emerging? Thanks.
Anirudh Devgan
Well, thank you for the question. A lot of times there are a lot of reports, and we normally don't comment on these reports, and people get very creative with this reporting. But what I would like to say is that our strategy hasn't changed. It's the same strategy from 2018. First of all, I want to make sure that we are focused on our core business, which is EDA and IP. And, yes, I launched this whole initiative on systems and it's super critical, you know, chips, silicon to systems. But the one thing that I even mentioned last time, what is different from 2018 to now, is that EDA and IP is much more valuable to the industry. You know, our core business itself has become much more valuable because of AI. So our first focus is on our core business. We are leading in our core business. Our first focus is on organic development. That's what we like. We always say that's the best way forward. Now, along with that, we will do some, and we have done, like you mentioned, some opportunistic M&A, which is usually, I would like to say, tuck-in M&A. And that adds to our portfolio; it has helped us in system analysis. We also did it in IP, because I'm very optimistic about IP growth this year. And we talked about our new partnership with Intel Foundry in Q1. Also, we acquired the Rambus IP assets, which are HBM, and HBM is of course a critical technology in AI. We are seeing a lot of growth in HBM this year. Now, we have booked that business, but the deliveries will happen towards the second half of the year, as John was saying earlier. So that's the thing. Now in terms of BETA, it made sense because it is a very good technology, and it's the right size for us. We are focused on finishing that acquisition and also integrating it -- that will take some time. So that's our primary focus in terms of M&A. And it's a very good technology. They have a very good footprint in the automotive and aerospace verticals. So just to clarify, we have the same strategy from '18, and that's working well. It's primarily organic, with very synergistic computational software, mostly tuck-in acquisitions.
Lee Simpson
That's great. Thank you.
Operator
Your next question comes from the line of Ruben Roy with Stifel. Please go ahead.
Ruben Roy
Thank you. Anirudh, I had a follow up on the Z3 X3 commentary that you had. And one of the things I was thinking about, especially as you talked about the InfiniBand low latency network across the multiple racks of Z3, you had mentioned that you're up to 85% attach rate of both systems with the Z2 X2. I would imagine that would continue to go up and if you can comment on if the new systems incorporate InfiniBand across Z3 and X3 and if so, do you expect that to be sort of a selling point for your customers that are designing these big chips, which in many cases these days have software development attached to the design process. Do you think that the attach rates continue to move higher for both systems?
Anirudh Devgan
Yes, absolutely. I think I started this in, I forget now, 2015 or '16, the Dynamic Duo, which is, we have a custom processor for Palladium and we use FPGAs for Protium. So this is what we call the dynamic duo, because Palladium is best-in-class for chip verification and RTL design, and Protium is best-in-class for software bring-up, and with a common front end. So as a result, over the years, this has become the right approach. And our customers are fully embracing both these systems, as they invariably do both chip development and software development. I mean, the perfect example is, of course, our long-term development partner, Nvidia. Nvidia is no longer doing just chip development; they have a massive software stack. And that's true for all the hyperscalers. So we see that trend continuing. And now, to your question, we do use, you know, Nvidia's products like InfiniBand in our systems on Z3, because Z3 is a very unique architecture. It requires very, very high-speed interconnect; it's almost like a supercomputer. So it requires optical and InfiniBand in Z3. Now in X3, we are using AMD FPGAs, which are fabulous, but it does not require that tight interconnect speed. So InfiniBand is used more in Z3 versus X3. But X3 is a great system too: we're using the latest AMD FPGAs, it has 8x higher capacity than X2, and all kinds of innovation on the software side as well. So I'm very confident that we have true leadership in these hardware platforms, both Palladium and Protium. And we're also pleased, like I said earlier, that we are able to refresh them much sooner than the market expected, given our track record. And we are seeing a lot of demand for both of these systems together going forward.
Ruben Roy
That's helpful. Thank you, Anirudh. And then a follow-up for John. Anirudh mentioned the HBM IP business, booked and shipping in the second half. I was wondering if you can kind of give us a bigger-picture update on how you're viewing IP in general, in terms of bookings relative to the ramps of those IP sales. Is the entire segment sort of second-half weighted? Should we think about the second half ramping at a heavier weight than the first half? Any update there would be helpful.
John Wall
Yeah, thanks Ruben. I mean, Q1 IP performance and bookings were ahead of our expectations, and everything remains on track there for a very strong growth year in 2024 for the IP business. Of course, the timing of revenue recognition depends on the timing of deliveries, but we had a tremendous bookings quarter in Q1 and we're preparing to scale for a number of deliveries of IP in the second half, and we expect IP to have a very strong year this year. We're pleased with the overall business momentum, but we need to scale up some headcount to prepare to deliver on some of the larger backlog orders.
Anirudh Devgan
Yeah, one thing I want to highlight, I think you may have seen this, is our partnership with Intel and IFS. That was concluded in Q1. And so it's really good to see, you know, [Pat] (ph) and Intel investing more in the foundry business and also working more closely with us. So that's also a key contributor to IP, but like John said, we have to hire the people; we need to port our portfolio to the Intel process, okay? And that takes some time. So more of that will come towards the end of the year and next year. But we are pleased with that new partnership on IP.
Ruben Roy
Very helpful. Thanks, guys.
Operator
Your next question comes from the line of Jay Vleeschhouwer with Griffin Securities. Please go ahead.
Jay Vleeschhouwer
Thank you. For you, John, first, and then Anirudh. So for John, thinking back to a recent conversation we had, could you comment, as a measure of EDA market health or dynamics, on what you're seeing or expecting in terms of intra-contract new or expansion business? You know, this is an ongoing phenomenon in EDA; maybe talk about what you're seeing in that kind of business beyond the customer renewal schedule. And then relatedly, how are you thinking about pricing for this year, given that EDA generally has substantially better pricing capacity than you might have had in years past? And then my follow-up for Anirudh.
John Wall
Sure, thanks, Jay. Great question. I think what you're getting at there is what we would call add-ons. Typically, we have the very predictable software renewal business. And you'll see in the recurring revenue part of our business, I think we're at double digit revenue growth. But over the past few years, I think that's been at low teens. But we're seeing that a number of customers that have adopted AI tools are maybe not coming back and purchasing add-ons as frequently, but right now we're focused on proliferating those AI tools into accounts. I think there's an opportunity to increase pricing there, but maybe now is not the right time. I think we have such strong momentum on the upfront revenue business. We're preparing for scale into the second half there. But we'll have plenty of revenue growth in the second half of the year. We can continue to focus on proliferating our AI tools and technology into accounts. And pricing is something certainly we can focus on more intently in future years, but right now the focus is on proliferation. Anirudh, do you have anything to add to that?
Anirudh Devgan
No.
Jay Vleeschhouwer
Okay, Anirudh, so to [piggyback] (ph) on your conference last week, particularly the Gen AI track, it was interesting of course to hear the adoption presentations by Renesas, Intel and so forth. But what seemed to be taking place is a heavy focus on Cerebrus, which makes sense; it is the one that's been in the market the longest. So perhaps you could talk about how you are thinking about the adoption curve for the other brands aside from Cerebrus? And are there any critical parts of the design flow that might not necessarily be amenable to AI enablement? We hear a lot about implementation, analog, verification, but we don't hear a lot about AI as being applicable to synthesis, for example. So maybe talk about those areas where it makes a lot of sense, and those where perhaps it will remain more or less conventional technology.
Anirudh Devgan
Yes. Thanks, Jay, for the question. So as you know, we have five major AI platforms, with Cerebrus and digital implementation being the one that has been out the longest, and Cerebrus is doing quite well, like you noted. We also commented on more than 350 tapeouts and a lot of PPA improvement. But all the other ones are doing well, too. Sometimes we have, like, too many products and we don't talk enough about the others, but verification, like Verisium, is doing quite well. And I mentioned Qualcomm last week talked about pretty impressive results, because verification, as you know, is an exponential problem: as the chips get bigger, the verification task gets exponentially bigger. So the benefit of AI can be significant in verification. So I think you will see in the next few quarters and years that verification will be as important as implementation in terms of benefits of AI. And then the other area I'd like to highlight is PCB, Allegro, and packaging, because that area hasn't seen that much automation. Allegro is a leading platform for packaging and PCB, and we are really proud of Allegro X AI. And we talked about several customers; including Intel last week, they talked about a 4x to 10x improvement using Allegro X AI in PCB. So apart from digital, I think the next two, I feel, are verification and Allegro and PCB. And then the area that hasn't done as well, I mean, is not so much design optimization but design generation. And I think there, these LLM-based models do provide a lot of promise. So historically, we haven't done as much design generation, which is almost pre-RTL, right, going from spec to RTL. That's truly the creative part of the design process. And then once you have RTL, it is more the optimization part in digital and verification. So I think that's where we have to see, but there are some initial results, which we haven't talked about much but I think mentioned last week. It's still in early stages, but we worked with one or two customers in which we took, like, a 40-, 50-page spec document, this English document, and were able to automatically generate RTL from it, okay? And the RTL quality is pretty good. So again, we have to see how that goes, but that requires these really advanced LLM capabilities. So that's something to be seen. But if that works well, that could be another very interesting kind of application of Gen AI.
Jay Vleeschhouwer
Okay, very good. Thank you.
Operator
Your next question comes from the line of Gary Mobley with Wells Fargo Securities. Please go ahead.
Gary Mobley
Hi guys. Thanks so much for taking my questions. John, I appreciate the fact that China revenue in the first quarter was down against a tough year-ago comp on the hardware verification side as you worked through backlog. And I assume that you still expect China to be dilutive to overall company growth in the fiscal year. Could you speak to whether or not you are starting to see US export controls begin to impact your ability to do business there, whether that be a function of restrictions around gate-all-around or certain China customers added to the entity list?
John Wall
Hi, Gary, thanks for the question. And just to clarify, I think last quarter I said I expected China revenue to be flat to down this year. I think we still expect that. And that's because last year was such a strong year, and there was kind of an oversized portion of that hardware catch-up that we had that was delivered to China. So I think it skewed the China number higher last year. So we are lapping pretty tough comps. But the design activity in China remains very strong, though. And we have a lot of diversification. There is strength in other parts of the world. But we're very comfortable with the 2024 outlook, and we factored all the impact of geopolitical risk in there to the best we can and tried to de-risk China as much as we can in our guide.
Gary Mobley
Okay. For the follow-up, I want to ask about bookings trends for the balance of the year. You obviously highlighted better-than-seasonal Q1 booking trends. How would you expect the bookings to play out for the balance of the year? And to what extent will Z3 and X3 factor into that for the balance of the year? Thank you.
John Wall
Yes. I mean, it's hard to predict in terms of Z3 and X3; we definitely need another quarter to see that. We expect strong demand and we expect strong revenue growth, and we are preparing for scale into the second half on the hardware side, but we need to at least see another quarter of demand. And normally with hardware, I don't like taking up the year for hardware until I see the pipeline in the summer. So we are trying to be conservative there. But generally on the hardware side, yes, we are basically preparing for scale; we'll build those systems as quickly as possible. We expect strong demand there.
Gary Mobley
Okay. Thank you.
Operator
Your next question comes from the line of Jason Celino with KeyBanc Capital Markets. Please go ahead.
Jason Celino
Hi, thanks for [heading] (ph) me. And Anirudh, congrats to your R&D team. [I] (ph) -- impressed that they reduced the cycle there, all while designing that among [box] (ph), too, right? So maybe first, for the Z3 and X3, do they become available in Q3? I guess when can customers start putting orders in for that?
Anirudh Devgan
Yes. First of all, thanks. And yes, they become available now, okay? But it will ramp in Q3 and then Q4. We already have them running at several early customers. I mean, normally when we announce something, as you know, like with one of our lead partners, they have been running for three months already and it's very stable. But in general, it will be more Q3 and then Q4, because normally, in any system, there is like a three-month to six-month kind of overlap. So we will still sell Z2 X2 and then move to Z3 X3; that's a natural part. And that's also contributing to this quarter-by-quarter variation a little bit, but it will ramp. Q3 will be bigger, and then Q4 should be bigger than that.
John Wall
Yeah, we try to derisk the guide -- with the assumption that there is going to be strong demand for the newer systems. But it will give us the opportunity to put some of the older systems into the cloud because we have a large underserved community that want to use our emulation capacity. But we haven't had a lot of capacity to share with them through our cloud offering. To the extent we do that, that will lead to ratable revenue though, because I think when it is used in the cloud, you get revenue over time, whereas when we deliver and they use it on-prem, we take revenue upfront.
Anirudh Devgan
But the demand it -- takes like one to two quarters to ramp…
Jason Celino
Okay. Because that's kind of what I was going to ask next: I think last time, in 2021, you had like a six-month period where you were selling both. And I think you were trying to clear inventory for the Z1 and X1. It doesn't sound like you will be trying to do that again. Because when I think about this Q2 air pocket, is it a function of customers waiting for Z3 X3? Or is it a function of them maybe not wanting to buy the older version?
John Wall
Well, we've de-risked the guide on the assumption that many customers might wait. But we intend to sell them side by side. To the extent the customers wait, it will shift some hardware revenue into the second half of the year, and we have anticipated that. So that's within the guide. To the extent the customers continue to buy Z2, and we're not putting those into the cloud but selling those outright as well, well, then that will change the shape of the revenue profile. But we expect that the strength of this new system will trigger a lot of demand for it.
Jason Celino
Okay, perfect. Thank you both.
Operator
Your next question comes from the line of Vivek Arya with Bank of America Securities. Please go ahead.
Vivek Arya
Thank you for taking my questions. I think you mentioned second-half growth will be driven a lot more by hardware. Do you think you will see all the benefit of the hardware refresh within this year? Will it be done, or will it continue into 2025? I guess my bigger question really is that if I exclude the upfront benefit from last year and this year, your recurring business is expected to grow about 10%. And I'm curious, Anirudh, is that in line with the kind of recurring revenue growth you are expecting, or that we should be expecting going forward, along with periodic hardware refreshes -- or is that not the right way to interpret the core recurring part of your business?
Anirudh Devgan
Very good question. First of all, in non-recurring, it's not just hardware, but it's also IP in terms of the second half, because like we mentioned, we have new IP business driven by HBM and AI and also by Intel IFS. So that is also back-end loaded along with hardware. And then hardware -- normally when we launch a new system, it takes one or two years for it to fully ramp. So even though we are not commenting about next year, I would be surprised if, this time, it's only a six-month impact. These things are built to be used in design for the next five years, seven years. So the impact will be not just this year but in the following years as well. And in terms of recurring revenue, I think the best way, like we have said, is to look at it on a three-year CAGR basis, because there could be some fluctuation overall. And overall, we are pleased with the recurring revenue growth, and we go from there.
John Wall
Yes. And Vivek, if I could, I'd like to carry in some of Gary's question earlier that I don't think I addressed, because he was asking about the bookings profile for the year. Q2 for software renewals, I think, is our [latest] (ph) software renewals quarter for the year. But I think we explained last quarter that we expect the weighting of bookings first half to second half to be about 40-60 this year. The recurring revenue right now in the guide is about double digits -- above 10%. And in the past, it's been about 13%. Now, we are not really anticipating a huge number of add-ons to grow that above 10%; to the extent that that comes through, it will be upside to the guide. But what we try to do when we do the guide is de-risk for the risks that we can see.
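(For context, a rough reconstruction in Python of where the "above 10%" recurring-revenue growth figure comes from, using numbers quoted elsewhere on the call; the 2023 total is backed out from the "$650 million, about 16% of revenue" upfront comments, so treat it as an approximation rather than a company-provided calculation.)

    # Reader approximation of recurring-revenue growth, $M.
    total_2023 = 650 / 0.16            # 2023 upfront was ~$650M and ~16% of revenue, so ~$4.06B total
    recurring_2023 = total_2023 - 650  # ~$3.41B

    total_2024 = (4560 + 4620) / 2     # full-year 2024 revenue guide midpoint
    recurring_2024 = total_2024 - 800  # ~$800M expected upfront, so ~$3.79B recurring

    growth = recurring_2024 / recurring_2023 - 1
    print(round(growth * 100, 1))      # roughly 11%, i.e. "above 10%"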
Vivek Arya
Thank you. For my follow-up question, on incremental EBIT margin: do you think this greater mix of hardware is impacting the incremental EBIT margin? I think, if I calculate it correctly, the new guidance is still below the 50% incremental, right, or right about there -- which is lower than what you have had the last two years, three years. Is that the right interpretation? And what can change that?
John Wall
Yes, Vivek, I think what you are referring to really is that, I mean, for what, seven years in a row now, we think we've been achieving over 50% incremental margins. It's a matter of pride here; we try to achieve that every year, and we'll certainly be trying to achieve that this year. I think we are in the high 40s. It's probably about 47% when you look at this guide right now. I think one of the biggest challenges with something like that is, you know, we do small tuck-in M&A. I don't want to go over the answer Anirudh gave to Lee Simpson, but organic is delicious here. At Cadence, we focus on innovation and growing with organically driven products, and then with small tuck-in M&A. But to the extent that we do some larger M&A -- and of course, we have BETA CAE, which apparently is the gold standard in structural simulation, so that's a big acquisition for us, though I think the size of that probably still qualifies as a small tuck-in -- those M&A transactions typically are headwinds to that incremental margin calculation in the short term; they will be beneficial in the long term. But in the short term, M&A can be dilutive pretty much in the first year and then becomes accretive later. When we look at our incremental margin, that's a headwind. But we try to overcome that headwind, because normally all we do is these small tuck-in M&As. So I haven't given up on 50% incremental margin for this year. It's a challenge, but we'll do our best to achieve this.
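(For context, incremental margin here means the year-over-year change in non-GAAP operating income divided by the change in revenue. A hedged sketch in Python follows; the 2024 inputs are the guidance midpoints given earlier on the call, while the 2023 inputs are illustrative placeholders, since prior-year actuals are not stated on this call.)

    # Incremental margin = delta(non-GAAP operating income) / delta(revenue).
    rev_2024 = (4560 + 4620) / 2        # $M, full-year revenue guide midpoint
    margin_2024 = (0.42 + 0.43) / 2     # non-GAAP operating margin guide midpoint
    op_income_2024 = rev_2024 * margin_2024

    rev_2023 = 4090.0                   # PLACEHOLDER assumption for 2023 revenue, $M (not from this call)
    op_income_2023 = rev_2023 * 0.42    # PLACEHOLDER assumption for 2023 non-GAAP margin

    incremental = (op_income_2024 - op_income_2023) / (rev_2024 - rev_2023)
    print(round(incremental, 2))        # ~0.47 with these placeholders, in the high 40s John describes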
Vivek Arya
Thank you.
Operator
Your final question will come from the line of Harlan Sur with JPMorgan. Please go ahead.
Harlan Sur
Good afternoon. Thanks for taking my question. After a strong 2023, SDA is starting the year relatively flattish and down about 5% to 6% sequentially. I think it's an unusual starting point for SDA, especially given all of the drivers that you guys have articulated. Is SDA expected to also be more second-half loaded? And do you expect SDA -- this is ex BETA CAE -- to grow in line with or faster than your overall corporate growth target for the full year?
John Wall
Yes, Harlan. That's a great question. And thanks for highlighting that, because I had that on my list of things to say. I think there's something funny going on with the rounding when you apply the growth rates for SDA for Q1 over Q1; the actual growth rate is probably high single digits Q1-over-Q1. I know that's lapping tough comps against Q1 '23. If you look on a two-year CAGR basis, I think it's up about 17% per annum for SDA. But we're expecting strong SDA growth again this year, and it will be higher than the Cadence average. That's our expectation.
Harlan Sur
Great. Thanks for that. And then, Anirudh, lots of new accelerated compute AI SoC announcements just even over the past few weeks, where we saw the flagship Blackwell GPU announcement by one of your big customers, Nvidia. But we've actually seen even more announcements by your cloud and hyperscale customers bringing their own custom [ASIC] (ph) to the market: Google with TPU v5, Google with their Arm-based CPU ASIC; Meta unveiled their Gen 2 TPU AI class of chips as well. And in addition to that, their roadmaps seem to be accelerating. So can you give us an update on your systems and hyperscale customers? I mean, are you seeing the design activity accelerating within this customer base? And is the contribution mix from these customers rising above that sort of roughly 45% level going forward?
Anirudh Devgan
Yeah, Harlan, that's a very good observation. The pace of AI innovation is increasing, and not just at the big semi companies, but of course at these system companies. And I think several announcements did come out, right, including -- I think it's now public that Meta is designing a lot of silicon for AI, and of course Google, Microsoft, Amazon. So all the big hyperscaler companies, along with Nvidia, AMD, Qualcomm, and all the others; Samsung had an AI phone this year. So I mean, there is a lot of acceleration both on the semi side and on the system side. And we are involved with all the major players there, and we are glad to provide our solutions. And I do think -- and this is the other thesis we have talked about for years now, right, five years, seven years -- that the system companies will do silicon for a lot of reasons: for customization, for schedule and supply chain control, for cost benefits if there is enough scale. And with the workload of AI, if you look at some of the big hyperscaler and social media companies, they are talking about using like 20,000, 24,000 GPUs to train these new models. I mean, this is an immense amount. And then as the size of the models and the number of models increase, the number required to train these models, and of course to do inference on these models, could go much, much higher than right now. So I think we are still in the early innings in terms of system companies developing their own chips and, at the same time, working with the semi companies. So I expect that to grow, and our business with those system companies doing silicon, I would like to say, is growing faster than the Cadence average. But the good thing is the semi guys are also doing a lot of business. So I don't know if that 45% will change, because that's a combination of a lot of companies. But overall, the AI and hyperscaler companies are doing a lot more, and so are the big semi companies.
Harlan Sur
Perfect. Thank you.
Operator
I'll now turn it back over to Anirudh Devgan for closing remarks.
Anirudh Devgan
Thank you all for joining us this afternoon. It is an exciting time for Cadence, as our broad portfolio and product leadership position us well to maximize the growing opportunities in the semiconductor and systems industry. On behalf of our employees and our Board of Directors, we thank our customers, partners and investors for their continued trust and confidence in Cadence.
Operator
Thank you for participating in today's Cadence first quarter 2024 earnings conference call. This concludes today's call, and you may now disconnect.