Advanced Micro Devices, Inc. (NASDAQ:AMD) Deutsche Bank 2023 Technology Conference August 31, 2023 11:00 AM ET
Company Participants
Jean Hu – EVP and CFO
Conference Call Participants
Ross Seymore – Deutsche Bank
Ross Seymore
Good morning, everyone. Welcome to the second day of the Deutsche Bank Technology Conference. I’m Ross Seymore, semiconductor analyst here in the US for Deutsche Bank. We’re very pleased to kick it off this morning, day two with Jean Hu, the EVP and CFO of AMD. So Jean, welcome to Dana Point. Thank you very much for coming.
Jean Hu
Yes. Thank you. Thank you so much for having us.
Ross Seymore
So lots to talk about, a little thing called AI, we’ll get into in a minute. But I want to start off with some — just kind of updated views on the macro market right now. You guys have had significant ups and downs as has everybody on the PC side as well as the data center side of things. So just at a high level, where are we in that kind of corrective process?
Jean Hu
Yes, thanks for the question. So let me share with you what we are seeing in the end markets we participate in, starting with the PC, or client for us. As we all know, the whole PC industry has gone through probably the worst down cycle of the last three decades. Q1 was definitely the bottom; we definitely see that. And if you look at our business, our Q2 client segment actually grew 35% sequentially. We also guided very significant double digit growth in Q3. The second half is typically seasonally better for PCs. And next year, you do have Windows 10 end of life and potentially the AI applications that will help the refresh cycle. So we are quite optimistic about the PC and the client business; the inventory and the sell-through have normalized. When you look at the data center, the demand environment has been quite mixed. Some of the cloud customers were going through inventory digestion post the pandemic, and we also see CapEx optimization with other customers. Enterprise, of course, continues to be cautious. So the demand environment is mixed. But that being said, given the backdrop, our business has been performing really well. In Q2, our Genoa family, which is the EPYC Gen 4 family, almost doubled sequentially on the revenue side. And we also guided Q3 data center revenue to grow double digits sequentially, largely driven by our own product cycle.
If you look at the EPYC Gen 4 family, not only is Genoa ramping quickly, but Bergamo is also starting to ramp in the second half. We also have Genoa-X, which is very focused on technical workloads. And Siena is coming, which is focused on the telco and cost-sensitive segments. So when you look at the lineup we have, we definitely see our server business continuing to grow and continuing to gain share in the second half. Of course, we have to talk about AI. When you look at the accelerator and GPU market, with the surge of generative AI, it's a really exciting market for us. We have been investing in MI250 and MI300, and we continue to make significant progress on both the hardware and software side. So in the second half, in Q4, we are on track for MI300, both A and X, to launch and ramp. So for us, when we look at the market opportunities, it's actually quite exciting, because the PC is coming out of the down cycle, in the server market we have our unique product cycle driving our revenue growth, and next year we should get more meaningful opportunities on the AI side.
Ross Seymore
Why don’t we stick with the — well, before I ask this question, just same thing as yesterday. If you have a question, raise your hand. We’re webcast, so wait for the mic to come to you, and if I don’t see you, just wave your hand harder. On the AI side of things, I think in your last call, you talked about engagements being up kind of 7x sequentially. Talk about what an engagement means, people being interested for GPUs because they can’t get enough from one of your competitors is fine, but I’m not sure that, that really translates into business. So when you talk about engagements, define a little bit of that for us?
Jean Hu
Yes. I think during the last earnings call, Lisa mentioned that the engagement momentum just surged tremendously, and we continue to see that momentum. The engagement is very broad. First, on the customer side, we are engaged with cloud customers, enterprise customers and AI startups. And it's at all different stages, from qualification to initial discussions, a very broad spectrum. I think the key is that when you take a step back to look at this opportunity, we are at the very, very early beginning of generative AI. And the way AMD thinks about it is really end to end. If you look at our product portfolio, we have GPUs, CPUs and adaptive compute, with AI engines everywhere. But fundamentally, with generative AI, what we are all looking at is the opportunity to improve productivity and the experience, and potentially generate $1 trillion of GDP. And when you think about that, it really fits how we think about AI. AI eventually is going to be everywhere; it's end-to-end, it's going to be very pervasive. So today, we are very much focused on the GPU side of the engagement because it's early, but at the same time, we also have our AI engine on the client side and the edge side. We do think eventually different workloads will need different compute, different AI capabilities. So the engagement you're asking about is very broad. It's not only on the GPU side; we are also working with companies on the client side and the edge side. That's why it's so exciting. It's so early, but if you look at the next decade, this generative AI opportunity will drive another significant compute cycle. For AMD, with our focus on high performance compute, it's a perfect fit.
Ross Seymore
So when people talk about MI300 — well, when they talk about AI, all they think of is MI300. You guys, as you just said, have a much more holistic approach to it. But if I do bring it back to the MI300, and I know there’s a couple of different — the A and the X families of that. What’s the hardware differentiation that AMD believes it brings versus your primary competitor? And I’ll follow up with the software side afterwards.
Jean Hu
I think if you look at how AMD approaches customers, technology and solutions, Lisa and Mark Papermaster have always been focused on what the customer needs. So MI300A was designed for the high performance computing segment. What those customers need is really complex: GPU, CPU and memory bandwidth all together. So MI300A is tailored to HPC. But as you know, AMD has been leading the technology and the innovation, in particular the modular approach to design and 3D packaging along with the process technology. So what we have been able to do is quickly spin off MI300X, which is more tailored to generative AI and the cloud customers. The key advantages, one of them is definitely memory bandwidth and capacity. When you're talking about training and inference, that's probably one of the most important things to improve the total cost of ownership and the efficiency. So from the hardware side, we feel really good about how competitive MI300 is versus whatever is in the marketplace right now.
Ross Seymore
So what about the software side? I think that’s where people believe there might be a bigger moat because of CUDA having been around for 15 years or whatever it is now. And you guys are taking a different approach with a much more open angle to it. So talk about how you’re faring there and catching up on that front.
Jean Hu
Yes, appreciate the question. I think it's very important to embrace the open source ecosystem. If you think about AMD's software investment, ROCm has gone through multiple generations, so the investment has been going on for a long time. Initially, ROCm was more focused on the HPC segment because the product was more tailored to that segment. And during the last several years, we have been working closely with the open source ecosystem to advance ROCm quickly, continually increasing the capabilities, the functions and the features. So ROCm 5.6, which we just released, actually supports a lot of the open source frameworks, right, PyTorch, Triton, TensorFlow and others. With those capabilities and the partnerships we have with our customers today, we are constantly advancing the software quickly. We have definitely gotten to the point where we feel pretty good about some of the general workloads; we can basically move the software, the model, to our hardware quickly. So literally every week, you see significant progress.
Ross Seymore
I think if we get a little bit to some numbers around things, at least a general idea, the fourth quarter is really when the ramp is going to start. You mentioned earlier that it will be the supercomputer side as well as some of the CSP side of things. Talk a little bit about how the trajectory goes on the supercomputer side with El Capitan. I know you did Frontier a couple of years ago, so we have an idea.
Jean Hu
Yes, supercomputer, as a lot of you know, the ramp is very lumpy. It tends to be one quarter with primarily all the volume, and then it will come down. So in Q4, we definitely feel very good about tracking on the MI300A ramp with El Capitan; that's when you will see the majority of the volume, and then Q1 probably a little bit. I think we also mentioned we do see other AI customers ramping in Q4. So the way to think about it is El Capitan will definitely come down, but all the other customers we continue to qualify and work with as partners, those are going to continue to ramp. In the second half of next year, you will probably see much more meaningful revenue from MI300X, which is more focused on the cloud, enterprise and other AI customers.
Ross Seymore
And we were talking out in the hallway before about kind of making sure people’s expectations were correct that it doesn’t just happen overnight, maybe outside of the supercomputer side. So to the extent you have engagements now, I assume you’re going to be launching the chip, like you said, in the fourth quarter or third quarter. And by the way, are you guys going to have a launch event at some point like you did with the Bergamo in June?
Jean Hu
Yes, in Q4.
Ross Seymore
Okay. So you’ll have that launch event then. Is the duration from that launch event to the second half, is that typical, is that because you just — tons of validation is necessary for each CSP?
Jean Hu
Yes, I think as we discussed earlier, our customer engagements are quite broad, and we are at the very, very early stage of what is a multi-year, or even decade-long, evolution and product cycle. The qualification of MI300 is quite technical, so it will take time, and it's also customer, model and workload specific. So it all depends on the different customers. Really large language models and bigger clusters probably take more time, and smaller customers may take less time. So it's a broad spectrum of qualification processes. But when you think about the opportunities, the engagement, the momentum we have, the different stages the customers are at in ramping, and the resources we're building to support the ramp, we are really at the very beginning of this journey. And the exciting thing is, as you build up all the customer qualifications, the revenue ramp will just come in stages, right?
Ross Seymore
The last question on AI is on the supply side. Demand is obviously off the charts for everybody. But back-end packaging, front-end wafers, those sorts of things, how is AMD kind of getting things ready, positioning itself for the ramp, any sort of limitations on the supply side?
Jean Hu
Yes. One of the things I have learned since I joined AMD is the team's focus not only on technology and the product road map, but also on execution, the supply chain side, and working closely with our supply chain partners. So we do feel pretty good about working with the partners to make sure we have the HBM memory and the [indiscernible] capacity going forward, and executing on that trajectory. Definitely, there are constraints here and there, but overall, we feel really good about securing ample capacity for next year.
Ross Seymore
Great. So why don't we just move on to the not directly AI side of your data center business, the server CPUs. You talked about the back half of this year being much stronger, but it sounds like that's mainly due to your own product cycles, whether it be the Genoa ramp or the Bergamo side of things and even Siena ramping. Talk a little bit about where you see actual demand. Is it AMD gaining share, is it that ASPs for all those products are higher than their predecessors? What's driving that significant growth in the second half?
Jean Hu
Yes, that's a great question. If you look at our server business, since 2017 we have really established a multi-generation road map, and with each generation, the key driver is to continue to improve the TCO for our customers. When you look at the Genoa generation, the whole platform, not only Genoa but Genoa-X, Bergamo and Siena, all of them provide the best TCO, whether it's performance per watt or performance per dollar, for our customers. So what we are seeing is that Genoa has been deployed across all major cloud customers and also OEMs and ODMs. That is the major driver. In Q2, it almost doubled sequentially. In the second half, Genoa continues to ramp with all the cloud customers and increasingly more enterprise customers. And with Bergamo, Meta is adopting it across Facebook, WhatsApp and even Instagram. So that absolutely helps us with the momentum. I think fundamentally, if you look at today's data center, cloud or enterprise, one of the key things is operating cost, how they can be more efficient, more power efficient, and save money. So what we're offering is really that opportunity for the customer to save money. That's the product cycle we are driving, not only this generation but the next generation.
Ross Seymore
And what’s your thoughts on the CPU crowding out or getting crowded out by the GPU argument? In the first half of the year, that kind of made sense. In the second half of the year, obviously, big GPU growth at one of your competitors, but you’re doing great in the second half on your CPU side of things. So what’s your thought on that dynamic?
Jean Hu
Yes, there's definitely some optimization we have been seeing with the cloud customers. And even in the enterprise, everybody is trying to figure out how to invest in AI, so people may be cautious about the CPU. But fundamentally, our belief is that different compute engines fit best for different workloads. In the end, it's all about the TCO. A lot of workloads today, surfing the web, Facebook, Instagram, all those things, the CPU can support very efficiently. So our view is those workloads, with the enormous base of software code written for x86 CPUs, will continue. Right now, there may be some optimization, but in the long term, it will continue. And more importantly, even if you look at the GPUs, you still need the [indiscernible] CPU; it plays a critical role in managing the GPUs and the cluster. So our view is definitely that in the longer term you will see compute everywhere. The CPU can do inference too, can do small recommendation models too. Customers are going to focus on what's best from a TCO perspective, replacement cost, whatever. In the end, it all goes back to economics, right, what makes the most sense from a semiconductor hardware perspective. And AMD has always focused on just that: what the customer really needs and what economics they can drive from the solution we provide.
Ross Seymore
The last question on the data center side I want to talk about is a little bit on competition. We’ve seen and you and I have talked about some of the competitive pressures on the client side of things over the last year in the midst of a downturn admittedly. But on the data center side, AMD has done a superb job gaining tons of share, performance leadership, cost leadership, all of those sorts of things. But there’s other architectures that are coming, there’s internal ASICs that are being built. And then your largest CPU competitor is also accelerating their road map. So how do you see the competitive environment as you look for next year or the year after?
Jean Hu
Yes, I think first and foremost, we have always assumed the market is very competitive. If you look at how we push the road map and the technology evolution, there are multiple elements to how we think about the competitive advantage and the moat we're building. First is design innovation; Mark Papermaster and Lisa are very focused on that. Secondly, packaging, because we know Moore's Law is slowing. For a long, long time since they joined AMD, they have been focusing on 3D packaging, really driving leadership in packaging with a lot of innovations there. And third, working with partners like TSMC; there has been a lot of process co-innovation between AMD and TSMC to drive our server road map to where it is today. So it's that combination, and also flawless execution. Every two years, we have the cadence to come up with a new generation. And if you look at Milan, which was introduced and has been in production since 2021, Milan today is still very competitive compared to Sapphire Rapids. We still drive tremendous adoption, even with AI, right? The [Indiscernible] is still a very, very cost efficient solution for that. Then Genoa has unmatched performance. So we do think we'll continue to drive the innovation and the road map evolution with execution. Next year, we'll have [indiscernible]. So we feel pretty good about where we are, and especially the fundamental drive is to provide customers with the best TCO.
Ross Seymore
So the last question on that topic is, I know pricing matters less in these infrastructure cloud markets than it does on the client or consumer side of things. However, once performance is relatively equivalent amongst peers, then TCO can be code for pricing pressure. Do you see that happening? And we can talk about gross margin a little bit later. But how do you see the pricing environment on the data center side of things?
Jean Hu
Yes. So it goes back to AMD's strategy, which is to have a platform of different technologies. If you look at the Genoa family, we have Genoa, which is really for the most performance-demanding workloads customers need. Then Bergamo, with 128 cores, the density is really high; it's for cloud native workloads. Then Siena is probably the most cost efficient and works for telco, the small and medium business side, the edge side. Then Genoa-X is for technical workloads, which is probably the most expensive. But the key thing is to give customers different products for different workloads so they can achieve the best economics. Even when we look at Milan today, if a customer feels it's sufficient to use Milan, they can get the best cost advantage there. So the idea is to continue to deliver more value to customers and add more features, but at the same time, our pricing needs to reflect the value we add, the IP and the technology we provide to customers.
Ross Seymore
Great. So it’s been a lot of time on data center side of things. I think that’s appropriate, given your story. The other segments, I’m going to hop into and other topics, it will be a little more rapid-fire. And again, if people have questions, just raise your hand. So on the client side of things. Is AMD shipping back to demand at this point?
Jean Hu
Yes. I think on the client side, we went through a very significant inventory adjustment. We had been selling in much lower than sell-through so the downstream inventory digestion could work through. Right now, it's really normalized; the sell-in and sell-through are quite balanced. And I think in the second half, it's probably going to be the same balance of sell-in and sell-through, continuing to get the inventory out.
Ross Seymore
Do you believe the PC market will be a growth market for you because of your ability to gain share, or is that kind of stabilized in a market that might be 300 million units or somewhere around there?
Jean Hu
We do think the PC, or our client business, is a growing business for us. Fundamentally, as we get through this down cycle, we do see some potential tailwinds from Windows 10 end of life and also the AI capabilities everybody is driving. That refresh cycle will probably be better next year compared to this year. And for us, we actually have a set of very competitive products. Our Ryzen 7040 actually includes the AI engine, and if you look at the volume, the traction and the revenue generated from this particular product line, it has been quite impressive. So for us, when we look forward, not only is the market more stabilized, but we have a very competitive product portfolio. And we are focusing more on the premium side of the PC market, right? We don't have anything with Chromebooks, all those things. So we do think we can continue to drive share gains going forward with our competitive product portfolio.
Ross Seymore
And keeping on with the rapid fire, the embedded side, a little bit different in its cyclical correction timing, great business. It's grown really nicely this year, but it seems like it's finally paying the cyclical price a little bit. Where are we in that corrective process and when do you think that can return to growth?
Jean Hu
Yes. And Ross, you and I talk about this often: this down cycle in semiconductors is so different, right? It's almost staged, one cycle after the other, with every vertical literally going through different cycles at different times. For us, PCs and servers are actually already coming out of the cycle, but embedded is literally just starting the cycle, in the sense that lead times have normalized. It used to be 53 weeks or longer, but right now it's more normalized. So what you are seeing is definitely customers ordering more to their demand, versus in the past, when they tended to order way ahead of time. The delinquencies that we used to have are all gone, right? Right now, we're really looking at some customers adjusting their inventory positions. So you do see Q3 guided down sequentially, but Q4 is going to be flattish versus Q3. Typically, an inventory adjustment takes a couple of quarters, but in the second half of next year, we do think we'll see the Embedded business picking up. More importantly, if we look at our Embedded business, aerospace and defense is doing really well, demand continues to be strong, and test and emulation and the healthcare side continue to be really strong. We also have the AI side of things incorporated into our Embedded business. The other thing we see is that between Xilinx and AMD's embedded processor business, there are tremendous synergies. We are getting more design wins with Ryzen or [Indiscernible] in cybersecurity and in our different networking boxes. So that will help us drive longer-term revenue growth for our Embedded business.
Ross Seymore
The last segment, and I'll make it one quick question and hopefully a quick answer, because I want to make sure to wrap up by talking about gross margin, but the gaming business. Semi-Custom has been a huge, great business for you all, maybe not as much on the gross margin side but definitely on the operating margin side. But that cycle is getting a little bit dated. Will gaming ever, at least in the next couple of years, grow again, or is it kind of plateauing and just staying in the mature cycle stage?
Jean Hu
Yes, it has been a great cycle, probably one of the best gaming cycles if you look back historically. And right now, there are a lot of new title releases, so even now the demand is quite good. But as you said, this year is year four and next year will be year five of the cycle; it's very normal, it's maturing. We do expect the [Semi-Custom] gaming business to decline next year just because it's year five. In the longer term, we continue to drive the next-generation road map to make sure the gaming graphics side continues to grow. So overall, it's a great business going through its cycles. Once this cycle is done, you will typically see it ramping up again over the next cycle.
Ross Seymore
Got it. So in the last five minutes we have, let’s talk a little bit about the margin side. On the gross margin side, AMD has done a superb job of structurally raising the gross margin of the company up into low to mid-50s and a target higher than that. So in the near term, it’s a little bit lower than that. We’ve talked a little bit about the pressures on your client side of things, and you break out at least the operating margin on that. Has that pressure started to lessen? And where do you think that can normalize versus the kind of low 50s I think you peaked out at the first half of last year?
Jean Hu
Yes. The PC client business is quite cyclical, and the headwind it created during the down cycle was quite significant. That's why our client segment gross margin was a headwind for the overall AMD business. Right now, what we are seeing is that once you get through the normalization of channel inventory, gross margin comes back for the client business. The way to think about it is, in the second half we actually guided Q3 gross margin at 51%, and we all know the Embedded business is coming down significantly, so the mix is actually less favorable, but we were still able to guide gross margin up 100 basis points sequentially. That's largely because the client business is stabilizing and coming back. We do think that in the longer term the client business will continue to improve gross margin. It is a very competitive environment, no doubt, but the key thing for us is to continue to drive leadership and focus on the premium part of the market so we can continue to improve the client side of gross margin. Longer term, for AMD, the data center, as we discussed, is the largest incremental revenue driver for the company. So we do think over the longer term, we should continue to progress our gross margin to reflect the value we provide to our customers.
Ross Seymore
So if you look at — let’s leave gaming out for now, but your other three segments, I wouldn’t think Embedded’s gross margin would really change too much. It’s been very, very steady over the years. Client, I think you just talked about. But on the data center side of things, how do you see that moving as the AI side comes up, more cloud specific concentration maybe going into enterprise, some of the data center GPUs versus CPUs. There’s just so many different moving parts. Should we think as investors that the gross margin in your data center segment is higher, lower, the same over the next few years?
Jean Hu
Yes, that's a great question. You're absolutely right, we have a very diversified product portfolio within our data center segment. In general, we do expect data center gross margin to continue to improve. If you look at the investment we are making as a company, we continue to pivot the R&D investment into the areas where we get the highest return on investment, which we believe is the data center. When we invest more there, you typically end up providing more features and more capabilities for customers. So at a very high level, that's what we believe will continue to drive it. Of course, at the next level down, there are a lot of puts and takes, right? On the server side, we'll continue to see gross margin improve. On the GPU side, because it's a new product introduction, typically at the beginning you really need to mature the product line, so in the near term the gross margin may not be where it will be at the mature stage. So the tailwind in the longer term is to introduce the product, ramp it, and eventually mature the product line to the point where the gross margin is much healthier. So I do think in the longer term, not only will we continue to improve the server side of gross margin, but the GPU side of gross margin will also continue to go up.
Ross Seymore
The last question, the 30 seconds we have, the OpEx side of things, I think your target is 23% to 24% of revenues, if I recall. Does that need to change given the AI investments or can AMD keep within that band?
Jean Hu
I think that's definitely our target, and we continue to think it's the right target. You know the model: once revenue grows, the leverage is quite significant. So when we look at the opportunities, especially on the AI GPU accelerator side, where the revenue today is small within our mix, the ramp of the GPU business and revenue will help us leverage our model much more significantly. 23% to 24% is really the right target.
Ross Seymore
Great. Well, we are out of time. But, Jean, thank you very much for joining and kicking us off this morning.
Jean Hu
Okay. Thank you. Thank you, everyone.