NVIDIA Corporation

NVIDIA Corporation (NVD.DE) Q3 2017 Earnings Call Transcript

Published at 2016-11-10 21:55:35
Executives
Arnab K. Chanda - NVIDIA Corp.
Colette M. Kress - NVIDIA Corp.
Jen-Hsun Huang - NVIDIA Corp.
Analysts
Mark Lipacis - Jefferies LLC
Vivek Arya - Bank of America Merrill Lynch
Toshiya Hari - Goldman Sachs & Co.
Atif Malik - Citigroup Global Markets, Inc. (Broker)
Stephen Chin - UBS Securities LLC
Joseph Moore - Morgan Stanley & Co. LLC
Craig A. Ellis - B. Riley & Co. LLC
Mitch Steves - RBC Capital Markets LLC
Harlan Sur - JPMorgan Securities LLC
Romit J. Shah - Nomura Securities International, Inc.
Matthew D. Ramsay - Canaccord Genuity, Inc.
David M. Wong - Wells Fargo Securities LLC
Operator
Good afternoon. My name is Victoria, and I'll be your conference operator today. I'd like to welcome you to the NVIDIA Financial Results Conference Call. All lines have been placed on mute. After the speakers' remarks there will be a question-and-answer period. I will now turn the call over to Arnab Chanda, Vice President of Investor Relations at NVIDIA. You may begin your conference.

Arnab K. Chanda - NVIDIA Corp.: Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the third quarter of fiscal 2017. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. It's also being recorded. You can hear a replay by telephone until November 17, 2016. The webcast will be available for replay until next quarter's conference call to discuss Q4 financial results. The content of today's call is NVIDIA's property. It cannot be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These forward-looking statements are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Form 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All of our statements are made as of today, November 10, 2016, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures.
You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.

Colette M. Kress - NVIDIA Corp.: Thanks, Arnab. Revenue reached a record in the third quarter, exceeding $2 billion for the first time. Driving this was success in our Pascal-based gaming platform and growth in our datacenter platform, reflecting the role of NVIDIA's GPU as the engine of AI computing. Q3 revenue increased 54% from a year earlier to $2 billion and was up 40% from the previous quarter. Strong year-over-year gains were achieved across all four of our platforms: gaming, professional visualization, datacenter and automotive. The GPU business was up 53% to $1.7 billion, and the Tegra processor business increased 87% to $241 million. Let's start with our gaming platform. Gaming revenue crossed the $1 billion mark and increased 63% year-on-year to a record $1.24 billion, fueled by our Pascal-based GPUs. Demand was strong in every geographic region, across desktop and notebook, and across the full gaming audience, from the GTX 1050 to the TITAN X. GeForce gaming notebooks recorded significant gains. Our continued growth in GTX gaming GPUs reflects the unprecedented performance and efficiency gains of the Pascal architecture. It delivers seamless gameplay and richly immersive VR experiences. In Q3, for desktops, we launched the GTX 1050 and 1050 Ti, bringing eSports and VR capabilities at great value. For notebooks, we introduced the GTX 1080, 1070 and 1060, giving gamers a major leap forward in performance and efficiency in a mobile experience. The fundamentals of the gaming market remain strong. The production value of blockbuster games continues to increase.
Gamers are upgrading to higher-end GPUs to enjoy highly anticipated fall titles like Battlefield 1, Gears of War 4 and Call of Duty: Infinite Warfare, and eSports is attracting a new generation of gamers to the PC. League of Legends is played by over 100 million gamers each month, and there is now a Twitch audience of more than 300 million who follow eSports. VR and AR will redefine entertainment and gaming. A great experience requires a high-performance GPU, and we believe we're still in the early innings of these evolving markets. Pascal represents not only the biggest innovation gains we've made in a single GPU generation in a decade; it's also our best-executed product rollout. Moving to professional visualization: Quadro revenue grew 9% from a year ago to $207 million, driven by growth in the high end of the market for real-time rendering and mobile workstations. We are seeing strong customer interest in the Pascal-based P6000 among digital entertainment leaders like Pixar, Disney and ILM; architectural, engineering and construction companies like Japan's Shimizu; and automotive companies like Hyundai. Next: datacenter. Revenue nearly tripled from a year ago and was up 59% sequentially to $240 million. Growth was strong across all fronts: in AI and supercomputing for hyperscale, as well as in GRID graphics virtualization and supercomputing. GPU deep learning is revolutionizing AI and is poised to impact every industry worldwide. Hyperscale companies like Facebook, Microsoft and Baidu are using it to solve problems for their billions of consumers. Cloud GPU computing has shown explosive growth. Amazon Web Services, Microsoft Azure and AliCloud are deploying NVIDIA GPUs for AI, data analytics and HPC. AWS recently announced its new EC2 P2 instance, which scales up to 16 GPUs to accelerate a wide range of AI applications, including image and video recognition, unstructured data analytics and video transcoding. We saw strong growth in AI training.
For AI inference, we announced the Tesla P4 and P40 to serve power-efficient and high-performance workloads, respectively. Shipments began in Q3 for the DGX-1 AI supercomputer. Early users include major universities like Stanford, UC Berkeley and NYU; leading research groups such as OpenAI, the German Institute of Artificial Intelligence and the Swiss Artificial Intelligence Lab; as well as multinationals like SAP. So far this year, our GPU Technology Conference program has reached 18,000 developers and ecosystem partners, underscoring the broad enthusiasm for AI. Complementing our major spring event in Silicon Valley, we have organized GTCs in seven cities on four continents. They drew sellout audiences in Beijing, Taipei, Tokyo and Seoul, as well as Amsterdam, Melbourne and Washington, D.C., with Mumbai still to come. Along with 400 sessions and labs, we provided training in AI skills to nearly 2,000 individuals through our Deep Learning Institute instruction program. We have also begun partnering with key global companies to enable the adoption of AI. To implement AI in manufacturing, we announced a collaboration with Japan's FANUC focused on robots and automated factories. And in the transportation sector, more than 80 OEMs, Tier 1s and startups are using our GPUs for their work on self-driving cars. Our GRID graphics virtualization business continues to achieve extremely strong growth. Adoption is accelerating across a variety of industries, particularly manufacturing, automotive, engineering and education. Among customers added this quarter were Johns Hopkins University and GE Global India. And finally, in automotive, revenue increased to a record $127 million, up 61% year-over-year and up 7% sequentially, driven by premium infotainment products. NVIDIA is developing an end-to-end AI computing platform for autonomous driving.
This allows carmakers to collect and label data, train their own deep neural networks on NVIDIA GPUs in the datacenter, and then process them in the car with DRIVE PX 2. We have also been developing a cloud-to-car HD mapping system with mapping companies all over the world. Two such partnerships were announced this quarter. We're working with Baidu to create a cloud-to-car development platform with HD maps, Level 3 autonomous vehicles and automated parking. We're also partnering with TomTom to develop an AI-based cloud-to-car mapping system that enables real-time in-car localization and mapping. We've developed an integrated, scalable AI platform with capabilities ranging from automated highway driving to fully autonomous driving operation. We are extending the DRIVE PX 2 architecture to scale in performance and power consumption. It will range from DRIVE PX 2 AutoCruise, with a single SoC for self-driving on highways, up to multiple DRIVE PX 2 computers capable of enabling fully autonomous driving. We also announced a single-chip AI supercomputer called Xavier, with over 7 billion transistors. Xavier incorporates our next GPU architecture, a custom CPU design and a new computer vision accelerator. Xavier will deliver performance equivalent to today's full DRIVE PX 2 board, with its two Parker SoCs and two Pascal GPUs, while consuming only a fraction of the energy. Finally, Tesla Motors announced last month that all its factory-produced vehicles, the Model S, the Model X and the upcoming Model 3, feature a new Autopilot system powered by the NVIDIA DRIVE PX 2 platform and will be capable of fully autonomous operation via future software updates. This system delivers over 40 times the processing power of the previous technology and runs a new neural network for vision, sonar and data processing. Beyond our four platforms, our OEM and IP business was $186 million, down 4% year-on-year. Now, turning to the rest of the income statement.
GAAP gross margin for Q3 was a record 59% and non-GAAP gross margin was a record 59.2%. These reflect the strength of our GeForce gaming GPUs, the success of our platform approach and strong demand for deep learning. GAAP operating expenses were $544 million, including $66 million in stock-based compensation and other charges. Non-GAAP operating expenses were $478 million, up 11% from a year earlier. This reflects headcount-related costs for our growth initiatives as well as investments in sales and marketing. We intend to continue to invest in deep learning to capture this once-in-a-lifetime opportunity. Thus, we would expect the operating expense growth rate to be sustained over the next several quarters. GAAP operating income was $639 million. Non-GAAP operating income more than doubled to $708 million. Non-GAAP operating margin was over 35% this quarter. For fiscal year 2018, we intend to return $1.25 billion to shareholders through ongoing quarterly cash dividends and share repurchases. We also announced a 22% increase in our quarterly cash dividend, to $0.14 per share. Now, turning to the outlook for the fourth quarter of fiscal 2017: we expect revenue to be $2.1 billion, plus or minus 2%. Our GAAP and non-GAAP gross margins are expected to be 59% and 59.2%, respectively, plus or minus 50 basis points. GAAP operating expenses are expected to be $572 million. Non-GAAP operating expenses are expected to be approximately $500 million. And GAAP and non-GAAP tax rates for the fourth quarter of fiscal 2017 are both expected to be 20%, plus or minus 1%. With that, operator, I'm going to turn it back to you and see if we can take some questions.
Operator
Certainly. Your first question comes from Mark Lipacis from Jefferies.

Mark Lipacis - Jefferies LLC: Thanks for taking my questions, and congratulations on a great quarter. I think to start out, Jen-Hsun, maybe if you could help us understand: the datacenter business tripled year-over-year. What's going on in that business that's enabling that to happen? If you could maybe talk about – I don't know if it's on the technology side or the end market side. And maybe as part of that, you can help us deconstruct the revenues and what's really driving that growth? And I had a follow-up too. Thanks.

Jen-Hsun Huang - NVIDIA Corp.: Sure. A couple things. First of all, GPU computing is more important than ever. There are so many different types of applications that require GPU computing today, and it's permeating the enterprise. There are several applications that we're really driving. One of them is graphics and application virtualization. Partnering with VMware and Citrix, we've essentially taken very compute-intensive, very graphics-intensive applications, virtualized them and put them into the datacenter. The second is computational sciences: using our GPU for general-purpose scientific computing. And scientific computing, as you know, is not just for scientists; running equations and using numerics is a tool that is important to a large number of industries. And then, third, one of the most exciting things that we're doing: because of deep learning, we've really ignited a wave of AI innovation all over the world. These several applications, graphics and application virtualization, computational science and data science, have really driven our opportunity in the datacenter. The thing that made it possible, though – the thing that really made it possible – was the transformation of our company from a graphics processor company to a general-purpose processor company.
And then, on top of that, probably the more important part, is transforming from a chip company to a platform company. What made application and graphics virtualization possible is a complicated stack of software we call GRID, and you guys have heard me talk about it for several years now. Second, in the area of numerics and computational sciences: CUDA, our rich set of numerical libraries on top of CUDA, all the tools that we have invested in, the ecosystem we've worked with, and all the developers around the world who now know how to use CUDA to develop applications make that part of our business possible. And then third, our deep learning toolkit: the NVIDIA GPU deep learning toolkit has made it possible for all frameworks in the world to get GPU acceleration. And with GPU acceleration, the benefit is incredible. It's not 20%, it's not 50%; it's 20 times, 50 times. And that translates, most importantly for researchers, to the ability to gain access to insight much, much faster. Instead of months, it could be days. It's essentially like having a time machine. And secondarily, for IT managers, it translates to lower energy consumption and, most importantly, to a substantial reduction in datacenter cost, where a rack of servers with GPUs replaces an entire basketball court's worth of off-the-shelf server clusters. And so it's a pretty big deal. Great value proposition.
Operator
Your next question comes from the line of Vivek Arya with Bank of America Merrill Lynch.

Vivek Arya - Bank of America Merrill Lynch: Thanks for taking my question, and congratulations on the consistent growth and execution. Jen-Hsun, one more on the datacenter business. It has obviously grown very strongly this year. But, in the past, it has been lumpy. So, for example, when I go back to your fiscal 2015, it grew 60% to 70% year-on-year. Last year, it grew about 7%. This year it's growing over 100%. How should we think about the diversity of customers and the diversity of applications to help us forecast how the business can grow over the next one or two years?

Jen-Hsun Huang - NVIDIA Corp.: Yeah, I think embedded in your question, in fact, are many of the variables that influence our business. Especially in the beginning, several years ago when we started working on GPU computing and bringing this capability into datacenters, we relied on supercomputing centers, and then we relied on remote workstations – datacenter workstations, if you will; virtualized workstations. And then increasingly, we started seeing demand from hyperscale datacenters as they used our GPUs for deep learning to develop their networks. And now, we're starting to see datacenters take advantage of our new GPUs, the P40 and P4, to use those networks for inferencing in a large-scale way. And so, I think we're moving our datacenter business along multiple trajectories. The first trajectory is the number of applications we can run. Our GPUs now have the ability, with one architecture, to run all of the applications that I mentioned, from graphics virtualization to scientific computing to AI. Second, we used to be in supercomputing centers, but now we're in datacenters, supercomputing centers as well as hyperscale datacenters. And then third, the number of industries that we affect is growing.
It started with supercomputing. Now, we have supercomputing, we have automotive, we have oil and gas, we have energy discovery, we have the financial services industry and we have, of course, one of the largest industries in the world, consumer Internet cloud services. And so we're starting to see applications in all of those different dimensions. And I think the combination of those three things – the number of applications, the number of platforms and locations where we have success and, of course, the number of industries that we affect – should give us upward trajectory in a consistent way. But I think the mega point, really, is the size of the industries we're now able to engage. At no time in the history of our company have we ever been able to engage industries of this magnitude. And so that's the exciting part, I think, in the final analysis.
Operator
Your next question comes from the line of Toshiya Hari with Goldman Sachs.

Toshiya Hari - Goldman Sachs & Co.: Great. Thanks for taking my question, and congratulations on a very strong quarter. Jen-Hsun, you've been on the road quite a bit over the past few months, and I'm sure you've had the opportunity to connect with many of your important customers and partners. Can you maybe share with us what you learned from the multiple trips and how your view on the company's long-term growth trajectory has changed, if at all?

Jen-Hsun Huang - NVIDIA Corp.: Yeah. Thanks a lot, Toshiya. First of all, the reason why I've been on the road for almost two months solid is the request and the demand, if you will, from developers all over the world for a better understanding of GPU computing, for access to our platform and for learning about all of the various applications that GPUs can now accelerate. The demand is just really great. And we could no longer do GTC, our developer conference, just here in Silicon Valley. So, this year, we decided to take it on the road: we went to China, Taiwan, Japan and Korea, and we had one each in Australia, India, Washington, D.C. and Amsterdam for Europe. And so we pretty much covered the world with our first global developer conference. I would say the two themes that came out of it are, first, that the GPU has really reached a tipping point. It is so available everywhere; it's available in PCs, it's available from every computer company in the world, it's in the cloud, it's in the datacenter, it's in laptops. The GPU is no longer a niche component. It is a large-scale, massively available, general-purpose computing platform.
And so I think people now realize the benefits of the GPU: the incredible speedup or cost reduction, basically two sides of the same coin, that you can get with GPUs. Number two is AI; just the incredible enthusiasm around AI. And the reason for that – for everybody who already knows about AI, what I'm going to say is pretty clear – is that there's a large number of applications, problems and challenges where a numerical approach is not available. A laws-of-physics-based, equation-based approach is not available. And these problems are very complex. Oftentimes, the information is incomplete, and there are no laws of physics around it. For example, what are the laws of physics of what I look like? What are the laws of physics for recommending tonight's movie? There are no laws of physics involved. And so the question is, how do you solve those kinds of incomplete problems? There's no laws-of-physics equation that you can program into a car that causes the car to drive and drive properly. These are artificial intelligence problems. Search is an artificial intelligence problem. Recommendation is an artificial intelligence problem. And now that GPU deep learning has made it possible for machines to learn from a large amount of data and to determine the features by themselves – to compute the features to recognize – GPU deep learning has really ignited this wave of AI revolution. And so the second thing, which is just incredible enthusiasm around the world, is learning how to use GPU deep learning to solve AI-type problems, and to do so in all of the industries that we know, from healthcare to transportation to entertainment to enterprise to you name it.
Operator
Your next question comes from the line of Atif Malik with Citigroup.

Atif Malik - Citigroup Global Markets, Inc. (Broker): Hi. Thanks for taking my question, and congratulations. You mentioned that a Maxwell upgrade was about 30% of your (27:01-27:06) roughly two years (27:10) and should we be thinking about like a two-year time where (27:15-27:20).

Jen-Hsun Huang - NVIDIA Corp.: Atif, first of all, there were several places where you cut out, and this is one of those artificial intelligence problems. Because I heard incomplete information, but I'm going to infer from some of the important words that I did hear, and I'm going to apply artificial – in this case, human – intelligence to see if I can predict what it is that you were trying to ask. I think the basis of your question was that with Maxwell, during that GPU generation, we saw an upgrade cycle about every two or three years. And we had an installed base of some 60 million to 80 million gamers during that time, and several years have now gone by. And the question is, what would the upgrade cycle for Pascal be and what would it look like? There are several things that have changed that I think are important to note and that could affect a Pascal upgrade. First of all, adoption has increased: the number of units has grown and the ASP has grown. And I think the reasons for that are severalfold. One, the number of gamers in the world is growing. Effectively everybody born in the last 10 or 15 years is likely to be a gamer, so long as they have access to electricity and the Internet. And the quality of games has grown significantly.
And one of the factors that has made this production value possible is that the PC and the two game consoles, Xbox and PlayStation – and, in the near future, the Nintendo Switch – are all common in the sense that they all use modern GPUs, they all use programmable shading and they all have basically similar features. They have very different design points and different capabilities, but they have very similar architectural features. As a result, game developers can target a much larger installed base with one common code base and, in turn, can increase the production quality and production value of their games. The second factor – and one of the things you might have noticed – is that PlayStation and Xbox both recently announced 4K versions, basically the Pro versions of their game consoles. That's really exciting for the game industry, and it's really exciting for us, because what's going to happen is that the production value of games will amp up and, as a result, the adoption of higher-end GPUs will increase. So, I think that's a very important positive. To recap: the first factor is that the number of gamers is growing. The second is that game production value continues to grow. And the third is that gaming is no longer just about gaming; it is part gaming, part sports and part social. There are a lot of people who play games just so they can hang out with their other friends who are playing games. So it's a social phenomenon. And then, of course, because the quality and complexity of games such as League of Legends and StarCraft – the real-time strategy component, the hand-eye coordination, the incredible teamwork – is so great, gaming has become a sport.
And because there are so many people in gaming, because it's a fun thing to do and hard to master, and because the size of the industry is large, it has become a real sporting event. One of the things I'll predict is that one of these days, gaming will likely be the world's largest sport industry. The reason is that it's the largest industry: there are more people who play games, enjoy games and watch other people play games than there are people who play football, for example. And so I think it stands to reason that eSports will be the largest sporting industry in the world. It's just a matter of time before it happens. And so I think all of these factors have been driving both the increase in the size of the market for us as well as the ASP of our GPUs.
Operator
Your next question comes from the line of Stephen Chin with UBS.

Stephen Chin - UBS Securities LLC: Hi. Thanks for taking my questions. Jen-Hsun, first question, if I could, on your comments regarding the GRID systems; you mentioned some accelerating demand in the manufacturing and automotive verticals. Just kind of wondering if you had any thoughts on what inning you're currently in, in terms of seeing a strong ramp-up towards a full run rate for those areas, and especially for the broader corporate enterprise end-market vertical also? As a quick follow-up on the gaming side, I was wondering if you had any thoughts on whether or not there's still a big gap between the ramp-up of Pascal supply and the pent-up demand for those new products. Thank you.

Jen-Hsun Huang - NVIDIA Corp.: Sure. So, I would say that we're probably in the first at-bat of the first inning of GRID, and the reason for that is this. We've prepared ourselves. We went to spring training camp. We came up through the – they call it the farm league or something like that. I'm not really a baseball player, but I've heard some people talk about it. And so I think we're probably at the first at-bat of the first inning. The reason why I'm excited about it is that I believe in the future, applications will be virtualized in the datacenter or in the cloud. On first principles, I believe that applications will be virtualized and that you'll be able to enjoy these applications irrespective of whether you're using a PC, a Chrome notebook, a Mac or a Linux workstation. It simply won't matter. And on the other hand, I believe that in the future, applications will become increasingly GPU-accelerated. So, how do you put something in the cloud that has no GPUs, and how do you accelerate these applications that are increasingly GPU-accelerated? The answer, of course, is putting GPUs in the cloud and putting GPUs in datacenters.
And that's what GRID is all about. It's about virtualization; it's about putting GPUs in large-scale datacenters and being able to virtualize the applications so that we can enjoy them on any computer, on any device, and putting computing closer to the data. So, I think we're just in the beginning of that. And that could explain why GRID has finally taken off, after a long period of building the ecosystem, building the infrastructure, developing all the software, getting the quality of service to be just really exquisite and working with ecosystem partners. And I can surely expect to see it continue to grow at the rate we're seeing for some time. In terms of Pascal, we are still ramping. Production is fully ramped in the sense that all of our products are fully qualified, they're on the market, and they have been certified and qualified with OEMs. However, demand is still fairly high. And so we're going to continue to work hard, and our manufacturing partner TSMC is doing a great job for us. The yields are fantastic for 16-nanometer FinFET, and they're just doing a fantastic job supporting us. And so we're just going to keep running at it.
Operator
Your next question comes from the line of Joe Moore with Morgan Stanley.

Joseph Moore - Morgan Stanley & Co. LLC: Yeah. Thank you very much. Great quarter, by the way; I'm still amazed how good this is. Can you talk a little bit about the size of the inference opportunity? Obviously, you guys have done really well in training. I assume penetrating inference is reasonably early on. But can you talk about how you see GPUs competitively versus FPGAs on that side of it and how big you think that opportunity could become? Thank you.

Jen-Hsun Huang - NVIDIA Corp.: Sure. I'll start backwards and answer the FPGA question first. FPGAs are good at a lot of things, but for anything that you could do in an FPGA, if the market opportunity is large, it's always better to develop an ASIC. An FPGA is what you use when the volume is not large, or when you're not certain about the functionality you want to put into something. An FPGA is largely useful when the volume is not large, because otherwise you can build an ASIC, a full-custom chip, that can deliver not 20% more performance but 10 times better performance and better energy efficiency than you could get using FPGAs. And I think that's a well-known fact. Our strategy is very different from any of that. Our strategy is really about building a computing platform. Our GPU is not a specific-function thing anymore. It's a general-purpose parallel processor. CUDA can do molecular dynamics, fluid dynamics, partial differential equations, linear algebra and artificial intelligence; it can be used for seismic analysis and even for computer graphics. Our GPU is incredibly flexible, and it's designed specifically for parallel throughput computing.
And by combining it with the CPU, we've created a computing platform that is good at both sequential instruction processing and very high-throughput data processing. The reason why we believe that's important comes down to several things. We want to build a computing platform that is useful to a large industry. You could use it for AI, for search, for video transcoding, for energy discovery, for health, for finance, for robotics – for all these different things. So, on first principles, we're trying to build a computing platform. It's a computing architecture, not a dedicated application device. And most of the customers that we're calling on, most of the markets that we're addressing and the areas that we've highlighted are all computer users. They need to deploy a computing platform and have the benefit of being able to rapidly improve their AI networks. AI is still in the early days; it's the early days of the early days. And so GPU deep learning is going through innovations at a very fast clip. Our GPU allows people to develop new networks and deploy new networks as quickly as possible. And so I think the way to think about it is to think of our GPU as a computing platform. In terms of the market opportunity, the way I would look at it is this: there are something along the lines of 5 million to 10 million hyperscale datacenter nodes. And, as you guys have heard me say before, I think there is a new set of HPC clusters that have been added into these datacenters.
And then the next thing that's going to happen is that you're going to see GPUs being added to a lot of these 5 million to 10 million nodes, so that you can accelerate every single query; in the future, every query that comes into the datacenter will likely be an AI query. And so I think GPUs have an opportunity to see a fairly large hyperscale installed base. But beyond that, there's the enterprise market. Although a lot of computing is done in the cloud, a great deal of computing, especially the type we're talking about here that requires a lot of data, and we're a data throughput machine, tends to be done in the enterprise. And I believe a lot of the enterprise market is going to go towards AI. The type of thing we're looking for in the future is to simplify our business processes using AI, to find business intelligence or insight using AI, to optimize our supply chain using AI, to optimize our forecasting using AI, to optimize the way that we find and surprise and delight customers, digital customers or customers in digital, using AI. And so all of these parts of the business operations of large companies, I think AI can really enhance. And then the third, so we have hyperscale, then enterprise computing, and the third is something very, very new, called IoT. With IoT, we're going to have a trillion things connected to the Internet over time, and they're going to be measuring everything from vibration to sound to images to temperature to air pressure, you name it. Okay? These things are going to be all over the world, and we're going to be constantly measuring and monitoring their activity. The only thing we can imagine that can add value to that and find insight in it is really AI. Using deep learning, we could have these new types of computers.
They will likely be on premises, or near the location of the cluster of things that you have, monitoring all of these devices to prevent them from failing, or adding intelligence so that they add more value to whatever it is people have them do. So I think the size of the marketplace that we're addressing is really larger than at any time in our history. And probably the easiest way to think about it is that we're now a computing platform company. We're simply a computing platform company, our focus is GPU computing, and one of the major applications is AI.
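To make the "parallel throughput computing" idea in the answer above concrete, here is a minimal Python sketch, purely illustrative and not from the call, that emulates how a CUDA-style kernel assigns one logical thread per data element. It uses SAXPY (y = a*x + y), the classic throughput-computing example; the function names are hypothetical and the loop stands in for what a GPU would run concurrently across thousands of cores:

```python
# Minimal illustration of the SIMT (single instruction, multiple threads)
# model behind GPU throughput computing. This is a pure-Python emulation,
# not real CUDA: every "thread" runs the same kernel on a different index.

def saxpy_kernel(i, a, x, y):
    """One logical GPU thread: computes y[i] = a * x[i] + y[i]."""
    y[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    """Emulates launching n parallel threads. A real GPU would execute
    these indices concurrently; here we just iterate sequentially."""
    for i in range(n):
        kernel(i, *args)

a = 2.0
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 10.0, 10.0, 10.0]
launch(saxpy_kernel, len(x), a, x, y)
print(y)  # [12.0, 14.0, 16.0, 18.0]
```

The point of the model is that the same kernel is applied independently to every element, which is why workloads from linear algebra to deep learning map onto the same hardware.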
Operator
Your next question comes from the line of Craig Ellis with B. Riley & Company. Craig A. Ellis - B. Riley & Co. LLC: Thanks for taking the question, and congratulations on the stellar execution. Jen-Hsun, I wanted to go back to the automotive business. In the past, the company has mentioned that revenues consist of display and then, on the autopilot side, both consulting and product revenues, but I think much more intensively on the consulting side for now. But as we look ahead to Xavier, and the announcement you made intra-quarter that it's coming late next year, how should we expect the revenue mix to evolve, not just from consulting to product but from Parker towards Xavier? Jen-Hsun Huang - NVIDIA Corp.: Yeah, I don't know that I have a really granular breakdown for you, Craig, partly because I'm just not sure. But I think the dynamics are that self-driving cars are probably the single most disruptive dynamic happening in the automotive industry. It's almost impossible for me to imagine that in five years' time a reasonably capable car will not have autonomous capability at some level, and a very significant level at that. And I think what Tesla has done, by launching and having on the road in the very near future a full autonomous driving capability using AI, has sent a shock wave through the automotive industry. It's basically five years ahead. For anybody who's talking about 2021, that's just a non-starter anymore. And I think that's probably the most significant development in the automotive industry. Anybody who was talking about autonomous capabilities in 2020 and 2021 is at the moment re-evaluating in a very significant way. And so I think that, of course, will change how our business profile ultimately looks; it depends on those factors. Our autonomous vehicle strategy is relatively clear, but let me explain it anyway.
Number one, we believe that autonomous driving is not a detection problem; it's an AI computing problem. It's not just about detecting objects, it's about perception of the environment around you, it's about reasoning about what is happening and what to do, taking action based on that reasoning, and continuously learning. And that AI computing requires a fair amount of computation. Anybody who thought it would take only 1 watt or 2 watts, basically one-third the energy of a cell phone, I think that's just unfortunate, and it's not going to happen any time soon. I think people now recognize that AI computing is a very software-rich problem, a supremely exciting AI problem, and that deep learning and GPUs can add a lot of value. And it's going to happen in 2017, not in 2021. So that's number one. Number two, our strategy is to deploy one open architecture platform that car companies can work on, to leverage our software stack and create their own artificial intelligence networks. And then we would address everything from excellent highway cruising all the way to full autonomy, to trucks, to shuttles. Using one computing architecture, we can apply it to radar-based systems, radar plus cameras, or radar plus cameras plus lidars; we can use it for all kinds of sensor fusion environments. And so our strategy, I think, is really resonating well with the industry, as people now realize that we need the computation capability five years earlier, that it's not a detection problem but an AI computing problem, and that the software is really intensive. These three observations, I think, have put us in a really good position.
Operator
And your next question comes from Mitch Steves with RBC Capital Markets. Mitch Steves - RBC Capital Markets LLC: Hey, guys. Thanks for taking my question. Great quarter across the board. I did want to return to the automotive segment, because the datacenter segment has been talked about at length. With the new DRIVE PX platform potentially increasing ASPs, how do we think about ASPs for automotive going forward? And if I recall, you guys had about $30 million in backlog in terms of cars; if possible, maybe you can update there as well. Jen-Hsun Huang - NVIDIA Corp.: Let's see. Our architecture for DRIVE PX, Mitch, is scalable. You could start from one Parker SoC, and that allows you to have surround camera and to use AI for highway cruising. If you would like to have even more cameras, so that the functionality can be used more frequently in more conditions, you can always add more processors. And so we go from one to four processors. And if it's a fully autonomous driverless car, a driverless taxi, for example, you might need more than even four of our processors; you might need eight, you might need 12. And the reason is that you need to reduce the circumstances in which autopilot doesn't engage, because you don't have a driver in the car at all. So depending on the application, we'll have a different configuration, and it's scalable, ranging from a few hundred dollars to a few thousand dollars. It just depends on what configuration people are trying to deploy. Now, for a few thousand dollars, the productivity of that vehicle is incredible, as you can simply do the math: it's much more available, the cost of operations is reduced, and a few thousand dollars is almost nothing in the context of that use case.
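The scaling described in that answer, more processors as the autonomy level and the redundancy requirement grow, can be sketched as a simple configuration lookup. This is a hypothetical illustration only: the tier names and the exact SoC counts are assumptions drawn from the ranges mentioned in the answer (one, four, eight, or twelve processors), not NVIDIA specifications:

```python
# Hypothetical sketch of a scalable DRIVE PX-style configuration:
# the processor count grows with the autonomy level, because a
# driverless vehicle with no human fallback needs redundancy.
# Tier names and counts are illustrative assumptions.

def processors_needed(autonomy: str) -> int:
    tiers = {
        "highway_cruising": 1,    # one SoC: surround camera, AI cruising
        "broader_conditions": 4,  # more cameras, engaged more often
        "driverless_taxi": 8,     # no driver at all: full redundancy
        "driverless_taxi_max": 12,
    }
    return tiers[autonomy]

for tier in ("highway_cruising", "broader_conditions", "driverless_taxi"):
    print(tier, processors_needed(tier))
```

The design point is that one architecture covers the whole range, so the bill of materials, rather than the software stack, is what changes between a few-hundred-dollar and a few-thousand-dollar configuration.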
Operator
Your next question comes from the line of Harlan Sur with JPMorgan. Harlan Sur - JPMorgan Securities LLC: Good afternoon. Congratulations on the solid execution and growth. Looking at some of your cloud customers' new services offerings, you guys mentioned AWS with the EC2 P2 platform, and you have the Microsoft Azure cloud services platforms. It's interesting because they're ramping new instances primarily using your K80 accelerator platform, which means that the Maxwell-based and the recently introduced Pascal-based adoption curves are still ahead, which obviously is a great setup as it relates to continued strong growth going forward. Can you just help us understand why the design-in cycle times for these accelerators are so long? And when do you expect the adoption curve for the Maxwell-based accelerators to start to kick in with some of your cloud customers? Jen-Hsun Huang - NVIDIA Corp.: Yeah, Harlan, good question. And it's exactly the reason why we started almost five years ago working with all of these large-scale datacenters; that's what it takes. The reason is that several things have to happen. Applications have to be developed. Their hyperscale software, their datacenter-level software, has to accommodate this new computing platform. The neural networks have to be developed and trained and ready for deployment. The GPUs have to be tested against every single datacenter and every single server configuration that they have. And it takes that type of time to deploy at the scale that we're talking about. So I think that's number one. The good news is that between Kepler and Maxwell and Pascal, the software architecture is identical: even though the underlying hardware architecture has improved dramatically and the performance increases dramatically, the software layer is the same. And so the adoption rate of our future generations is going to be much, much faster, and you'll see that.
But it takes that long to integrate our software, our architecture, and our GPUs into all of the datacenters around the world. It takes a lot of work, and it takes a long time.
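The point about Kepler, Maxwell, and Pascal sharing the same software layer can be sketched in miniature. This is a hypothetical Python illustration of a stable interface over changing hardware backends; the class names mirror the GPU generations mentioned above, but the API and the relative throughput numbers are invented for the example:

```python
# Hypothetical sketch of why a stable software layer speeds adoption:
# application code targets one interface, and each new GPU generation
# plugs in as a backend without any application changes.
# All names and numbers here are illustrative, not a real API.

class Backend:
    name = "generic"
    def matmul_throughput(self) -> float:
        return 1.0  # relative performance, illustrative only

class Kepler(Backend):
    name = "kepler"

class Pascal(Backend):
    name = "pascal"
    def matmul_throughput(self) -> float:
        return 4.0  # faster hardware behind the same interface

def run_training_job(backend: Backend) -> str:
    # Application code: identical regardless of the GPU generation.
    return f"trained on {backend.name} at {backend.matmul_throughput()}x"

print(run_training_job(Kepler()))
print(run_training_job(Pascal()))
```

Because the application layer never changes, qualifying a new generation reduces to swapping the backend, which is the mechanism behind the faster adoption rate described in the answer.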
Operator
Your next question comes from the line of Romit Shah with Nomura. Romit J. Shah - Nomura Securities International, Inc.: Yes. Thank you, Jen-Hsun. I just wanted to ask regarding the Autopilot win. We know that you guys displaced Mobileye, and I was just curious if you could talk about why Tesla chose your GPU, and what you can give us in terms of the ramp and timing. And how would a ramp like this affect automotive gross margin? Jen-Hsun Huang - NVIDIA Corp.: I think there are three things that we offer today. The first is that it's not a detection problem; it's an AI computing problem. And a computer has processors, and the architecture is coherent: you can program it, you can write software for it, you can compile to it. Our GPU computing architecture has the benefit of 10 years of refinement. In fact, this year is the 10-year anniversary of our first CUDA GPU, called G80. We've been working on this for 10 years. So number one: autonomous driving is an AI computing problem, not a detection problem. Second, car companies realize that they ultimately need to deliver a service, and that the service is a network of cars which they continuously improve. It's like phones; it's like set-top boxes. You have to maintain and serve that customer, because they're interested in the service of autonomous driving, not a fixed function. Autonomous driving is always being improved, with better maps, better driving behavior, better perception capability, and better AI. And so the software component of it, and the ability for car companies to own their own software once they develop it on our platform and to continue to improve it over the air, is a real positive, to the point where it's enabling, even essential, for the future of the driving fleet.
And third is simply the performance and energy level. I don't believe it's actually possible at this moment in time to deliver an AI computing platform at the performance level required for autonomous driving, at an energy efficiency level that is possible in a car, and to put all that functionality together in a reasonable way. I believe DRIVE PX 2 is the only viable solution on the planet today. And because Tesla intended to deliver this level of capability to the world five years ahead of anybody else, we were a great partner for them. Okay? So those are probably the three reasons.
Operator
And your next question comes from the line of Matt Ramsay with Canaccord Genuity. Matthew D. Ramsay - Canaccord Genuity, Inc.: Thank you very much. Good afternoon. Jen-Hsun, I find it an interesting observation in your commentary that your company has gone from a sort of graphics accelerator company to a computing platform company, and I think that's fantastic. One of the things that I wonder, as AI and deep learning acceleration sort of standardize on your platform, is what you're seeing and hearing in the Valley about startup activity, folks who are trying to innovate around the platform that you're bringing up, both complementary to what you're doing and potentially, long term, competitive with what you're doing. I'd just love to hear your perspective on that. Thanks. Jen-Hsun Huang - NVIDIA Corp.: Yeah, Matthew, I really appreciate that. We see a large number of AI startups around the world. There's a very large number here in the United States, of course. There's quite a significant number in China, a very large number in Europe, a large number in Canada. It's pretty much a global event. The number of software companies that have now jumped onto GPU deep learning, taking advantage of the computing platform that we've taken almost seven years to build, is really quite amazing. We're tracking about 1,500. We have a program called Inception, which is our startup support program, if you will. They can get access to our early technology, our expertise, our computing platform, and all that we've learned about deep learning, which we can share with many of these startups. As they're trying to use deep learning in industries from cyber security to genomics to consumer applications, computational finance, IoT, robotics, and self-driving cars, the number of startups out there is really quite amazing. And so our deep learning platform is a real, unique advantage for them, because it's available in a PC.
So almost anybody with even a couple hundred dollars of spending money can get a startup going with an NVIDIA GPU that can do deep learning. It's available from system builders and server OEMs all over the world: HP, Dell, Cisco, IBM, and small and local system builders everywhere. And very importantly, it's available in cloud datacenters all over the world. Amazon AWS and Microsoft's Azure cloud have really fantastic implementations ready to scale out. You've got the IBM Cloud, you've got the Alibaba Cloud. So, if you have a few dollars an hour for computing, you pretty much can get a company started and use the NVIDIA platform in all of these different places. It's an incredibly productive platform because of its performance, it works with every framework in the world, and it's available basically everywhere. As a result, we've given artificial intelligence startups anywhere on the planet the ability to jump on and create something. And so the availability, the democratization, if you will, of NVIDIA GPU deep learning is really quite enabling for startups.
Operator
And your last question comes from the line of David Wong with Wells Fargo. David M. Wong - Wells Fargo Securities LLC: Thanks very much. That 60% growth in your gaming revenues was really impressive. Does this imply that there was a 60% jump in cards being sold by online retailers and retail stores, or does the growth reflect new channels through which NVIDIA gaming products are getting to customers? Jen-Hsun Huang - NVIDIA Corp.: It's largely the same channels. Our channel has been pretty stable for some time, and we have a large network. I appreciate the question; it's one of our great strengths, if you will. We have cultivated over two decades a network of partners who take the GeForce platform out to the world. You can access our GPUs, access GeForce, and be part of the GeForce PC gaming platform from literally anywhere on the planet. And so that's a real advantage, and we're really proud of them. I guess you could also say that Nintendo contributed a fair amount to that growth. As you know, Nintendo tends to stick with an architecture for a very long time, and we've worked with them now for almost two years. Several hundred engineering-years have gone into the development of this incredible game console. I really believe that when everybody sees it and enjoys it, they're going to be amazed by it. It's really like nothing they've ever played with before. And of course, the brand, their franchise, and their game content are incredible. So I think this is a relationship that will likely last two decades, and I'm super excited about it.
Operator
We have no more time for questions. Jen-Hsun Huang - NVIDIA Corp.: Well, thank you very much for joining us today. I would leave you with several thoughts. First, we're seeing growth across all of our platforms, from gaming to pro graphics to cars to datacenters. The transformation of our company from a chip company to a computing platform company is really gaining traction. You can see the results of that work in things like GameWorks, GFE, and DriveWorks, and all of the AI that goes on top of them; in our graphics virtualization remoting platform, called GRID; and in the NVIDIA GPU deep learning toolkit. These are all examples of how we've transformed the company from a chip company to a computing platform company. At no time in the history of our company have we addressed such exciting, large markets as we do today, whether it's artificial intelligence, self-driving cars, the gaming market as it continues to grow and evolve, or virtual reality. And of course, we all know now very well that GPU deep learning has ignited a wave of AI innovation all over the world. Our strategy, and the thing that we've been working on for the last seven years, is building an end-to-end AI computing platform: starting from GPUs that we have optimized, evolved, and enhanced for deep learning; to system architectures; to algorithms for deep learning; to tools necessary for developers; to frameworks and the work that we do with all of the framework developers and AI researchers around the world; to servers; to cloud datacenters; to ecosystems, working with ISVs and startups; and all the way to evangelizing and teaching people how to use deep learning to revolutionize the software that they build, which we call the Deep Learning Institute, the NVIDIA DLI. And so these are some of the high-level points that I hope you got, and I look forward to talking to you again next quarter.
Operator
This concludes today's conference call. You may now disconnect. We thank you for your participation.