NVIDIA Corporation (NVD.F) Q4 2018 Earnings Call Transcript

Published at 2018-02-08 22:34:04
Executives
Simona Jankowski - VP, IR
Jen-Hsun Huang - President & CEO
Colette Kress - EVP & CFO
Analysts
C. J. Muse - Evercore
Mark Lipacis - Jefferies
Vivek Arya - Bank of America
Stacy Rasgon - Bernstein Research
Mitch Steves - RBC
Toshiya Hari - Goldman Sachs
Blayne Curtis - Barclays
Harlan Sur - JP Morgan
Joe Moore - Morgan Stanley
Chris Rolland - Susquehanna
Craig Ellis - B. Riley
William Stein - SunTrust
Operator
My name is Victoria, and I will be your conference operator for today. Welcome to NVIDIA's financial results conference call. The phone lines have been placed on mute to prevent background noise. After the speakers' remarks, there will be a question-and-answer period. [Operator Instructions] Thank you. I'll now turn the call over to Simona Jankowski, Vice President of Investor Relations, to begin your conference.
Simona Jankowski
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the fourth quarter of fiscal 2018. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. It is also being recorded. You can hear a replay by telephone until February 16, 2018. The webcast will be available for replay up until next quarter's conference call to discuss our fiscal first quarter financial results. The contents of today's call are NVIDIA's property; they cannot be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, February 8, 2018, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO Commentary, which is posted on our website. With that, I'll turn the call over to Colette.
Colette Kress
Thanks, Simona. We had an excellent quarter and fiscal 2018, led by strong growth in our Gaming and Data Center businesses. Q4 revenue reached $2.91 billion, up 34% year-on-year, up 10% sequentially, and well above our outlook of $2.65 billion. All measures of profitability set records. They also hit important milestones: for the first time, gross margins strongly exceeded 60%, non-GAAP operating margins exceeded 40%, and net income exceeded $1 billion. Fiscal 2018 revenue was $9.71 billion, up 41%, or $2.8 billion, above the previous year. Each of our market platforms posted record full-year revenue, with Data Center growing triple digits. From a reporting segment perspective, Q4 GPU revenue grew 33% from last year to $2.46 billion. Tegra Processor revenue rose 75% to $450 million. Let's start with our Gaming business. Q4 revenue was $1.74 billion, up 29% year-on-year and up 11% sequentially, with growth across all regions. GPU demand was driven by a number of great titles during the holiday season, including PlayerUnknown's Battlegrounds (PUBG), Destiny 2, Call of Duty: WWII and Star Wars Battlefront II. PUBG continued its remarkable run, reaching almost 30 million players and recording more than 3 million concurrent players. These games deliver stunning visual effects that require strong graphics performance, which has driven a shift toward the higher end of our gaming portfolio and adoption of our Pascal architecture. eSports continues to grow, expanding the overall industry and our business. In one sign of its popularity, Activision's Overwatch League launched in January and reached 10 million viewers globally in its first week. We had a busy start to the year with a number of announcements at the annual Consumer Electronics Show in Las Vegas. We introduced NVIDIA BFGDs, Big Format Gaming Displays, in partnership with Acer, ASUS and HP.
These high-end 65-inch 4K displays enable ultra-low-latency gaming and integrate our SHIELD streaming device, offering popular apps such as Netflix, Amazon Video, YouTube and Hulu. The BFGD won best-of-show awards from various publications. We expanded the free beta of GeForce NOW beyond Macs to Windows-based PCs, and we enhanced GeForce Experience with new features, including NVIDIA Freestyle for customizing gameplay with various filters, an updated NVIDIA Ansel photo mode, and support for new titles with ShadowPlay Highlights for capturing gaming achievements. Additionally, the Nintendo Switch gaming console contributed to our growth, as it became the fastest-selling console of all time in the U.S. Strong demand in the cryptocurrency market exceeded our expectations. We met some of this demand with a dedicated board in our OEM business, and some was met with our gaming GPUs. This contributed to lower-than-historical channel inventory levels of our gaming GPUs throughout the quarter. While the overall contribution of cryptocurrency to our business remains difficult to quantify, we believe it was a higher percentage of revenue than in the prior quarter. That said, our main focus remains on our core gaming market, as cryptocurrency trends will likely remain volatile. Moving to Data Center: revenue of $606 million was up 105% year-on-year and up 20% sequentially. This excellent performance reflected strong adoption of Tesla V100 GPUs based on our Volta architecture, which began shipping in Q2 and continued to ramp in Q3 and Q4. V100 GPUs are available through every major computer maker and have been chosen by every major cloud provider to deliver AI and high-performance computing. Hyperscale and cloud customers adopting the V100 include Alibaba, Amazon Web Services, Baidu, Google, IBM, Microsoft Azure, Oracle and Samsung. We continued our leadership in the AI training market, where our GPUs remain the platform of choice for training deep learning networks.
During the quarter, Japan's Preferred Networks trained the ResNet-50 network for image classification in a record 15 minutes by using 1,024 Tesla P100 GPUs. Our newer-generation V100 delivers even higher performance, with the Volta architecture offering 10 times the deep learning performance of Pascal. We also saw growing traction in the AI inference market, where NVIDIA's platform can improve performance and efficiency by orders of magnitude over CPUs. We continue to view AI inference as a significant new opportunity for our data center GPUs. Hyperscale inference applications that run on GPUs include speech recognition, image and video analytics, recommender systems, translation, search and natural language processing. The Data Center business also benefited from strong growth in high-performance computing. The HPC community has increasingly moved to accelerated computing in recent years as Moore's Law has begun to level off. Indeed, more than 500 HPC applications are now GPU-accelerated, including all of the top 15. NVIDIA added a record 34 new GPU-accelerated systems to the latest TOP500 supercomputer list, bringing our total to 87 systems. We increased our total petaflops on the list by 28%, and we captured 14 of the top 20 slots on the Green500 list of the world's most energy-efficient supercomputers. During the quarter, we continued to support the build-out of major next-generation supercomputers, among them the U.S. Department of Energy's Summit system, expected to be the world's most powerful supercomputer when it comes online later this year. We also announced new wins such as Japan's fastest AI supercomputer, the ABCI system, which leverages more than 4,000 Tesla V100 GPUs. Importantly, we are starting to see the convergence of HPC and AI as scientists embrace AI to solve problems faster. Modern supercomputers will need to support multi-precision computation for applying deep learning together with simulation and testing.
By combining AI with HPC, supercomputers can deliver increased performance that is orders of magnitude greater in computations ranging from particle physics to drug discovery to astrophysics. We are also seeing traction for AI in a growing number of vertical industries such as transportation, energy, manufacturing, smart cities and healthcare. We announced engagements with GE Health and Nuance in medical imaging; with Baker Hughes, a GE company, in oil and gas; and with Japan's Komatsu in construction and mining. Moving to professional visualization: fourth quarter revenue grew to a record $254 million, up 13% from a year ago and up 6% sequentially, driven by demand for real-time rendering as well as emerging applications like AI and VR. These emerging applications now represent approximately 30% of pro visualization sales. We saw strength across several key industries, including defense, manufacturing, energy, healthcare and internet service providers. Among key customers, our high-end Quadro products are being used by GlaxoSmithKline for AI and by Pemex for seismic processing and visualization in oil and gas. Turning to automotive: fourth quarter revenue grew 3% year-on-year to $132 million and was down 8% sequentially. The sequential decline reflects our transition from infotainment, which is becoming commoditized, to next-generation AI cockpit systems and complete top-to-bottom self-driving vehicle platforms built on NVIDIA hardware and software. At CES, we demonstrated our leadership position in autonomous vehicles with several key milestones and new partnerships that point to AI self-driving cars moving from development to production. In a standing-room-only keynote that drew nearly 8,000 attendees, Jensen announced that DRIVE Xavier, the world's first autonomous machine processor, will be available to customers this quarter. With more than 9 billion transistors, DRIVE Xavier is the most complex system on a chip ever created.
We also announced that NVIDIA DRIVE is the world's first functionally safe AI self-driving platform, enabling automakers to create autonomous vehicles that can operate safely, a necessary ingredient for going to market. Additionally, we announced a number of collaborations at CES, including with Uber, which has been using NVIDIA technology for the AI computing system in its fleet of self-driving cars and freight trucks. We announced that ZF and Baidu are using NVIDIA DRIVE self-driving technology to create a production-ready AI autonomous vehicle platform for China, the world's largest automotive market. Production vehicles utilizing this technology, including those from Chery, are expected on the road by 2020. We also announced a partnership with Aurora, which is working to create a modular, scalable Level 4 and Level 5 self-driving hardware platform incorporating the NVIDIA DRIVE Xavier processor. Jen-Hsun Huang was joined on stage by Volkswagen CEO Herbert Diess to announce a new generation of intelligent VW vehicles that use the NVIDIA DRIVE IX, or intelligent experience, platform to create new AI-infused cockpit experiences and improve safety. Later at CES, Mercedes-Benz announced that MBUX, its new AI-based smart cockpit, uses NVIDIA graphics and AI technologies. The MBUX user experience, which includes beautiful touchscreen displays and a new voice-activated assistant, debuted last week in the Mercedes-Benz A-Class compact car and will ship this spring. And earlier this week, we announced a partnership with Continental to build AI self-driving vehicle systems, from enhanced Level 2 to Level 5, for production in 2021. There are now more than 320 companies and research institutions using the NVIDIA DRIVE platform, up 50% from a year ago, encompassing virtually every carmaker, truck maker, robo-taxi company, mapping company, sensor manufacturer and startup in the autonomous vehicle ecosystem.
With this growing momentum, we remain excited about the intermediate- to long-term opportunities for autonomous driving. Now turning to the rest of the P&L. Q4 GAAP gross margin was 61.9% and non-GAAP was 62.1%, records that reflect continued growth in our value-added platforms. GAAP operating expenses were $728 million and non-GAAP operating expenses were $607 million, up 28% and 22% year-over-year, respectively. We continue to invest in the key platforms driving our long-term growth, including gaming, AI and automotive. GAAP EPS was $1.78, up 80% from a year earlier. Some of the upside was driven by a lower-than-expected tax rate, as a result of U.S. tax reform and excess tax benefits related to stock-based compensation. Our fourth quarter GAAP effective tax rate was a benefit of 3.7%, compared with our expectation of a tax rate of 17.5%. Non-GAAP EPS was $1.72, up 52% from a year ago, reflecting a quarterly tax rate of 10.5% compared with our expectation of 17.5%. We returned $1.25 billion to shareholders in the fiscal year through a combination of quarterly dividends and share repurchases. Our quarterly cash flow from operations reached record levels at $1.36 billion, bringing our fiscal year total to a record $3.5 billion. Capital expenditures were $469 million for the fourth quarter, inclusive of $335 million associated with the purchase of our previously financed Santa Clara campus building. Let me take a moment to provide a bit more detail on the impact of U.S. corporate tax reform on the quarter and our go-forward financials. In Q4, we recorded a GAAP-only, one-time net tax benefit of $133 million, or $0.21 per diluted share. This is primarily related to provisional tax amounts for the transition tax on accumulated foreign earnings and the re-measurement of certain deferred tax assets and liabilities associated with the Tax Cuts and Jobs Act.
We previously accrued for taxes on a portion of foreign earnings in excess of the provisional tax amount recorded for the transition tax, hence the one-time benefit. For fiscal 2019, we expect our GAAP and non-GAAP tax rates to be around 12%, down from approximately 17% previously. This does not take into account the excess tax benefit from stock-based compensation, which, depending on our stock price and vesting schedule, could increase or decrease our GAAP tax rate in a given quarter. In terms of our capital allocation priorities, we continue to focus first and foremost on investing in our business, as we see significant opportunities ahead. Our lower tax rate strengthens our ability to invest in both OpEx, such as adding engineering talent, and CapEx, such as investing in supercomputers for internal AI development. In addition, we remain committed to returning cash to shareholders, with our plan remaining at $1.25 billion for fiscal 2019. With that, let me turn to the outlook for the first quarter of fiscal 2019. We expect revenue to be $2.9 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 62.7% and 63%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $770 million and $645 million, respectively. GAAP and non-GAAP OI&E are both expected to be nominal. GAAP and non-GAAP tax rates are both expected to be 12%, plus or minus 1%, excluding discrete items. For the full fiscal year 2019, we expect our operating expenses to grow at a similar pace as in Q1. Further financial details are included in the CFO Commentary and other information available on our IR website. In closing, I'd like to highlight a few upcoming events for the financial community: we will be presenting at the Goldman Sachs Technology and Internet Conference on February 13th and at the Morgan Stanley Technology, Media and Telecom Conference on February 26th.
We will also be hosting our Annual Investor Day on March 27th in San Jose, on the sidelines of our annual GPU Technology Conference, which we are very excited about. We will now open the call for questions. Operator, will you poll for questions, please.
Operator
[Operator Instructions] Your first question comes from the line of C. J. Muse from Evercore. C. J. Muse: I guess, first question: when I think about normal seasonality for gaming, that would imply data center potentially at more than $700 million into the coming quarter. And so, curious if I am thinking about that right, or whether crypto has been modeled more conservatively by you guys; I would love to hear your thoughts there. Jen-Hsun Huang: Which way is more conservative? C. J. Muse: Yes, sorry. Jen-Hsun Huang: When you say conservatively, which direction are you implying, up or down? C. J. Muse: Well, curious as to your thoughts there. Jen-Hsun Huang: We model crypto approximately flat. C. J. Muse: Okay. And then, I guess, as part of the larger question, how are you thinking about seasonality for gaming into the quarter? Jen-Hsun Huang: Well, there are a lot of dynamics going on in gaming. One dynamic, of course, is that there is fairly sizable pent-up demand going into this quarter. But I think the larger dynamics relate to the really amazing games that are out right now. PUBG is doing incredibly well, as you might know, and it's become a global phenomenon, whether it's here in the United States, in Europe, or in China and Asia. PUBG is just doing incredibly well, and we expect other developers to come up with games in a similar genre to PUBG in the near future, and I am super excited about these games. And there is, of course, Call of Duty, there is Star Wars; there are just so many great games on the market today. Overwatch and League of Legends are still doing well. There is just a catalog of great franchises out in the marketplace, the gaming market is growing, production value is going up, and that's driving increased unit sales of GPUs as well as ASPs of GPUs. And so I think that's probably the larger dynamic of gaming.
Operator
Your next question comes from the line of Mark Lipacis with Jefferies. Mark Lipacis: The first question: the checks we've done indicate that the Tensor Cores you put into Volta give it a huge advantage in neural network applications in the data center. I am wondering whether the Tensor Cores might also have a similar kind of utility in the gaming market? Jen-Hsun Huang: Yes, first of all, I appreciate you asking a Tensor Core question. It is probably the single biggest innovation we had last year in data centers. The equivalent performance of one of our GPUs would take something along the lines of 20-plus CPUs, or 10-plus nodes. And so one GPU alone would do deep learning so fast that it would take 10-plus CPU-powered server nodes to keep up with it. And then Tensor Core came along last year, and we increased the computational throughput of deep learning by another factor of eight. And so Tensor Core really illustrates the power of the GPU. It's very unlike a CPU, where the instruction set remains locked for the long term and is difficult to advance. In the case of our GPUs with CUDA, that's one of their fundamental advantages: we can continue, year in and year out, to add new facilities to them. And so Tensor Core's boost on top of the already great performance of our GPU really raised the bar last year. And as Colette said earlier, our Volta GPU has now been adopted all over the world, whether it's in China with Alibaba, Tencent, Baidu and iFlytek; here in the United States with Amazon, Facebook, Google, Microsoft, IBM and Oracle; and in Europe and Japan, the number of cloud service providers that have adopted Volta has been terrific.
And I have really appreciated the work that we did with Tensor Core and all of the updates that are now coming out from the frameworks. Tensor Core is a new instruction set and a new architecture, and deep learning developers have really jumped on it; almost every deep learning framework is being optimized to take advantage of Tensor Core. On the inference side, that's where it would play a role in video games. You could use deep learning now to synthesize and generate new art, and we have been demonstrating some of that, as you may have seen, whether it's improving the quality of textures, generating artificial characters, or animating characters, whether it's facial animation for speech or body animation. The type of work that you could do with deep learning for video games is growing. And that's where Tensor Core uptake could be a real advantage. If you take a look at the computational throughput that we have in Tensor Core compared to a non-optimized GPU or even a CPU, it's now two-plus orders of magnitude greater. And that allows us to do things like synthesize images in real time, synthesize virtual worlds, and make characters and faces, bringing a new level of virtual reality and artificial intelligence to video games.
Operator
Your next question comes from the line of Vivek Arya from Bank of America.
Vivek Arya
Jen-Hsun, just a near- and longer-term question on the data center. Near term, you have had a number of strong quarters in data center; what is the utilization of these GPUs, and how do you measure whether you're over- or under-shipping from a supply perspective? And then longer term, there seems to be a lot of money going into startups developing silicon for deep learning. Is there any advantage to their taking a clean-sheet approach, or is the GPU the most optimal answer? Like, if you were starting a new company looking at AI today, would you make another GPU, or would you make an ASIC or some other format? Any color would be helpful. Jen-Hsun Huang: Sure. In the near term, the best way to measure customers that are already using our GPUs for deep learning is repeat customers. When they come back quarter after quarter and continue to buy GPUs, that suggests that their workloads continue to increase. With existing customers that already have very deep penetration, another opportunity for us is using our GPUs for inference, and that's an untapped growth opportunity for our company that's really, really exciting; we're getting traction there. Then there are companies that are not at the absolute forefront of deep learning; with the exception of one or two or three hyperscalers, almost everybody else I'll put in this category. Their adoption of deep learning, applying deep learning to all of their applications, is still ongoing, and so I think the second wave of our customers is just showing up. And then there is a third wave of customers which are not hyperscalers; they are internet services, internet applications for consumers. They have enormous customer bases that they could apply artificial intelligence to, but they run their applications in hyperscale clouds. That third phase of growth is now really spiking, and I'm excited about that.
And so that's kind of the way to think about it. The pioneers in the first phase are the training customers. Then there is the second phase that's now ramping, and the third phase that's now ramping. And then for everybody, we have an opportunity to apply our GPUs for inference. If I had all the money in the world, for example billions and billions of dollars of R&D, I would give it to NVIDIA's GPU team, which is exactly what I do. And the reason for that is because the GPU is already inherently the world's best high-throughput computational processor. A high-throughput processor is a lot more complicated than a linear algebra unit that you instantiate from a Synopsys tool. It's not quite that easy. Sustaining computational throughput, keeping everything moving through your chip with supreme levels of energy efficiency, with all of the software that's needed to keep the data flowing, and with all of the optimizations you do with each and every one of the frameworks, the amount of complexity there is just really enormous. The networks are changing all the time. It started out with just basically CNNs, and then all kinds of versions of CNNs; then came RNNs, simple RNNs, and now there are all kinds of LSTMs and gated RNNs and all kinds of interesting networks, and they're growing. It started out with just eight layers, and now it's 152 layers, going to 1,000 layers. It started with mostly recognition, and now it's moving to synthesis with GANs, and there are so many versions of GANs. And so all of these different types of networks are really, really hard to nail down, and we're still at the beginning of AI. So the ability of our GPUs to be programmable for all of these different architectures and networks is just an enormous advantage. You never have to guess whether an NVIDIA GPU can be used for one particular network or another.
And so you can buy our GPUs and know that every single GPU that you buy gives you an opportunity to reduce the number of servers in your data center, by 22 nodes, by 10 nodes, 22 CPUs. And so the more GPUs you buy, the more money you save. So I think that capability is really quite unique. Let me give one example: the year before last, we introduced 16-bit mixed precision and we introduced 8-bit integer. This last year, we introduced Tensor Core, which increased throughput by another factor of nearly 10. Meanwhile, our GPUs get more complex, energy efficiency gets better and better every single year, and the software gets more amazing. And so it's a much harder problem than just a multiply-accumulator. Artificial intelligence is the single most complex form of software the world has ever known; that's the reason why it's taken so long to get here, and these high-performance supercomputers are an essential ingredient, an essential instrument, in advancing AI. And so I don't think it's nearly as simple as linear algebra. If I had all the money in the world, I would invest it in the team that we have.
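[Editor's note: The reduced-precision arithmetic mentioned above (16-bit mixed precision, 8-bit integer) can be illustrated with a minimal NumPy sketch. This is purely an assumed toy example, not NVIDIA's implementation: the symmetric per-tensor scale, the random weight matrix, and the 5% error bound are all hypothetical, chosen only to show why INT8 inference can cut storage 4x versus FP32 while keeping results close.]

```python
import numpy as np

def quantize_int8(w):
    """Map FP32 weights onto INT8 with a symmetric per-tensor scale."""
    scale = np.abs(w).max() / 127.0                      # largest magnitude maps to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)   # toy weight matrix
x = rng.standard_normal(256).astype(np.float32)          # toy activation vector

q, scale = quantize_int8(w)
y_fp32 = w @ x                                           # full-precision result
y_int8 = dequantize(q, scale) @ x                        # result via INT8 weights

# INT8 storage is 4x smaller than FP32, yet the output stays close.
rel_err = np.linalg.norm(y_fp32 - y_int8) / np.linalg.norm(y_fp32)
print(q.nbytes, w.nbytes, rel_err)
```

In practice, production inference stacks choose scales per channel and calibrate them on real activation data, but the storage and bandwidth arithmetic is the same.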
Operator
Your next question comes from the line of Stacy Rasgon with Bernstein Research.
Stacy Rasgon
I have a question for Colette. If I correct for the Switch revenue growth in the quarter, it implies the gaming business ex-Switch grew maybe $140 million or $150 million. In your Q3 commentary, you did not call out crypto as a driver; you are calling it out in Q4. Is it fair to say that that incremental growth is all crypto? And, I guess, going forward, you mentioned pent-up demand; normally your seasonality for gaming would be down, probably double digits. Do you think that pent-up demand is enough to reverse that normal seasonal pattern? And frankly, do you think gamers can even find GPUs at retail at this point to buy in order to satisfy that pent-up demand?
Colette Kress
So, let me comment on the first one. We did talk about our overall crypto business last quarter as well. We indicated how much we had in OEM boards, and we also indicated that there was definitely some with our GTX business. Keep in mind that it is very difficult for us to quantify down to the end customer. But yes, there was also some in our Q3, and we did comment on that. So here, we are commenting in terms of what we saw in Q4: it's up a bit from what we saw in Q3, and we do expect some again going forward. I'll let Jen-Hsun answer regarding the demand from gamers as we move forward. Jen-Hsun Huang: Yes, so one way to think about the channel demand: we typically have somewhere between six and eight weeks of inventory in the channel, and I think you would ascertain that globally right now the channel is relatively lean. We're working really hard to get GPUs down to the marketplace for the gamers, and we're doing everything we can to advise retailers and system builders to serve the gamers. And so we're doing everything we can, but I think the most important thing is we just have to catch up on supply.
Operator
Your next question comes from the line of Mitch Steves with RBC.
Mitch Steves
I just want to circle back on autos. From CES, it sounds like you're on track for calendar year 2019; is that when we see the autonomous ASP uplift? And just to clarify, the expected ASP uplift is around $1,000; is that right? Jen-Hsun Huang: Yes, it just depends on mix. I think for autonomous vehicles that still have drivers, passenger cars, branded cars, an ASP anywhere from $500 to $1,000 makes sense. For robo-taxis, which are driverless, which are not just autonomous vehicles but actually driverless vehicles, the ASP will be several thousand dollars. And in terms of timing, I think you're going to see larger and larger deployments starting this year and going through next year for sure, especially with robo-taxis. And then autonomous vehicles, cars that have autonomous driving capability, start in late 2019. You could see a lot more in 2020, and almost every car created by 2022 will have automatic driving capabilities.
Operator
Your next question comes from Toshiya Hari with Goldman Sachs.
Toshiya Hari
Jen-Hsun, I was hoping to ask a little bit about inferencing. How big was inferencing within data center in Q4 or fiscal 2018? And, more importantly, how do you expect that to trend over the next 12 to 18 months? Jen-Hsun Huang: First of all, just a comment about inference. The way that it works is you take the output of these frameworks, and the output of these frameworks is a really complex, large computational graph. When you think about these neural networks, they have millions of parameters, and millions of anything is very complex. These parameters are weights in activation layers and activation functions, and millions of them compose this computational graph. And this computational graph has all kinds of interesting and complicated layers. So you take this computational graph that comes out of each one of these frameworks, and they are all different: they're in different formats, they're in different styles, they're different architectures. And you take these computational graphs and you have to find a way to compile and optimize the graph: to rationalize all of the things that you could combine and fold, and to reduce the amount of contention across all of the resources that are in your GPU or your processor. These contentions could be on memory and register files, or data paths; it could be the fabric, it could be the framework interface, it could be the amount of memory. These computers are really complicated across all these different processors, with the interconnect between GPUs and the network connecting multiple nodes, and so you have to figure out what all these different resources are, then compile and optimize to take advantage of them and keep everything moving all the time. And so TensorRT is basically a very sophisticated optimizing graph compiler.
And it targets each one of our processors: the way it targets Xavier is different than the way it targets Volta, and different again in the way it targets our inference processors for low energy and for different precisions. All of that targeting is a little bit different. And so, first of all, TensorRT, the software of inference, is really where the magic is. The second thing that we do is optimize our GPUs for extremely high throughput and to support different precisions, because some networks can afford to run at 8-bit integer or even less, some can barely get by with 16-bit floating point, and some you really would like to keep at 32-bit floating point, so that you don't have to guess about any precision that you lost along the way. And so we created an architecture that consists of this optimizing computational graph compiler plus processors that are very high-throughput and mixed-precision. Okay, so that's kind of the background. We've been sampling our Tesla P4, which is our data center inference processor, and we're seeing a really exciting response, and this quarter we started shipping. My sense is that the inference market is probably about as large in the data center as training, and the wonderful thing is that everything you train on our processors will inference wonderfully on our processors as well. The data centers are really awakening to the observation that the more GPUs they buy for offloading inference and training, the more money they save. And the amount of money they save is not 20% or 50%; it's factors of 10. The money savings for all these data centers that are becoming increasingly capital constrained are really quite dramatic. And then the other inference opportunity for us is autonomous machines, which includes self-driving cars.
TensorRT also targets Xavier; TensorRT targets our Pegasus, the robotaxi computer. And they all have to inference incredibly efficiently, so that we can sustain real time, keep the energy level low, and keep the cost low for car companies. Okay. So I think inference is very important work for us. It is very complicated work, and we're making great progress.
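The graph optimization Jen-Hsun describes -- combining and folding operations in a computational graph before execution -- can be illustrated with a toy pass. This sketch is illustrative only, not TensorRT's actual API or data structures; it fuses each adjacent conv/relu pair in a linear graph into one node, the way an optimizing inference compiler merges layers to cut kernel launches and memory traffic.

```python
# Toy layer-fusion pass over a linear computational graph.
# Illustrative only -- real compilers like TensorRT work on rich
# graph IRs, not lists of strings.

def fuse_conv_relu(graph):
    """Collapse each adjacent ['conv', 'relu'] pair into 'conv_relu'."""
    fused = []
    i = 0
    while i < len(graph):
        if i + 1 < len(graph) and graph[i] == "conv" and graph[i + 1] == "relu":
            fused.append("conv_relu")  # one kernel instead of two passes over memory
            i += 2
        else:
            fused.append(graph[i])
            i += 1
    return fused

net = ["conv", "relu", "conv", "relu", "pool", "fc"]
print(fuse_conv_relu(net))  # -> ['conv_relu', 'conv_relu', 'pool', 'fc']
```

Fewer nodes means fewer trips through memory for the same result, which is the point of rationalizing the graph before it ever runs.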
Operator
Your next question comes from the line of Blayne Curtis with Barclays.
Blayne Curtis
Just kind of curious, as you look at the gaming business -- I have kind of lost track of seasonality, as you really have a big ramp ahead. And I'm just curious, as we think about Pascal seasonality ahead of Volta, if you could just extrapolate as you look out into April and maybe July? Jen-Hsun Huang: Well, we haven't announced anything for April or July. And so the best way to think about it is, Pascal is the best gaming platform on the planet. It is the most feature-rich, the most energy efficient, and from $99 to $1,000 you can buy the world's best GPUs, the most advanced GPUs, and when you buy Pascal you know you got the best. Seasonality is a good question, and increasingly, because gaming is a global market and because people play games every day -- it's just part of their life -- I don't think there is much seasonality, any more than in TV or books or music; whenever a new title comes out, that's when the new season starts. And so in China there are iCafes, there is Singles' Day on November 11, there is back-to-school in the United States, there is Christmas, there is Chinese New Year. Boy, there are so many seasons that it's kind of hard to say what exactly seasonality is anymore. So hopefully, over time, it becomes less of a matter. But the most important thing is that we expect Pascal to continue to be the world's best gaming platform for the foreseeable future.
Operator
Your next question comes from the line of Harlan Sur with JP Morgan.
Harlan Sur
I know somebody asked a question about inferencing for the data center market, but on inferencing embedded in edge applications: on the software and firmware side, you talked about the TensorRT framework; on the hardware side, you've got the Jetson TX platform for embedded and edge inferencing applications, things like drones and factory automation and transportation. What else is the team doing in the embedded market to capture more of these opportunities going forward? Jen-Hsun Huang: Thanks a lot, Harlan. TensorRT is really the only optimizing inference compiler in the world today, and it targets all of our platforms. We do inference in the data center, as I mentioned earlier. In the embedded world, the first embedded platform we're targeting is self-driving cars. In order to drive the car, you are basically inferencing -- trying to predict and perceive what's around you all the time -- and that's a very complicated inference matter. It could be extremely easy, like tracking the car in front of you and applying the brakes, or it could be incredibly hard, like trying to figure out whether you should stop at an intersection or not. If you look at most intersections, you can't just look at the lights to determine where to stop; there are few lines. And so, using scene understanding and using deep learning, we have the ability to recognize where to stop and where not to stop. And then for Jetson, we have a platform called Metropolis, and Metropolis is used for very large-scale smart cities, where cameras are deployed all over to keep the city safe. We've been very successful in smart cities -- just about every major smart city provider, what's called an intelligent video analytics company, almost all over the world, is using NVIDIA's platform to do inference at the edge, AI at the edge.
And then we've recently announced success with FANUC, the largest manufacturing robotics company in the world, and Komatsu, one of the largest construction equipment companies in the world, to apply AI at the edge for autonomous machines. Drones -- we have several industrial drones that are inspecting pipelines, inspecting power lines, flying over large spans of farms to figure out where to spray insecticides more accurately. There are all kinds of applications. So you're absolutely right that inference at the edge, or AI at the edge, is a very large market opportunity for us, and that's exactly why TensorRT was created.
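The energy efficiency that edge inference demands, and the different precisions mentioned earlier -- some networks can run at 8-bit integer while others need floating point -- come down to quantization. A minimal sketch of symmetric int8 quantization of a weight tensor (a single global scale factor; illustrative only, not NVIDIA's calibration scheme, which works per layer):

```python
# Minimal symmetric int8 quantization of float weights.
# Illustrative only -- production inference stacks calibrate
# scales per layer/channel from real activation data.

def quantize_int8(weights):
    """Map floats to int8 values using one symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid scale 0 for all-zero input
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; the rounding error is what the network must tolerate."""
    return [v * scale for v in q]

w = [0.5, -1.0, 0.25, 0.8]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# w_hat approximates w; int8 storage and math use a quarter of
# fp32's bits, which is where the energy and throughput win comes from
```

Networks whose accuracy survives this rounding can run at 8 bits; the ones that can't are the ones Jen-Hsun says need 16-bit or 32-bit floating point.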
Operator
Your next question comes from the line of Joe Moore with Morgan Stanley.
Joe Moore
You mentioned how lean the channel is in terms of gaming cards. There has been an observable increase in prices at retail. And I am just curious, is that a broad-based phenomenon? And is there any economic ramification for you? Or is that just sort of retailers bidding prices up in a shortage environment? Jen-Hsun Huang: We don't set prices at the end of the market, and the best way for us to solve this problem is to work on supply. The demand is great, and it's very likely the demand will remain great as we look through this quarter. And so we just have to keep working on increasing supply. Our suppliers are the world's best and largest semiconductor manufacturers, and they're responding incredibly, and I'm really grateful for everything they are doing. We've just got to catch up to that demand, which is really great.
Operator
Your next question comes from the line of Chris Rolland with Susquehanna.
Chris Rolland
Just to clarify, in terms of pent-up demand, one of your GPU competitors basically said that the constraint was memory. I just want to make sure that was correct. And then in the CFO commentary, you mentioned opportunities for the professional business, like AI and deep learning. Can you talk about that, and in what kind of applications you would use Quadro versus Volta or GeForce? Jen-Hsun Huang: We're just constrained. Obviously, we're ten times larger a GPU supplier than the competition, and so we have a lot more suppliers supporting us, a lot more distributors taking our products to market, and a lot more partners distributing our products all over the world. And so I don't know how to explain it -- the demand is just really great, and we just have to keep our nose to it and catch up to the demand. With respect to Quadro: Quadro is a workstation processor. The entire software stack is designed for all of the applications that the workstation industry uses. The quality of the rendering is of course world-class, because it's NVIDIA, but the entire software stack has been designed so that mission-critical applications, long-life industrial applications, and the enormous, gigantic manufacturing and industrial companies of the world can rely on an entire platform, which consists of processors and systems and software and middleware and all the integrations into all of the CAD tools in the world. They need to know that the supplier is going to be here and can be trusted for the entire life of the use of that product, which could be several years -- and the data that is generated from it has to be accountable for a couple of decades. You need to be able to pull up the entire design of a plane or a train or a car a couple of decades after it went into production, to make sure that it is still compliant, and if there is a question about it, that it can be pulled up. NVIDIA's entire platform was designed to be professional-class, professional-grade, long-lived.
Now, the thing that's really exciting about artificial intelligence is, we can now use AI to improve images. For example, you could fix a photograph using AI. You could fill in damaged parts of a photograph, or parts of an image that haven't been rendered yet -- you can use AI to fill in the dots, to predict the final rendering result, which we announced and demonstrated at GTC recently. You can use it to generate designs. You sketch a few strokes of what you want the car to look like, and based on the inventory, safety, and physics it has learned, it can fill in the rest of it -- design the rest of the chassis on your behalf. It's called generative design. We're going to see generative design in product design and building design and just about everything. The last, if you will, 90% of the work -- after the initial conceptual design is done -- that part of it could be highly automated through AI. And so Quadro can be used as a platform that designs as well as generatively designs. And then lastly, a lot of people are using our workstations to also train their neural networks for these generative designs. So you can train and develop your own networks and then apply them in your applications. Okay. So think of AI, in the final analysis, as the future way of developing software -- a brand-new capability where the computer can write its own software, and the software that's written is so complex and so capable that no human could write it ourselves. You can use data to teach the software to figure out how to write the software by itself. And then, when you're done developing the software, you can use it to do all kinds of stuff, including design products. And so for workstations, that's how it's used.
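The "fill in the damaged parts" idea is, at its simplest, predicting missing pixels from their surroundings. The version Jen-Hsun describes uses a trained neural network; as a toy stand-in for the concept only, here is a diffusion-style fill that repeatedly averages known neighbors into the hole (all of this is illustrative -- far simpler than any learned inpainting):

```python
# Toy image "inpainting": fill unknown pixels (None) by repeatedly
# averaging their known 4-neighbors. A learned model predicts far
# richer structure; this only shows the idea of reconstructing
# missing pixels from surrounding context.

def fill_holes(img, rounds=50):
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]  # work on a copy
    for _ in range(rounds):
        for y in range(h):
            for x in range(w):
                if img[y][x] is None:
                    nbrs = [img[y + dy][x + dx]
                            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= y + dy < h and 0 <= x + dx < w
                            and img[y + dy][x + dx] is not None]
                    if nbrs:
                        img[y][x] = sum(nbrs) / len(nbrs)
    return img

damaged = [[1.0, 1.0, 1.0],
           [1.0, None, 1.0],
           [1.0, 1.0, 1.0]]
print(fill_holes(damaged)[1][1])  # hole filled from its neighbors: 1.0
```

A neural network replaces the naive averaging with learned structure -- edges, textures, whole objects -- which is why the rendered-pixel prediction demonstrated at GTC works on content this scheme never could.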
Operator
Your next question comes from the line of Craig Ellis with B. Riley.
Craig Ellis
A lot of near-term items here on gaming, so I'll switch to longer term. Jen-Hsun, at CES I think you said that there are now 200 million GeForce users globally, and if my math is correct, that would be up about 2x over the last three to four years. So the question is, is there anything that you can see that would preclude that kind of growth over a similar period? And given the recent demand dynamics, I think we've seen that NVIDIA's direct channels have been very good sources for GPUs at the prices that you intend. So as we look ahead, should we expect any change in channel management from the Company? Jen-Hsun Huang: Yes. Thanks a lot, Craig. In the last several years, several dynamics happened at the same time, and all of them were favorable contributors to today. First of all, gaming became a global market, and China became one of the largest gaming markets in the world. Second, because the market became so big, developers could invest extraordinary amounts into the production value of a video game. They could invest a few hundred million dollars and know they are going to get a return on it. Back when the video game industry was quite small, or PC gaming was small, it was too risky for a developer to invest that much. And so now a developer can invest hundreds of millions of dollars and create something that is just completely photorealistic and immersive -- just beautiful. And when the production value goes up, the GPU technology that's needed to run it well goes up. It's very different than music, very different than watching movies: everything in video games is synthesized in real time. And so when the production value goes up, the ASP of the technology has to go up. And then lastly, the size of the market -- people wonder how big the video game market is going to be, and I have always believed that the video game market is going to be literally everyone.
In 10 years' time, 15 years' time, there will be another billion people on earth, and those people are going to be gamers; we see more and more gamers. And not to mention that almost every single sport could be a virtual reality sport. Video games span every sport -- esports can be any sport, every sport, every type of sport. And so I think when you consider this, the opportunity for video games is going to be quite large, and that's essentially what we're seeing.
Operator
Your next question comes from the line of William Stein with SunTrust.
William Stein
I am hoping we can touch on automotive a little bit more. In particular, I think in the past you've talked about expecting sort of a lull in revenue growth in this market until roughly the 2020 timeframe, when autonomous driving kicks in in a more meaningful way. But of course, you have the AI co-pilot that seems to be potentially ramping sooner, and you have at least one marquee customer that is ramping now, though I guess the volumes aren't quite that large on the autonomous driving side. So any guidance as to when we might see these two factors start to accelerate revenue in that end market? Jen-Hsun Huang: Yes, thanks a lot, Will. I wish I had more precision for you, but here are some of the dynamics that I believe in. I believe that autonomous capability -- autonomous driving -- is the single greatest dynamic next to EVs in the automotive industry. And transportation is a $10 trillion industry between cars and shuttles and buses and delivery vehicles. It is just an extraordinary market, and everything that moves in the future will be autonomous, that's for sure -- either fully autonomous or partly autonomous. The size of this marketplace is quite large. In the near term, our path to that future -- which I believe starts in 2019 and 2020, but starts very strongly in 2022 -- has, in our case, several elements. The first element is the work for all these companies, whether they are Tier 1s or startups or OEMs or taxi companies or ride-hailing companies or trucking companies or shuttle companies or delivery companies. In order to create their autonomous driving capability, the first thing they have to do is train their neural network. And we created a platform we call the NVIDIA DGX that allows everybody to train a neural network as quickly as possible. So first, the development of the AI requires GPUs, and we benefit first from that.
And the second, which will start this year and next year, is development platforms for the cars themselves, for the vehicles themselves. And finally, Xavier -- we have first silicon of Xavier here, the most complex SoC the world has ever made. We're super excited about the state of Xavier, and we're going to be sampling it in Q1. So now we will be able to help everybody create development systems, and there will be thousands and tens of thousands of quite expensive development systems, based on Xavier and based on Pegasus, that the world is going to need. So that's the second element. The third element in the near term will be development agreements. Each one of these projects is engineering intensive, and there is a development agreement that goes along with it. And so these three components are the near term, and then hopefully, starting from 2019 going forward, and very strongly from 2022 and beyond, the actual car revenues and economics will show up. I appreciate the question, and I think this is our last question. Well, we had a record quarter wrapping up a record year. We have strong momentum in our gaming, AI, data center, and self-driving car businesses. It's great to see adoption of NVIDIA's GPU computing platform increasing in so many industries. We accomplished a great deal this last year, and we have big plans for the coming year. Next month, the brightest minds in AI and the scientific world will come together at our GPU Technology Conference in San Jose. GTC has grown tenfold in the last five years; this year we expect more than 8,000 attendees. GTC is the place to be if you're an AI researcher or working in any field of science where computing is your essential instrument. There will be over 500 hours of talks on recent breakthroughs and discoveries by leaders in the field from Google, Amazon, Facebook, Microsoft, and many others.
Developers from industries ranging from healthcare to transportation to manufacturing and entertainment will come together and share the state of the art in AI. This is going to be a big GTC. I hope to see all of you there.
Operator
This concludes today's conference call. You may now disconnect. Thank you for your participation.