How Nvidia Won AI

Published 2022-02-20
When we last left Nvidia, the company had emerged victorious in the brutal graphics card Battle Royale throughout the 1990s.

Very impressive. But as the company entered the 2000s, it embarked on a journey to do more, moving toward an entirely new kind of microprocessor - and the multi-billion-dollar market it would unlock.

In this video, we are going to look at how Nvidia turned the humble graphics card into a platform that dominates one of tech’s most important fields: Artificial Intelligence.

Links:
- The Asianometry Newsletter: asianometry.com/
- Patreon: www.patreon.com/Asianometry
- The Podcast: anchor.fm/asianometry
- Twitter: twitter.com/asianometry

All Comments (21)
  • @okemeko
    From what a professor at my university told me, they didn't only "work closely" with researchers. They straight up gifted cards to research centers in some cases. This way, not only did Nvidia provide a good platform, but all the software that came out of it was naturally written for CUDA
  • @0MoTheG
    CUDA was originally not targeted at machine learning or deep neural networks, but molecular dynamics, fluid dynamics, financial monte carlo, financial pattern search, MRI reconstruction, deconvolution and very large systems of linear equations in general. A.I. is a recent addition.
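    To give a concrete feel for the kind of general-purpose work this comment describes, here is a minimal CUDA sketch (not from the video or the comment): a SAXPY kernel, y = a*x + y, the sort of building block behind large linear-algebra workloads. The kernel name, sizes, and launch configuration are illustrative assumptions.

        #include <cuda_runtime.h>
        #include <cstdio>

        // Each thread updates one element: y[i] = a * x[i] + y[i]
        __global__ void saxpy(int n, float a, const float *x, float *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];
        }

        int main() {
            const int n = 1 << 20;
            float *x, *y;
            // Unified memory keeps the sketch short; real code often uses cudaMalloc + cudaMemcpy
            cudaMallocManaged(&x, n * sizeof(float));
            cudaMallocManaged(&y, n * sizeof(float));
            for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

            saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // 4096 blocks of 256 threads
            cudaDeviceSynchronize();

            printf("y[0] = %f\n", y[0]);  // expect 5.0
            cudaFree(x);
            cudaFree(y);
            return 0;
        }
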
  • @e2rqey
    Nvidia does a really good job of identifying burgeoning new industries where their products could be leveraged, then integrating themselves into the industry so early on that, as the industry matures, Nvidia's products become essential to its functioning. I remember visiting a certain self-driving car company in California about 4 years ago and seeing a literal wall of Nvidia 1080 Ti GPUs. They had at least a couple hundred of them. Apparently they had all been gifted to them by Nvidia. I've heard Nvidia will also send their engineers out to work with companies and help them optimize their software or whatever they are doing, to get the maximum performance out of the GPUs for whatever purpose they are using them for.
  • @scottfranco1962
    Nvidia is a real success story. The only blemish (as illustrated by Linus Torvalds famously giving them the middle finger) is their completely proprietary stance on development. Imagine if Microsoft had arranged things so that only their C/C# compilers could be used to develop programs for Windows. CUDA is a closed shop, as are the graphics drivers for Nvidia's cards.
  • Although both Google and Ali Cloud have developed their own NPUs for AI acceleration, they are still buying large quantities of Nvidia Delta HGX GPUs as their AI development platform. Programming for CUDA is far easier than for their own proprietary hardware and SDKs. Nvidia really put a lot of effort into the CUDA SDK and made it the industry standard.
  • @deusexaethera
    Ahh, the time-honored winning formula: 1) Make a good product. 2) Get it to market quickly. 3) Don't crush people who tinker with it and find new uses for it.
  • @mimimimeow
    I think it's worth mentioning that a lot of recent advances in GPU computing (Turing, Ampere, RDNA, mesh shaders, DX12U) can be traced to the PlayStation 2's programmable VU0+VU1 architecture and the PlayStation 3's Cell SPUs. Researchers did crazy stuff with these, like real-time ray tracing, distributed supercomputing for disease-mechanism research, and the USAF's space monitoring. The PS3 F@H program reached 8 petaflops at one point! Sony and Toshiba would've been like Nvidia today if they had provided proper dev support to make use of these chips' capabilities and kept developing them, rather than just throwing the chips at game devs and saying "deal with it". I feel like Sony concentrated too much on selling gaming systems and didn't realize what monsters they had actually created. Nvidia won by actually providing a good dev ecosystem with CUDA.
  • @PhilJohn1980
    Ah, geometry stages with matrices - I remember my Comp Sci computer graphics class in the 90's where our final assignment was to, by hand, do all the maths and plot out a simple 3D model on paper. Each student had the same 3D model defined, but different viewport definitions. Fun times.
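    For anyone who never did that exercise by hand: the geometry stage boils down to multiplying each vertex by a 4x4 matrix, dividing by w, and mapping the result into the viewport. A rough, self-contained sketch of that arithmetic (the matrix, vertex, and viewport numbers are made up for illustration, not taken from the video):

        #include <cstdio>

        struct Vec4 { float x, y, z, w; };

        // Multiply a column vector by a row-major 4x4 matrix
        Vec4 transform(const float m[16], Vec4 v) {
            return {
                m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
                m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
                m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
                m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w
            };
        }

        int main() {
            // A simple perspective projection (near = 1, far = 100, 90-degree FOV, square aspect)
            const float n = 1.0f, f = 100.0f;
            float proj[16] = {
                1, 0, 0, 0,
                0, 1, 0, 0,
                0, 0, -(f + n) / (f - n), -2 * f * n / (f - n),
                0, 0, -1, 0
            };

            Vec4 v = {1.0f, 2.0f, -5.0f, 1.0f};   // a vertex in camera space
            Vec4 clip = transform(proj, v);        // clip space
            float ndcX = clip.x / clip.w;          // perspective divide -> normalized device coords
            float ndcY = clip.y / clip.w;

            // Viewport mapping to a 640x480 screen
            float sx = (ndcX * 0.5f + 0.5f) * 640.0f;
            float sy = (1.0f - (ndcY * 0.5f + 0.5f)) * 480.0f;
            printf("screen: (%.1f, %.1f)\n", sx, sy);   // (448.0, 144.0)
            return 0;
        }
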
  • Nvidia didn't invent the graphics pipeline. It was invented by Silicon Graphics (SGI), which developed the OpenGL API as far back as 1992. SGI mainly targeted the cinema and scientific-visualization markets, manufacturing entire workstations with their own OS (IRIX) as well as other specialized servers. What Nvidia did was target the personal entertainment market, which made Nvidia competitive because of a lower overall unit cost. Later, operating systems such as Linux were able to run these GPUs in clusters, and here too SGI lost out. SGI could easily have been like Nvidia if they had been on the right track. SGI is now reduced to a conference known as SIGGRAPH, mainly a peer-reviewed research program, which still contributes to computer graphics, especially through the OpenGL and Vulkan API specifications!
  • @BaldyMacbeard
    The secret of their success for many years was working closely with developers/customers to gain an advantage over their competitors. For instance, Nvidia would give free cards to game developers and send out evangelists to help optimize the game engines, obviously resulting in a strong developer bias towards Nvidia cards. Which is how and why they were outperforming AMD for many years. In the machine learning space, they are being extremely generous in their public relations with academia, once again giving away tons of free GPUs and helping developers out. It's a fairly common tactic to try and bring students on board so that once they graduate and go on to work in tech companies, they bring a strong bias towards the software & hardware they're familiar with. In the server market, Nvidia has been collaborating closely with most manufacturers while offering their DGX systems in parallel. They also have a collaboration with IBM that solders Nvidia GPUs onto their Power8 machines, giving a ginormous boost to bandwidth between GPU and CPU and also PS5-like storage access. And don't forget about the Jetson boards. Those things are pretty amazing for edge-computing use cases like object recognition in video and such. They dominate like they do by not trying to sell a single product, but by offering tons of solutions for every single market out there.
  • @Quxxy
    I don't think you're right about what "clipping" means at 2:56. Occlusion (hiding things behind other things) is done with a Z-buffer*. As far as I recall, clipping refers to clipping triangles to the edge of the screen to avoid rasterising triangles that fall outside of the visible area, either partially or fully. As far as I'm aware, no one ever did occlusion geometrically on a per-triangle basis. The closest would be in some engines that will rasterise a simplified version of a scene to generate an occlusion buffer**, but that's not handled by the geometry engine, it's just regular rasterisation. *Except on tile-based rasterisers like the PowerVR lineage used in the Dreamcast and some smartphones, notably the iPhone. (Not a graphics programmer or expert, just an interested gamer.) *Edit*: Also, for 7:46 about the fixed function pipeline being totally gone: from what I remember this is not entirely true. GPUs still contain dedicated units for some of the fixed functionality; from memory, that includes texture lookups and blending. Reminds me of an old story from someone who worked on the Larrabee project who mentioned that one of the reasons it failed to produce a usable GPU was that they tried to do all the texturing work in software, and it just couldn't compete with dedicated hardware.
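    As a rough illustration of the Z-buffer test this comment refers to (a toy software sketch under my own assumptions, not how any particular GPU implements it in hardware): a fragment only lands in the framebuffer if its depth is smaller than what is already stored for that pixel, which is why occlusion is resolved during rasterisation rather than geometrically per triangle.

        #include <vector>
        #include <cstdio>
        #include <limits>

        // Minimal framebuffer with a per-pixel depth (Z) buffer.
        struct Framebuffer {
            int w, h;
            std::vector<float> depth;     // z-buffer, initialized to "infinitely far"
            std::vector<unsigned> color;

            Framebuffer(int w, int h)
                : w(w), h(h),
                  depth(w * h, std::numeric_limits<float>::infinity()),
                  color(w * h, 0) {}

            void writeFragment(int x, int y, float z, unsigned rgba) {
                int i = y * w + x;
                if (z < depth[i]) {       // closer than what's already there?
                    depth[i] = z;
                    color[i] = rgba;      // yes: overwrite color and depth
                }                         // no: fragment is occluded, discard
            }
        };

        int main() {
            Framebuffer fb(4, 4);
            fb.writeFragment(1, 1, 0.8f, 0xff0000ff);  // far fragment drawn first
            fb.writeFragment(1, 1, 0.3f, 0x00ff00ff);  // nearer fragment wins
            fb.writeFragment(1, 1, 0.9f, 0x0000ffff);  // farther fragment discarded
            printf("color at (1,1): %08x\n", fb.color[1 * 4 + 1]);  // 00ff00ff
            return 0;
        }
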
  • This is literally the best breakdown on YouTube when it comes to Nvidia's dominance of the AI space. Love your work!
  • @hgbugalou
    I would buy a shirt that says "but first, let me talk about the Asianometry newsletter".
  • @CarthagoMike
    Oh nice, a new Asianometry video! Time to get a cup of tea, sit back, and watch.
  • A small mistake, right at the beginning - the GeForce 256 hit the market in 1999, not 1996. In the mid-90s, Nvidia was, more or less, just another contender chipping away at the market dominance of the legendary 3dfx. :)
  • @conradwiebe7919
    Long time viewer and newsletter reader, love your videos. I just wanted to mention that the truncated graph @ 15:28 is a mistake, especially when you then put it next to a non-truncated graph a little later. The difference between datacenter and gaming revenue is greatly exaggerated due to this choice of graph. I feel it actually diminished your point that datacenter is rapidly catching up to gaming.
  • @Doomlaser
    As a game developer, I've been waiting for a video like this. Good work
  • @BenLJackson
    I felt some nostalgia, good vid 👍 deciphering all this back in the day was so much fun. Also, I love your explanation of AI and what it really is.
  • I don't want to know how much effort it was to put all this information together. Thanks and thumbs up. P.S.: At Nvidia they are insane. Just try to find out which GPU you have and how it compares to others or if they are CUDA capable ..... You will end up digging through lists of hundreds or thousands of cards.