【Extremely Hardcore】How Did NVIDIA Become the Biggest Winner of the AI Era? [CC] | 老石谈芯

36,052 views
Published 2022-05-11
A deep dive, from the chip-hardware perspective, into how the GPU has evolved in the age of artificial intelligence.
How did GPUs drive the development of AI?
How did NVIDIA seize the opportunity?
Where are GPUs headed next?
And how can ordinary people keep up and benefit from it?

0:00 How did NVIDIA become the biggest winner of the AI era?
1:23 AI and the GPU: which one made the other?
5:58 The Hopper architecture: the strongest AI GPU on Earth?
11:05 How to track the future of GPUs?

Papers mentioned in this video:
AlexNet:
proceedings.neurips.cc/paper/2012/file/c399862d3b9…

Transformer (Attention Is All You Need):
arxiv.org/abs/1706.03762

A3:
taejunham.github.io/data/a3_hpca2020.pdf

SpAtten:
spatten.mit.edu/
------------------------------------------------------------------------------------------------------------------------------------------------
About me:
Hi everyone, I'm Lao Shi, a chip engineer. Welcome to my channel "老石谈芯" (Lao Shi Talks Chips).
Subscribe 👉 reurl.cc/8ob9Ej

Hit the bell 🔔 to be the first to hear about new videos on my channel.

I'll keep bringing you chip-focused popular science, hardcore knowledge, and the occasional extremely hardcore technical analysis. I'll also keep sharing methods and tips for working and learning efficiently. For more, follow my WeChat official account and Weibo, and feel free to join my Zhishi Xingqiu (Knowledge Planet) community to chat with me further.

🎵 Music/BGM I use: go.shilicon.com/epidemicsound
📚 My book: go.shilicon.com/book
📝 Personal website: www.shilicon.com/
📚 WeChat official account: 老石谈芯
🏮 Weibo: 老石谈芯的老石
🌍 Zhishi Xingqiu: 老石谈芯 - 进阶版

#NVIDIA #AIComputing #GPU #Tech

All comments (21)
  • @jjason71995
I've worked in AI research for many years, and I have to say this episode is remarkably accurate and on point. Even the AI papers cited are spot-on. Packed with substance.
  • @lyhourtte
Thanks for this video. I'm writing my graduation thesis right now and used half precision to shrink my model's size. I only knew that float32 became float16; nobody ever told me why. I stumbled on this episode by chance and it answered my question.
  • @user-qd8br4ug2p
Thanks, Dr. Shi, for putting this together so carefully. Hoping to see a follow-up on CUDA.
  • @kotime42
Coming back to this video a year later: what a great prediction!
  • @user-bs8qd9hj2s
I've written AI kernels for GPUs and AI compilers for DSAs. My take: a GPU just offers far more parallelism than a CPU plus some vector instructions, but compared with a DSA, the kind of dedicated AI chip that computes in cube-sized units, it loses outright on both performance and power. That's why NVIDIA also ships DSA-style AI accelerator cards, as well as dedicated super-resolution (DLSS) acceleration units built into its GPUs. Bottom line: the future is heterogeneous. Let GPUs do what GPUs do best, and NPUs do what NPUs do best.
  • @hongyangsun6170
Looking forward to the CUDA explainer! I work on NLP and don't usually follow hardware much; I've learned a lot from your videos.
  • @oyjl616
Liked! Waiting for a follow-up episode on the software side and CUDA 😆
  • @stex5026
Like Randy Pausch said in his famous Last Lecture: opportunity + preparation = luck. A few salient points worth mentioning: 1. NVIDIA has been steadily growing and cultivating CUDA, a money pit of a project for the longest time. But it also shows NVIDIA's belief/vision in GPUs for HPC. 2. Alex Krizhevsky and Ilya Sutskever were programming wizards who managed to wrangle CUDA into submission so they could run their DL training job on NVIDIA GPUs, and the rest is history... 3. The coupling of CUDA with DL is now de facto. All the major DL frameworks, like TensorFlow, PyTorch, etc., rely on CUDA for GPU acceleration. What about OpenCL, you ask? Yeah, you don't hear about it, do you? :) 4. Much of NVIDIA's recent commercial success was also due to crypto mining, not AI/DL. A few years ago, a spate of start-ups tried to unseat NVIDIA in the DL training accelerator market, but you can't even find them on Google anymore. Let's see how Jim Keller's Tenstorrent does when it comes out!
  • @kaijundeng6975
Great stuff. Hope to see CUDA next episode. Keep learning and keep it up!
  • @zhizunbao333
TSP, the travelling salesperson problem, is NP-complete. Dynamic programming can bring the algorithmic complexity down from O(n!) to O(2^n).
  • @OC10LIN
Could you talk about the GPU and AI market situation? Has AMD caught up with NVIDIA? Is Intel out of the game? Thank you!
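On the half-precision question raised in the comments above: casting float32 to float16 halves storage because each value shrinks from 4 bytes to 2, at the cost of precision (float16 keeps only a 10-bit mantissa, roughly 3 significant decimal digits). A minimal sketch using only Python's standard `struct` module, not any particular DL framework:

```python
import struct

value = 3.14159265

# float32 ("f") packs into 4 bytes; float16 ("e") packs into 2 bytes.
# This is exactly why a float16 model checkpoint is half the size.
fp32_bytes = struct.pack("<f", value)
fp16_bytes = struct.pack("<e", value)
print(len(fp32_bytes), len(fp16_bytes))  # 4 2

# The trade-off: float16's 10-bit mantissa rounds the value to about
# 3 significant decimal digits, so the round-trip error is nonzero.
fp16_value = struct.unpack("<e", fp16_bytes)[0]
print(abs(fp16_value - value) < 0.01)  # True: small, but not exact
```

In real frameworks the cast is a one-liner (e.g. converting a tensor's dtype), but the byte-level picture is the same: same number of weights, half the bytes.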
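On the TSP comment above: the exponential-but-better-than-factorial bound comes from the Held-Karp dynamic program, which memoizes the cheapest way to visit each subset of cities while ending at a given city, giving O(2^n · n^2) time (within a polynomial factor of the O(2^n) the commenter cites). A small illustrative Python sketch, not tied to anything in the video:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic program for TSP on an n x n distance matrix.

    Returns the minimum cost of a tour that starts and ends at city 0,
    in O(2^n * n^2) time instead of the O(n!) of brute-force enumeration.
    """
    n = len(dist)
    # dp[(mask, j)] = cheapest cost to leave city 0, visit exactly the
    # cities in mask (bitmask over cities 1..n-1), and end at city j.
    dp = {(1 << (j - 1), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = 0
            for city in subset:
                mask |= 1 << (city - 1)
            for j in subset:
                prev = mask & ~(1 << (j - 1))
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << (n - 1)) - 1
    # Close the tour by returning to city 0.
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# A 4-city symmetric example; the optimal tour 0-1-3-2-0 costs 80.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(held_karp(dist))  # 80
```

The key saving is that all orderings of the same subset of visited cities collapse into one table entry per (subset, last city) pair, which is what replaces the n! permutations with 2^n subsets.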