The A.I. Dilemma - March 9, 2023

Published 2023-04-05
Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world.

This presentation is from a private gathering in San Francisco on March 9th, 2023, attended by leading technologists and decision-makers with the ability to influence the future of large language model A.I.s. It was given before the launch of GPT-4.

We encourage viewers to consider calling their political representatives to advocate for holding hearings on AI risk and creating adequate guardrails.

For the podcast version, please visit: www.humanetech.com/podcast/the-ai-dilemma

------

Citations:

2022 Expert Survey on Progress in AI: aiimpacts.org/2022-expert-survey-on-progress-in-ai…

Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding: arxiv.org/abs/2211.06956

High-resolution image reconstruction with latent diffusion models from human brain activity: www.biorxiv.org/content/10.1101/2022.11.18.517004v…

Semantic reconstruction of continuous language from non-invasive brain recordings: www.biorxiv.org/content/10.1101/2022.09.29.509744v…

Sit Up Straight: Wi-Fi Signals Can Be Used to Detect Your Body Position: www.pcmag.com/news/sit-up-straight-wi-fi-signals-c…

They thought loved ones were calling for help. It was an AI scam: www.washingtonpost.com/technology/2023/03/05/ai-vo…

Theory of Mind Emerges in Artificial Intelligence: www.sciencetimes.com/articles/42488/20230220/theor…

Emergent Abilities of Large Language Models: arxiv.org/abs/2206.07682

Is GPT-3 all you need for low-data discovery in chemistry? chemrxiv.org/engage/chemrxiv/article-details/63eb5…

Large Language Models Can Self-Improve: arxiv.org/abs/2210.11610

Forecasting: AI solving competition-level mathematics with 80%+ accuracy: bounded-regret.ghost.io/ai-forecasting

ChatGPT reaching 100M users compared with other major tech companies: twitter.com/kylelf_/status/1623679176246185985

Snapchat's A.I. chatbot: www.washingtonpost.com/technology/2023/03/14/snapc…

Percent of large-scale AI results coming from academia: twitter.com/johnjnay/status/1618692328524496897?la…

How Satya Nadella describes the pace at which Microsoft is releasing AI: www.nytimes.com/2023/02/23/opinion/microsoft-bing-…

The Day After film: en.wikipedia.org/wiki/The_Day_After

China's view on chatbots: foreignpolicy.com/2023/03/03/china-censors-chatbot…

Facebook's LLM leaks online: www.vice.com/en/article/xgwqgw/facebooks-powerful-…

Intro music video: "Submarines" by Zia Cora

TikTok filter (Bella Weems Lambert): www.tiktok.com/@bellaweemslambert/video/7204472367…

------

Subscribe to our podcast: humanetech.com/YourUndividedAttention
Take our free course on ethical technology: humanetech.com/course

Comments
  • @TheLionrazor
    Hey all, I manually went through the whole video to summarize good-quality chapter heads to click on. This info is too important. If anyone wants to condense it further from here, you're welcome!
    Introduction and talk start
      0:49 Introduction: Steve Wozniak introduces Tristan Harris and Aza Raskin
      1:30 Talk begins: the rubber band effect
      3:16 Preface: What does responsible rollout look like?
      4:03 Oppenheimer Manhattan Project analogy
      4:49 Survey results on the probability of human extinction
    3 Rules of Technology
      5:36 1. New tech, a new class of responsibilities
      6:42 2. If a tech confers power, it starts a race
      6:47 3. If you don't coordinate, the race ends in tragedy
    First contact with AI: 'Curation AI' and the Engagement Monster
      7:02 First contact moment with curation AI: unintended consequences
      8:22 Second contact with creation AI
      8:50 The Engagement Monster: social media and the race to the bottom
    Second contact with AI: 'Creation AI'
      11:23 Entanglement of AI with society
      12:48 Not here to talk about the AGI apocalypse
      14:13 Understanding the exponential improvement of AI and machine learning
      15:13 Impact of language models on AI
    Gollem-class AIs
      17:09 GLLMM: Generative Large Language Multi-Modal Model (Gollem AIs)
      18:12 Multiple examples: models demonstrating complex understanding of the world
      22:54 Security vulnerability exploits using current AI models, and identity verification concerns
      27:34 Total decoding and synthesizing of reality: 2024 will be the last human election
    Emergent capabilities of GLLMMs
      29:55 Sudden breakthroughs in multiple fields and theory of mind
      33:03 Potential shortcomings of current alignment methods against a sufficiently advanced AI
      34:50 Gollem-class AIs can make themselves stronger
    AI can feed itself
      37:53 Nukes don't make stronger nukes: AI makes stronger AI
      38:40 Exponentials are difficult to understand
      39:58 AI is beating tests as fast as they are made
    Race to deploy AI
      42:01 Potential harms of second-contact AI
      43:50 AlphaPersuade
      44:51 Race to intimacy
      46:03 At least we're slowly deploying Gollems to the public to test them safely?
      47:07 But we would never actively put this in front of our children?
      49:30 But at least there are lots of safety researchers?
      50:23 At least the smartest AI safety people think there's a way to do it safely?
      51:21 Pause, take a breath
    How do we choose the future we want?
      51:43 Challenge of talking about AI
      52:45 We can still choose the future we want
      53:51 Success moments against existential challenges
      56:18 Don't onboard humanity onto the plane without democratic dialogue
      58:40 We can selectively slow down the public deployment of GLLMM AIs
      59:10 Presume public deployments are unsafe
      59:48 But won't we just lose to China?
    How do we close the gap?
      1:02:28 What else can we do to close the gap between what is happening and what needs to happen?
      1:03:30 Even bigger AI developments are coming. And faster.
      1:03:54 Let's not make the same mistake we made with social media
      1:03:54 Recap and call to action
  • @daniellee9181
    GPT-4 was released 5 days after this presentation. AI is moving so fast that some of the things in this presentation became dated in less than one week. This is exactly one of the main concerns these speakers are trying to get us to understand.
  • @handiman7143
    What scares me the most is that a lot of people won't watch videos like these simply because of the length. I have tried to show it to a lot of people, but they don't think they have the time to watch one-hour educational videos on YouTube, even though they do it every day on Netflix. How on earth are you to compete with short, dopamine-seeking content?
  • @deanjoynwa16
    I wish we could have listened to the Q&A. Given the calibre of attendees, it would have been brilliant to gauge their reactions to this presentation!
  • @sandswan
    How does this not have viewership in the millions... 45K likes!? SHARE IT, PEOPLE!
  • @samhblackmore
    Considering the gravity of this topic, I really appreciate the calm and respectful nature of this presentation. No overt fear-mongering (although the material speaks for itself), just an effort to bring this to people's attention and help us process it, even admitting that it will be hard to process and preparing us for that. And as a side note, you don't often see a presentation with two speakers, but it worked really well. They really complemented each other and made it more engaging with the back-and-forth riffing on shared experiences.
  • @joesdailybeat
    I've enjoyed being a human with you all thus far. However, it's a wrap. Cheers and hug someone you care about today!
  • @dotsona07
    This is a great talk. I initially thought people were overly worried, but now I get it.
  • @andreawiatrek94
    We are so grateful for you. Please continue to try to get this regulated. Integrity and honesty are what we need today. Thank you for your concern for humanity. Many of us will stand behind you and support what you are trying to accomplish.
  • @kmlund42
    We need this broadcast on every major news network all over the world.
  • @JohnDrummondVA
    The rubber band was really intense when I first started exploring this stuff. Almost to the point that when I'd get out of the AI-world-headspace I was pleasantly surprised to see grass and trees and my house and my family and the normal world. People have said "What a time to be alive" ironically a zillion times, but hooooooly frak. "The Future" always seemed vaguely benign and ever distant, and now it is here and I still don't know how I feel about it.
  • @EllieGonz
    I am commenting to help boost the video. Thank you for your service Tristan and Aza.
  • @darknewt9959
    I've been following AI for 30 years and this is the most powerful and considered hour of exposition I've seen in that entire time. Huge respect and it's given me a whole raft of material to take back to my corporate board.
  • @KurtvonLaven0
    I have been avidly researching AI safety, and this is the best primer I have found on the subject for a general audience. Thank you so much for this wonderful presentation.
  • @Pistolpete0122
    It's so shocking to me that Cambridge Analytica is not ever talked about. This should be shown at every school.
  • I've been showing this in my middle school classes. Trying to get them to eat the spinach. Some are choking on it, but they seem to understand that they need to be part of the answer for the potential blessing of AI, or be part of the curse of AI. Thanks, Tristan and team, for letting us know that the bridge is out ahead, or that another bridge is ahead, one we have never seen before, and we don't know where it is taking us. With our permission or without. Buckle up.
  • I keep trying to talk about this stuff to anyone who is interested, but it's tough to know how to explain what's going on. Thank God people like this are putting this out there.
  • @KosmicAura
    While this presentation was expertly and eloquently delivered, I can't help but think about how few people will actually be able to get the ball rolling with this information. For 99% of us, this is excellent content for awareness. There will be a very small number of people who not only grasp the gravity of the issue but are also capable and willing to implement the necessary institutions to address the safety concerns.
  • @the_artisan
    Point 1 is understating the case to a considerable degree. It actually misses something really important. I would state it like this: "When you invent a new technology, you alter the old reality and eliminate the possibility of returning to it. Certain things become literally unthinkable." An ecosystem with rabbits introduced into it isn't just ecosystem + rabbits, but a radically different ecosystem.