Mustafa Suleyman & Yuval Noah Harari -FULL DEBATE- What does the AI revolution mean for our future?

Published 2023-09-17
How will AI impact our immediate and near future? Can the technology be controlled, and does it have agency? Watch DeepMind co-founder Mustafa Suleyman and Yuval Noah Harari debate these questions, with The Economist Editor-in-Chief Zanny Minton-Beddoes.

Filmed on 11 September 2023 in London, in collaboration with The Economist.

To read the transcript, head to: econ.st/3EGURGR

Don't forget to subscribe to Yuval's Channel, where you can find more captivating content!
@YuvalNoahHarari

Stay connected with Yuval Noah Harari through his social media platforms and website:
Twitter: twitter.com/harari_yuval
Instagram: www.instagram.com/yuval_noah_...
Facebook: www.facebook.com/Prof.Yuval.N...
YouTube: @YuvalNoahHarari
Website: www.ynharari.com/

Yuval Noah Harari is a historian, philosopher, and the bestselling author of 'Sapiens: A Brief History of Humankind' (2014), 'Homo Deus: A Brief History of Tomorrow' (2016), '21 Lessons for the 21st Century' (2018), the graphic novel series 'Sapiens: A Graphic History' (launched in 2020, co-authored with David Vandermeulen and Daniel Casanave), and the children's series 'Unstoppable Us' (launched in 2022).

Yuval Noah Harari and his husband, Itzik Yahav, are the co-founders of Sapienship: a social impact company specializing in content and production, with projects in the fields of education and entertainment. Sapienship’s main goal is to focus the public conversation on the most important global challenges facing the world today.
Learn more about Sapienship: www.sapienship.co/

Yuval Noah Harari speaks internationally and teaches at the Hebrew University of Jerusalem. On this channel you can see his interviews, lectures, and public conversations with prominent leaders and influencers, including Mark Zuckerberg, Natalie Portman, Christine Lagarde, Chancellor Kurz of Austria, Jay Shetty, and Russell Brand.

All Comments (21)
  • @jordantaylor.
    I've spent the past two decades in the IT and fintech sectors, and I frequently use LLMs and Transformers at a Fortune 100 company. My concern with this discussion is that Yuval appears to be the only one grounded in the consequences of economic disruption and fallout from AI advancements. Those who are financially secure often lack genuine concern and can't or won't empathize, even when engaged in such dialogues. We should focus on taking action to safeguard humanity's well-being rather than waiting for a crisis to validate Yuval's warnings.
  • @FinalB055
    This is a debate. No vile language, no hate, a lot of respect, and a wealth of knowledge.
  • @onwardatlast
    One word: hubris. I’m with Yuval. Privately owned AI is frightening because entrepreneurs have proven many times over they are willing to hurt their customers for profit. The development of AI should be a public good fully regulated and administered by strict democratic processes.
  • @craighodges2447
    Great point made by Yuval at 40:25 when he uses the example of an educated financial elite who developed CDOs that created wealth for a few but put millions at risk. AI could indeed present a similar threat, because few, if any, would understand how AI financial models work.
  • @kyneticist
    AI based disasters are clearly going to be a question of when, not if.
  • @DanTeo.
    🎯 Key Takeaways for quick navigation:
    00:00 🧐 Introduction to the AI Debate
    02:04 🤖 The Future of AI in 2028
    06:01 🌍 The Profound Shift in Human History
    07:48 🌟 The Positive Potential of AI
    12:24 💼 Job Disruption and Global Impact
    17:47 🗳️ AI's Threat to Political Systems
    22:14 🤖 Concerns about technology's impact on conversation and trust
    23:35 🛡️ Short-term impact of AI on elections and politics
    24:17 💼 Challenges facing nation-states in AI regulation
    25:11 🚧 Self-organizing initiatives and precautionary principles
    26:24 📊 Balancing benefits and risks in AI development
    27:04 🔐 Mustafa Suleyman's 10-point plan for AI safety
    29:13 🌐 Creating new institutions for AI oversight
    30:50 🏁 Challenges of containing AI proliferation
    33:09 🌍 Geopolitical tensions and AI containment
    37:22 🌟 The unpredictability of AI development
  • 39:40 Yuval is absolutely right, and their laughter only serves to shield the audience from the subtle terror Yuval evokes here. This is the biggest line in the entire conversation.
  • @cecilia3695
    "we invest so much in developing AI" "Our own minds also have a huge scope for development. Also as humanity we haven't seen our full potentiality yet, and if we invest, for every dollar and minute that we invest in AI, we invest another dollar and minute developing our own consciousness and mind I think we would be okey, but I don't see this happening. I don't see this kind of investment in human beings that we are seeing in machines" - Yuval Noah Harari / I agree with Yuval, Thank you for very interesting debate.
  • THANK you, Economist, for making this publicly available! Finally some really smart conversation and commentary on AI.
  • @gocciadisapone
    I’m a simple man, I see a new video with Prof Harari, I click and watch till the end no matter what I’m doing
  • @Seanontube1
    Fantastic conversation. Save this video and watch it again in 5 years. Let's see if we think it's as insightful and informative as we do now.
  • @silberlinie
    Yuval Noah's last comment is noteworthy. The historical gospel of the industrialized world is profit maximization at all costs. Yuval Noah remarked that we as a society must invest as much in developing our own consciousness and minds as we invest in developing the new entities, the AIs. The fact that this is not happening clearly shows us the self-destructive power contained in the explosion in the number of human individuals.
  • @the_good_citizen
    I have just started watching this, and let me commend the interviewer. She's quick, she's clear, to the point, and doesn't talk or gesture unnecessarily. I'm sure this is going to be exciting to hear.
  • @abelbelete8003
    A special discussion between two intellectuals from different disciplines, conducted with amazing respect and exemplary debate. Thank you all.
  • Mustafa's presentation disappointed. I appreciate his suggestions of limits and roles for companies, governments/regulators, and researchers to try to contain risks. However, he fell short, and at the end he ignored serious risks he acknowledged will arise, in his estimate, in 30+ years. Both agreed AI cannot be contained, so eventually it will become recursively self-improving and autonomous (becoming ASI), posing existential risks to humanity. Mustafa is either unconcerned about or doesn't understand the severity of the risks. Yuval rightly sees looming catastrophe. Thank you to The Economist, Dr. Harari, and Dr. Suleyman for this important, enlightening discussion.
  • @bushiS
    Yuval: Don't eat that apple. Suleyman: But it's so red, and we are so hungry 😢 ... History doesn't repeat itself, but it rhymes.
  • @Adam-nw1vy
    Thanks a million for uploading this 🙏 I was about to get a paid subscription for The Economist just to see this.
  • @joelface
    I'm right around the 24-minute mark in the video, and Harari discusses an interesting phenomenon: our democracy seems to be in danger of collapsing because of the breakdown of the conversation between the people of the country. He mentions that we can't even all agree on who won the last election. My perception is that this phenomenon is occurring because of the incredible splintering of news. People get their news from more and more diverse sources, from YouTube to Instagram, etc. This allows for more and more isolating echo chambers, where ideas churn between like-minded folks across the globe. Everyone has their own specific menu of news, and thus a completely different idea of what's actually happening. I also think that media literacy is an increasingly important subject, because knowing what actually makes a news source reputable seems like less than an afterthought for many folks... and that doesn't help. With respect to AI, I can see ways that it could help and ways that it could hinder that process.
  • @ttrihe10
    Mustafa is like the guy from Thank You For Smoking - never bats an eye, never gets defensive, on the front foot with an answer for everything ("Smoking is great" :) ) - and his ten-point plan of what not to do is exactly what they are going to do.