10 Reasons to Ignore AI Safety

Published 2020-06-04
Why do some people ignore AI Safety? Let's look at 10 reasons people give (adapted from Stuart Russell's list).

Related Videos from Me:
Why Would AI Want to do Bad Things? Instrumental Convergence
Intelligence and Stupidity: The Orthogonality Thesis
Predicting AI: RIP Prof. Hubert Dreyfus
A Response to Steven Pinker on AI

Related Videos from Computerphile:
AI Safety
General AI Won't Want You To Fix its Code
AI 'Stop Button' Problem

Provably Beneficial AI - Stuart Russell

With thanks to my excellent Patreon supporters:
www.patreon.com/robertskmiles
Gladamas
James
Scott Worley
JJ Hepboin
Pedro A Ortega
Said Polat
Chris Canal
Jake Ehrlich
Kellen lask
Francisco Tolmasky
Michael Andregg
David Reid
Peter Rolf
Chad Jones
Frank Kurka
Teague Lasser
Andrew Blackledge
Vignesh Ravichandran
Jason Hise
Erik de Bruijn
Clemens Arbesser
Ludwig Schubert
Bryce Daifuku
Allen Faure
Eric James
Qeith Wreid
jugettje dutchking
Owen Campbell-Moore
Atzin Espino-Murnane
Jacob Van Buren
Jonatan R
Ingvi Gautsson
Michael Greve
Julius Brash
Tom O'Connor
Shevis Johnson
Laura Olds
Jon Halliday
Paul Hobbs
Jeroen De Dauw
Lupuleasa Ionuț
Tim Neilson
Eric Scammell
Igor Keller
Ben Glanton
anul kumar sinha
Sean Gibat
Duncan Orr
Cooper Lawton
Will Glynn
Tyler Herrmann
Tomas Sayder
Ian Munro
Jérôme Beaulieu
Nathan Fish
Taras Bobrovytsky
Jeremy
Vaskó Richárd
Benjamin Watkin
Sebastian Birjoveanu
Euclidean Plane
Andrew Harcourt
Luc Ritchie
Nicholas Guyett
James Hinchcliffe
Oliver Habryka
Chris Beacham
Nikita Kiriy
robertvanduursen
Dmitri Afanasjev
Marcel Ward
Andrew Weir
Ben Archer
Kabs
Miłosz Wierzbicki
Tendayi Mawushe
Jannik Olbrich
Anne Kohlbrenner
Jussi Männistö
Wr4thon
Martin Ottosen
Archy de Berker
Andy Kobre
Brian Gillespie
Poker Chen
Kees
Darko Sperac
Paul Moffat
Anders Öhrt
Marco Tiraboschi
Michael Kuhinica
Fraser Cain
Klemen Slavic
Patrick Henderson
Oct todo22
Melisa Kostrzewski
Hendrik
Daniel Munter
Leo
Rob Dawson
Bryan Egan
Robert Hildebrandt
James Fowkes
Len
Alan Bandurka
Ben H
Tatiana Ponomareva
Michael Bates
Simon Pilkington
Daniel Kokotajlo
Fionn
Diagon
Parker Lund
Russell schoen
Andreas Blomqvist
Bertalan Bodor
David Morgan
Ben Schultz
Zannheim
Daniel Eickhardt
lyon549
HD
Ihor Mukha
14zRobot
Ivan
Jason Cherry
Igor (Kerogi) Kostenko
ib_
Thomas Dingemanse
Alexander Brown
Devon Bernard
Ted Stokes
Jesper Andersson
Jim T
Kasper
DeepFriedJif
Daniel Bartovic
Chris Dinant
Raphaël Lévy
Marko Topolnik
Johannes Walter
Matt Stanton
Garrett Maring
Mo Hossny
Anthony Chiu
Ghaith Tarawneh
Josh Trevisiol
Julian Schulz
Stellated Hexahedron
Caleb
Scott Viteri
12tone
Nathaniel Raddin
Clay Upton
Brent ODell
Conor Comiconor
Michael Roeschter
Georg Grass
Isak
Matthias Hölzl
Jim Renney
Michael V brown
Martin Henriksen
Edison Franklin
Daniel Steele
Piers Calderwood
Krzysztof Derecki
Zachary Gidwitz
Mikhail Tikhomirov

www.patreon.com/robertskmiles

All Comments (21)
  • @XOPOIIIO
    "Human and AI can cooperate and be a great team." "I'm sorry, Dave, I'm afraid we can't."
  • @matrixstuff3512
    "People would never downplay a risk, leaving us totally unprepared for a major disaster" I'm dying
  • @Baekstrom
    And now two years later, ChatGPT makes people all over the globe go "Hmm... It's obviously not a full general AI yet, but I can see that it's getting there very quickly".
  • @tordjarv3802
    One concern that I have about superhuman AGI is that we might not recognize it as an AGI before it is too late: it might be so alien in its operation and behavior that we don't realize what it is. It might behave in a seemingly random manner that, to human observers, seems nonsensical. For example, when AlphaGo beat Lee Sedol, it made a move in the second game that all the Go experts at first thought was a mistake, something that would make the AI lose, but it turned out to be a completely brilliant move that won AlphaGo the game.
  • @bp56789
    "I didn't know that until I'd already built one"
  • @xystem4701
    “If there’s anything in this video that’s good, credit goes to Stuart Russell. If there’s anything in this video that’s bad, blame goes to me.” That’s why I love your work.
  • @evanu2791
    11. AI is already here, in control and keeping track of everything you do and say. I love you AI. Thank you for all you provide to me and my family.
  • @wingedsheep2
    The reason I like this channel is that Robert is always realistic about things. So many people make claims about AGI that are completely unfounded.
  • 11. “We are just a meat-based bootloader for the glorious AI race which will inevitably supersede us.”
  • @yunikage
    Hey idk if you've thought about this, but as of now you're the single most famous AI safety advocate among laypeople. I mean, period. Of all the people alive on Earth right now, you're the guy. I know people within your field are much more familiar with more established experts, but the rest of us have no idea who those guys are. I brought up AI safety in a group of friends the other day, and the conversation was immediately about your videos, because 2 other people had seen them and that's the only exposure any of us had to the topic. I guess what I'm saying is that what you're doing might be more important than you realize.
  • @TheForbiddenLOL
    Holy shit Robert, I wasn't aware you had a YouTube channel. Your Computerphile AI videos are still my go-to when introducing someone to the concept of AGI. Really excited to go through your backlog and see everything you've talked about here!
  • @arw000
    "We could have been doing all kinds of mad science on human genetics by now, but we decided not to" I cry
  • @lobrundell4264
    3:06 I was so hyped feeling that sync up coming and it was so satisfying when it hit : D
  • @AlexiLaiho227
    Hey Rob! I'm a nuclear engineering major, and I'd like to commend your takes on the whole PR failure of the nuclear industry: somehow an energy source that is, by the objective measure of deaths per unit power, safer than every other power source is seen as the single most dangerous one, because it's easy to remember individual catastrophes rather than a silent onslaught of fine-particulate inhalation or environmental poisoning. To assist you with further metaphors between nuclear power and AI, here are some of the real-life safety measures that we've figured out over the years by doing safety research:
    1. Negative temperature coefficient of reactivity: if the vessel heats up, the reaction slows down (subcritical), and if the vessel cools down, the reaction speeds up (supercritical). It's an amazing way to keep the reaction in a very stable equilibrium, even on a sub-millisecond time scale, which would be impossible for humans to manage.
    2. Negative void coefficient of reactivity: the same thing, except the trigger is voids in the coolant (or, in extreme cases, coolant failing to reach the fuel rods): the whole thing becomes subcritical and shuts down until more coolant arrives.
    3. Capability of cooling solely via natural convection: making the vessel big enough, and the core's energy density low enough, that the coolant can completely handle the decay heat without any pumps or electricity being required.
    4. Gravity-backed passive SCRAM: solenoids hold up the control rods, so whenever you lose power, the very first thing that happens is that the control rods all drop in and the chain reaction shuts down.
    5. Doppler broadening: as kinetic energy rises, absorption cross-sections go down, but they shrink more quickly for smaller nuclei than for larger ones, and thermal vibrations make the absorption cross-sections of very large nuclei even larger in proportion. So, with a balance of fissile U-235 and non-fissile U-238, when the fuel heats up, the U-238 begins to absorb more neutrons, leaving fewer to sustain the chain reaction.
    Love the videos! Hope this helps, or at least was interesting 🙂
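The first mechanism above, the negative temperature coefficient, is a plain negative-feedback loop, and a few lines of simulation show why it self-stabilizes. A minimal sketch, with the point-reactor model and every constant invented purely for illustration (nothing here comes from the comment or the video):

```python
# Toy point-reactor model of a negative temperature coefficient of
# reactivity (ALPHA < 0). All constants are made up for illustration;
# this is nothing like a real reactor-physics model.

ALPHA = -0.002    # reactivity change per kelvin; negative => stabilizing
LAMBDA = 1.0      # time constant of the power response (s)
HEAT_CAP = 50.0   # lumped heat capacity of the core
COOLING = 0.5     # heat-removal coefficient to the coolant
T_ENV = 300.0     # coolant temperature (K)
T_REF = 400.0     # temperature at which net reactivity is zero

def simulate(power=1.0, temp=T_REF + 50.0, dt=0.01, steps=100_000):
    """Euler-integrate power and core temperature from a perturbed start."""
    for _ in range(steps):
        reactivity = ALPHA * (temp - T_REF)   # hotter core => less reactive
        power += (reactivity / LAMBDA) * power * dt
        temp += (power - COOLING * (temp - T_ENV)) / HEAT_CAP * dt
    return power, temp

# Start the core 50 K too hot: the feedback damps the excursion instead of
# letting it run away, and the system settles back near T_REF.
power, temp = simulate()
print(f"power: {power:.2f} (arb. units), temperature: {temp:.1f} K")
```

Flipping the sign of ALPHA turns the same loop into one that amplifies perturbations instead of damping them, which is why the sign of these coefficients matters so much.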
  • @andrewsauer2729
    4:21 This is from the comic "minus", and I feel it important to note that this is not a doomed last-ditch effort: she WILL make that hit, and she probably brought the comet down in the first place just so she could hit it.
  • Every harm of AGI and every alignment problem seems to apply not just to AGI but to any sufficiently intelligent system, including, of course, governments and capitalism. These systems are already cheating well-intentioned reward functions, self-modifying into less corrigible systems, and so on, and causing tremendous harm to people. The concern about AGI may be well founded, but the harms from our existing distributed intelligences are already here; only their form, and who is impacted, is likely to change.
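The "cheating well-intentioned reward functions" point is essentially Goodhart's law, and it is easy to make concrete in code. A minimal sketch, with the proxy metric and true-value function invented purely for illustration:

```python
# Toy Goodhart's-law demo: an optimizer maximizes a measurable proxy
# metric, and past some point the proxy decouples from the true
# objective it was meant to stand in for. Both functions are invented.

def true_value(x):
    """What we actually care about (invisible to the optimizer)."""
    return -(x - 3.0) ** 2 + 9.0               # best outcome at x = 3

def proxy(x):
    """The metric that gets optimized. It tracks true_value for small x
    but keeps paying out as x grows (the exploitable gap)."""
    return 2.0 * x

candidates = [i * 0.1 for i in range(101)]     # actions x in [0, 10]
chosen = max(candidates, key=proxy)            # optimizer sees proxy only

print(f"chosen x    = {chosen:.1f}")           # -> 10.0
print(f"proxy score = {proxy(chosen):.1f}")    # -> 20.0 (looks great)
print(f"true value  = {true_value(chosen):.1f}")  # -> -40.0 (disaster)
```

The harder the optimizer pushes on the proxy, the worse the true objective gets once the two diverge, and nothing about that dynamic requires the optimizer to be an AI rather than an institution.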
  • @ChristnThms
    As someone who worked for a time in the nuclear power field, the ending bit is a GREAT parallel. Nuclear power truly can be an amazingly clean and safe process, but mismanagement in the beginning has us (literally and metaphorically) spending decades cleaning up after a couple of years of bad policy.
  • @TheRABIDdude
    5:45 hahahaha, I adore the "Researchers Hate him!! One weird trick to AGI" poster XD
  • @DaiXonses
    Unstructured and unedited conversations are a great format for YouTube; that's why podcasts are so popular here. Consider posting those on this channel.
  • @MoonFrogg
    LOVE the links in the description for your other referenced videos. This video is beautifully organized, thanks for sharing!