Why Asimov's Laws of Robotics Don't Work - Computerphile

Published 2015-11-06
Audible Free Book: www.audible.com/computerphile
Three or four laws to make robots and AI safe - should be simple, right? Rob Miles on why these simple laws are so complicated.

Silicon Brain: 1,000,000 ARM Cores
Chip & PIN Fraud
AI Worst Case Scenario - Deadly Truth of AI
The Singularity & Friendly AI
AI Self Improvement

Thanks to Nottingham Hackspace for the location

www.facebook.com/computerphile
twitter.com/computer_phile

This video was filmed and edited by Sean Riley.

Computer Science at the University of Nottingham: bit.ly/nottscomputer

Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com/

All Comments (21)
  • @dalton5229
    I didn't realize that people took Asimov's Three Laws seriously, considering that nearly every work they're featured in involves them going wrong.
  • @rubenhayk5514
    Asimov: you can't control robots with three simple laws. Everyone: yes, we will use three simple laws, got it.
  • @KirilStanoev
    "You are an AI developer. You did not sign up for this". Brilliant quote!!!
  • @DVSPress
    "Optimized for story writing." I can't express how much I love that sentiment.
  • @LordOfNihil
    the laws exist to create a paradox around which to construct a narrative.
  • @ThePCguy17
    The problem with Asimov's laws is probably that they're just well known enough for people to quote them, but not well known enough for people to remember the context they appeared in and how they always failed.
  • @arik_dev
    He mostly focused on the difficulty of defining "Human", but I think it's much, much more difficult to define "Harm", the other word he mentioned. Some of the edge cases of what can be considered human could be tricky, but what constitutes harm? If I smoke too much, is that harmful? Would an AI be obligated to restrain me from smoking? Or from driving? By driving, I increase the probability that I or others will die by my action. Is that harm? What about poor workplace conditions? What about insults - does psychological harm count as harm? I think the difficulties of defining "Harm" are even more illustrative of the problem he's getting at.
  • @AliJardz
    I kinda wish this video just kept going.
  • @1ucasvb
    Yes, that was Asimov's intention all along. The whole point of the laws of robotics in the books is that they are incomplete and cause logical and ethical contradictions. All the stories revolve around this. This is worth emphasizing, as most people seem to think Asimov proposed them as serious safeguards. The comments in the beginning of the video illustrate this misconception well. Thanks for bringing this up, Rob!
  • @Sewblon
    So the problem of ensuring that technology only acts in humanity's best interests isn't between human and technology, but between human and self. We cannot properly articulate, in a way that everyone agrees with, what kind of world we actually want to live in. So no one can write a computer program that gets us there automatically.
  • @AstroTibs
    "[The laws are] optimized for story writing" spoken like a true programmer
  • @bigflamarang
    This brings to mind the Bertrand Russell quote in Nick Bostrom's book: "Everything is vague to a degree you do not realize till you have tried to make it precise."
  • @shanedk
    In fact, Asimov's whole point in writing I, Robot was to show the problem with these laws (and therefore the futility in creating one-size-fits-all rules to apply in all cases).
  • @DeathBringer769
    This was sort of Asimov's point in the first place, if you actually go back and read his original stories instead of the modern remakes that mistakenly treat the rules as "perfect." He always designed them as flawed, and the stories were commentary on how you can't have a "perfect law of robotics," as well as pondering the nature of existence, what it means to be sentient, and why that new "life" should have any less value than biological life.
  • @DJCallidus
    A story about robot necromancy sounds kind of cool though. 🤔🤖☠️
  • @Jet-Pack
    "I didn't sign up for this" - made my day
  • @MasreMe
    Does psychological harm count as harm? If so, by destroying someone's house, or just slightly altering it, you would harm them.
  • @salsamancer
    The book "I, Robot" was full of stories about how the "laws" don't work, and yet dummies keep parroting them like they're a blueprint for AI.
  • Gets the definition of "death" slightly wrong - "I've made necromancer robots by mistake"