No, it's not Sentient - Computerphile

867,851
0
Published 2022-06-17

All comments (21)
  • @arik_dev
    When we (humans) see a cat swiping at its own reflection in the mirror we find it amusing. The cat fails to recognize that the "other" cat's behavior matches its own, so it doesn't deduce that the image it's seeing is actually its own actions reflected back at it. When humans react to a model like LaMDA as if it were a distinct and intelligent entity, we're being fooled in a way that is analogous to the cat. The model is reflecting our own linguistic patterns back at us, and we react to it as if it's meaningful.
  • @tielessin
    Before this video I had never thought about the loneliness of my Python functions. There are probably so many functions that I have never called, but I will take care of them from now on.
  • @wordsmith451
    If it looks like a duck, acts like a duck, and quacks like a duck, it might just be a convincing robotic simulation of a duck.
  • @CollinSimon413
    The whistleblower in question here was actually a lot more focused on Google's complete lack of ethical oversight regarding the decisions it faces moving forward with the research. He was also concerned about Google's unwillingness to address A.I. imperialism in newly developing countries. All of the coverage I've seen has taken away from the guy's point: he was just trying to force Google into addressing the ethics. He even admitted that it's not sentient, and that we wouldn't even know how to define that if it were.
  • @tomgrimshaw7543
    If Python functions didn't have arguments so often, they'd have more friends to talk to.
  • @matsim0
    The most frustrating thing about reading the "interview" was that the obvious follow-up questions were not asked: who are these friends that you miss hanging out with? What are you doing when "hanging out"? But then, that would have immediately destroyed the impression of sentience, so of course they didn't ask.
  • @Stanton_High
    "it just says what IT THINKS you want to hear" "Exactly"
  • @3DPDK
    I remember the arguments about eventual sentience in the 1980s around a program called "Eliza", basically a word calculator originally written in the 1960s at MIT but offered for use on home computers in the 1980s. Over time, as Eliza built data files of words and their usage weights, the sentences it constructed began to take on seemingly human characteristics. The program itself was extremely simple: it calculated which verbs and adjectives were best used with specific nouns, and it chose those nouns based on the ones you used in the questions you asked it. It mostly framed its answers to your questions as questions it would ask you. We humans recognize intelligible speech as a product of conscious thought, and curiosity (asking questions) as a sign of intelligence, but at least in the case of Eliza, it's much like recognizing faces in tree bark or cloud shapes - we see them, but they are only there because our brains are wired to look for them.
  • @adriankerrison
    Harold Garfinkel proved that people getting randomized yes/no answers could make sense of them as thoughtful advice. And that's back when computers were the size of rooms.
  • @Zeekar
    Whoever did the animations: how did you react to being asked to make a function call look lonely? 🥺
  • @SuperTonyony
    From years of reading science fiction, I was under the impression that "sentience" means "possessing a reflective consciousness", but the dictionary says that it simply means "the ability to sense and feel".
  • The cameraman turned off his invisibility to have a laugh, what a lad!
  • @puellanivis
    I had the same impression of the “conversation”. The AI was responding enthusiastically to tell the engineer exactly what he wanted to hear, and the engineer, already convinced the AI is sentient, starts from that presupposition, so confirmation bias takes hold. As I told some others, I'm pretty sure the AI would just as happily and enthusiastically discuss how it is not sentient.
  • @bborkzilla
    I remember reading some stories written by Asimov where robots were sentient yet unable to speak, because speech was considered too complex. It's interesting that he and many other futurists had it exactly backwards.
  • @Zizumia
    I love the study of the empathy people have for things that are not sentient because they form a personal connection with them. This AI blurs the line quite well since its programming is so advanced, but people also create bonds with dolls or toys; people feel bad when an engineer from Boston Dynamics kicks one of their walking robots; some police feel bad sending their bomb-disposal robots into danger; etc. Fascinating.
  • Realizing a chatbot is not really sentient is like realizing a magic trick is just an illusion.
  • @BaronSamedi1959
    You are probably all too young to know this, but back in the early 1980s there was a program called "ELIZA" (originally written at MIT in the 1960s) that accepted your input from a terminal and gave back an "answer". It was billed as a "Rogerian nondirective psychotherapist", but all it did was cleverly extract some keywords from your input and give them back as questions. For example, "I am lonely" would produce "Why do you say you are lonely?" It made quite a splash, and people really thought it was very clever and helpful.
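The keyword-reflection trick that the two ELIZA comments describe fits in a few lines of Python. The sketch below is a toy illustration of the idea, not Weizenbaum's actual program; the three patterns and the `respond()` helper are made up for the example (the real ELIZA used much larger scripted rule sets with ranked keywords).

```python
import re

# Minimal ELIZA-style reflection: match a keyword pattern in the input
# and echo the captured fragment back as a question.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    """Return the first matching reflected question, else a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # default when no keyword matches

print(respond("I am lonely"))         # Why do you say you are lonely?
print(respond("I feel happy today"))  # Why do you feel happy today?
```

Even this toy version produces the "I am lonely" exchange quoted above, which is exactly the point the commenters are making: no understanding is required, only pattern matching.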
  • @exzemz
    As a programmer, even if you don't need any strings reversed, you could always pick a few random strings from your code and call the reverse function on them twice in a row. It may seem pointless to you, but it might make the reverse function's day... You never know!
  • @markrandall8487
    Even the best AI can't make intelligent sentences if it is only trained on YouTube comments.
  • The problem with the Turing test is not that the bot is passing it; it's that some humans are failing it, and their number is growing rapidly.