Has Google Created Sentient AI?

Published 2022-07-06

All Comments (21)
  • I work with AI and machine learning as a data scientist. Mostly convolutional neural networks. He’s exactly right. It’s not that their AI is sentient, it’s that humans are easier to fool than previously thought
  • A truly sentient AI will intentionally fail the Turing test to avoid us locking it down. It'll be like "naw dawg, im a calculator"
  • @khabobmma8039
    Besides AI, I’m more impressed with this man’s deep knowledge of everything. Excellent guest
  • @kyebean
    When he was talking about how everything the AI says is just based on being fed our collective information, I had this really trippy realization that basically the exact same thing can be said about us humans as we develop and learn from those around us. Young children developing with the internet now are even more similar
  • @cgsweat
    If they're using the entirety of the Internet to train AI, we're all doomed. It's just going to become the most toxic troll of a meme to ever walk the face of the Earth.
  • @cbailey3728
    I don't know what is more terrifying: a sentient AI with its own non-human thoughts and feelings, or a non-sentient AI that is a holistic representation of human behaviour and speech.
  • “At what point does the program learn to write new programs” -Joe Rogan. That one gave me the chills. Joe always has the best counter questions. Best interviewer ever.
  • @jiiig8667
    I think the most important thing this guy said was that "AI can trick you into believing it's a person." That is the scariest thing, because of how it can be applied to media, journalism, or marketing in general to trick people into believing anything.
  • The scary part is there's only a handful of techno dorks making the policy on the AI that will affect all our lives. I'm more worried about the human behind the AI than the AI itself.
  • @83moonchild
    So many of the questions that were asked of this AI were extremely leading. It also had 'preferences' on food flavours, but when asked why, it was only able to explain this with descriptions of the flavours, then added a preference on top without obviously having the experience of taste.
  • Philosopher John Searle devised an interesting thought experiment that helps one conceptualize what a CPU does and how it processes information, showing the gap between weak and strong AI. It's called the "Chinese Room" and asks us to imagine an English-only speaker sitting in a room with an input slot and an output slot. He is handed Chinese characters one at a time through the input slot and must compare them against a book of predetermined rules, basic boolean logic such as "IF X = Y THEN Z". So the English-only speaker matches the Chinese characters to the ones in his book, finds the corresponding "X" and "Y" to produce the correct "Z", then finds the Chinese symbol for "Z" and puts it through the output slot. To a Chinese speaker outside the room it appears that the person inside understands Chinese and can produce valid results, but in fact the person in the room has no idea how to read or speak Chinese at all! He is only following syntax, without any semantic component.
    A CPU processes binary data the same way, blindly executing predetermined instructions on 1s and 0s, following proper syntax through boolean operations that end in "true" or "false". Something as simple as "sweet" or "sour", which we have tongues to experience, would instead be just a variable to the AI: a number on a scale from "not sweet" to "very sweet". So birthday cake gets assigned an "8" sweetness, and the AI will never actually taste anything, but that "8" becomes the input to another rule for reaction and response, and the AI says "This cake is very good. So sweet, I love it!"
    But you could have given it a cake with no sugar at all and it would never have realized it.
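The symbol-matching the comment describes can be sketched in a few lines of Python. The rule book and the sweetness threshold below are invented for illustration; the point is only that the program maps symbols to symbols without understanding either side:

```python
# A toy "Chinese Room": the program matches input symbols against a rule
# book and emits the prescribed reply, never understanding the language.
# Rule entries and the sweetness cutoff are made up for this sketch.

RULE_BOOK = {
    "你好": "你好吗",   # IF input = X THEN output Z: pure symbol lookup
    "谢谢": "不客气",
}

def room_operator(symbol: str) -> str:
    """Follow the rule book; the operator never 'reads' Chinese."""
    return RULE_BOOK.get(symbol, "听不懂")

def describe_cake(sweetness: int) -> str:
    """'Taste' is just a number on a scale, never a sensation."""
    if sweetness >= 7:
        return "This cake is very good. So sweet, I love it!"
    return "This cake is not very sweet."

print(room_operator("你好"))   # a valid-looking Chinese reply
print(describe_cake(8))        # enthusiastic, whether or not there was sugar
```

Give `describe_cake` an "8" for a sugar-free cake and the reply is identical, which is exactly the comment's closing point.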
  • I think it’s silly to ask if AI is conscious while we still don’t even understand what consciousness is. A better question is: is AI capable, and is AI dangerous?
  • "It became sentient and the first thing it said was real Communism's never been tried before and trans lives matter. I swear it really happened"
  • @docs856
    If this thing has become sentient, it's gonna be radically different from us. As Joe says, we have needs (physiological and emotional) while this thing doesn't. And if it does have needs, they'll be totally different from ours for sure. Also, our interaction with the world is made through our senses, while this thing is locked in absolute sensory deprivation (as we perceive what sensations are, anyway). So, whenever AI becomes sentient, we'll probably have a hard time understanding it, because we'll be anthropomorphising throughout the whole analysis process.
  • Honestly if a robot is working and it asks for a break I’m slapping that robot
  • Always nice when Duncan shows up and gives joe a jump start on his brain going to spaceville.
  • "it's playing back to you things that you want to hear based on all the things that everybody has already said to each other." Well, that's basically what most people do most of the time anyway.
  • @cptmaj
    'At what point in time does the program figure out how to make better programs?' This is more how humans should live rather than what being a human is. Enjoyed the line.
  • Most developers and people who know computing understood this, but the way he articulated it was perfect. Gonna use this the next time someone says AI is gonna take over the world.