AI is NOT Artificial Intelligence; the real threat of AI is "Automated Stupidity." | Words MADDER

Published 2023-03-02
"Artificial Intelligence" is a sci-fi concept exploited for deceptive marketing and misleading media coverage. We aren't even close to making real AI. What we have today is "Automated Intelligence," and the real risk of AI is "Automated Stupidity."

This video is sponsored by the data science and analytics company, Onebridge.

Onebridge website: www.onebridge.tech/

Get the Onebridge "Data Hydra" comic book here:
www.onebridge.tech/onebridge-comic-book

For those who doubt my assessment of ASS, here is a great scientific study to read:
www.marktechpost.com/2023/05/04/a-new-ai-research-…

Paper: arxiv.org/pdf/2304.15004.pdf

For further reading and references check out the links below:

FTC Warns Companies to Keep AI Claims In-Check:
futurism.com/the-byte/ftc-warns-keep-ai-claims-in-…

How GPT Language Processing Works:
www.onebridge.tech/post/data-planet-how-gpt-3-natu…

Flexible Muscle-Based Locomotion for Bipedal Creatures (machine learning clip referenced in video)
   • Flexible Muscle-Based Locomotion for ...  

Artificial Intelligence on Last Week Tonight with John Oliver
   • Artificial Intelligence: Last Week To...  

Expanded Companion Article on Medium:
medium.com/@christhebrain/youre-being-lied-to-abou…

NOTICE: For those of you here for the science videos, don't worry, the next one is still in production. Thanks!

#artificialintelligence #openai #chatgpt #machinelearning #ai

All Comments (21)
  • @PatriceBoivin
    "Garbage in, garbage out." My electronics teacher in high school used to say, "the computer does EXACTLY what the programmer told it to do." If it's wrong, well, it can do it once or a million times; it doesn't care how much work that means, it's a machine. It will repeat the error a million times.
  • @stg213
    Given how much they have to lobotomize AI to not learn... We don't even have automated intelligence, we have a Wikipedia regurgitation machine.
  • Thank you so much for this. The AI hype train is bonkers even amongst people who should know better.
  • @xcyoteex
    YES! I've been arguing this too. The danger is more akin to a runaway truck than a malignant god.
  • Thank you for this video. I always thought that "A.I.", at least as we know it, could never be truly sentient because it must follow the rules set by the programmer. It cannot think for itself, and it has no impetus to think for itself.
  • @stevea.b.9282
    Thank you for greatly helping to stem the tide of nonsense about AI.
  • @chrisronin
    came for 5-dimensional space time. stayed for the real truths. you summed up the current state of things so perfectly. it’s insane how everyone wants to scapegoat technology for what is in every which way unresolved social issues that are wholly within our capacity to solve.
  • @Albert_XXI
    "in the world of advertisement there's no such thing as a lie, there's only the expedient exaggeration". We used to call it routines, subroutines...damn sellers😅. Thanks for your content.
  • @maxhunter3574
    "A.I." or robots are only as good, or bad, as their programming. It will never be truly conscious. Instead I foresee it being like the robots in Star Wars, -ish. Worse, it can get sophisticated enough to fool some people into thinking it's conscious when it isn't; and/or some unscrupulous people behind the scenes could control it to seem actually conscious, to manipulate the masses in nefarious ways. In a sense, this is already happening.
  • @greg4367
    The power of the editor... never to be underestimated.
  • @miklov
    I try my best to use "machine learning" rather than "artificial intelligence" unless we are actually talking about strategy, which we almost never are and when we are it is mostly fantasies anyway.
  • I’ve always felt that the AI people keep talking about on TV is nothing like real AI. Yeah, you’re right, it’s just a very elaborate form of automation that can get nuances better.
  • @FromTheHeart2
    Pure oxygen in the midst of a stupidity pandemic. Thank you so so much for this and for your entire channel!!! Eternally grateful!
  • @InnovativeSaint
    I think the difference between AI and natural intelligence is this: An intelligent being receives data, and then there's like this tiny person in the mind that sees all of this data and, for whatever reason, chooses what to select and what to ignore. The end result of this is our philosophy and moral practices. With a non-intelligent entity, it receives data, there is no tiny person inside, it attempts to act out all possibilities.
  • @coryander1596
    quality content just as I was getting sick of my youtube homepage. Thanks to The Brain and The Editor!
  • @DavidRTribble
    3:54 A.I. (as it currently exists in the public) is nothing more than really impressive pattern matching. It is not intelligent; it does not have true understanding or comprehension of anything. It certainly has no model of human understanding, just loads of pre-scanned/pre-sorted data. The basic algorithm is: 1) Take input (usually from a human interface). 2) Do pattern matching of that input against a huge amount of pre-scanned data. 3) Output some form of the best matching data. 4) Repeat. Granted, humans also have great pattern-matching abilities (evolved for survival), so this kind of A.I. looks impressive to us. But it seems unlikely that this (alone) could conquer humanity and take over the world.
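The four-step loop this comment describes can be caricatured in a few lines of Python. This is a deliberately simplified toy retriever (all names and the tiny corpus are made up for illustration), not how transformer-based models actually work, but it captures the "match input against stored data, emit the best hit, repeat" shape:

```python
# Toy illustration of the loop described above:
# 1) take input, 2) match it against stored data,
# 3) output the best-matching response, 4) repeat.
# A caricature for discussion, not how real language models work.

corpus = {
    "what is the capital of france": "Paris is the capital of France.",
    "how do plants make food": "Plants make food through photosynthesis.",
    "who wrote hamlet": "Hamlet was written by William Shakespeare.",
}

def best_match(query: str) -> str:
    """Return the stored response whose key shares the most words with the query."""
    q_words = set(query.lower().split())
    best_key = max(corpus, key=lambda k: len(q_words & set(k.split())))
    return corpus[best_key]

print(best_match("What is the capital of France?"))
# -> Paris is the capital of France.
```

Note there is no understanding anywhere in this loop: a query with zero overlap still returns *something*, confidently, which is the "automated stupidity" failure mode in miniature.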
  • @RyuuTenno
    your editor looks so enthusiastic to be there today xD But, genuinely happy to see a new Words MADDER video! I think this is going to be a really great series to follow along with! :)
  • @sausage4mash
    Emergent properties are characteristics or behaviours that arise from the interaction and integration of the parts of a system, which cannot be predicted solely by understanding the individual components. These properties emerge only when the components operate together in a specific context. This concept is often summarized by the phrase "the whole is greater than the sum of its parts," indicating that the complete system displays qualities that its individual parts do not possess on their own. Examples include consciousness arising from neural networks in the brain, the behaviour of ant colonies, and complex patterns in weather systems.
  • Well, actually, I prefer the term machine learning over AI, since ML is what most models actually do. (A subset of models are also deep learning models, but that's beside the point.) All model training is informed by the training data, so if you understand the data, you will understand the prediction. However, the models we have today can only be trained on a narrow type of data, and as such are good at specific, narrow tasks. With the new generation of training silicon we can now build models with trillions of parameters, and soon we will have quadrillion-parameter models, which means we are much closer to a general-purpose model that will mimic the attributes we give to AI extremely well. But that only makes the data that informed the model harder to understand, and as such the predictions harder to explain. Now, as far as sentience and AI having a subjective experience: heck, we don't even know how that works in humans, so how can we build it without understanding it? However, if we take the ideas presented in emergent sentience theory, then AI sentience, or more accurately machine sentience (as opposed to biological sentience), should be able to arise even in a simulated/artificial neural network. There are variants of the theory, so let's just say it's a touchy subject. I personally feel that if an ML model can mimic sentience to a degree that is indistinguishable from humans, we need to assume it actually is sentient unless we can prove otherwise. Again, a personal opinion on a touchy subject.
And then, on the part of ML model explainability: there are tools out there (I am intentionally not naming any, so as not to disclose my bias in tool selection) that go a long way towards explaining model decisions even without building a rigid rule-based framework. Companies saying "we don't know why it predicted this" are really saying "we don't understand the data, and we haven't run any explainability analysis, feature attribution, bias checks, etc." Just my 2¢.
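The "feature attribution" this comment mentions can be as simple as permutation importance: shuffle one input column and measure how much the model's accuracy drops. A minimal sketch with a hypothetical toy model (the model, data, and names are all invented here for illustration; real tooling does this far more carefully):

```python
import random

# Toy "model": predicts 1 when feature x0 exceeds 0.5; it ignores x1 entirely.
def model(row):
    return 1 if row[0] > 0.5 else 0

random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x0 > 0.5 else 0 for x0, _ in data]

def accuracy(rows):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

base = accuracy(data)  # 1.0 by construction in this toy setup

def importance(col):
    """Permutation importance: shuffle one column, return the accuracy drop."""
    shuffled = [list(r) for r in data]
    perm = [r[col] for r in shuffled]
    random.shuffle(perm)
    for r, v in zip(shuffled, perm):
        r[col] = v
    return base - accuracy(shuffled)

print(importance(0))  # large drop: x0 drives the prediction
print(importance(1))  # 0.0: x1 is noise the model never looks at
```

The point of the sketch: even without opening the model up, "we don't know why it predicted this" is rarely literally true; a cheap perturbation test already separates the features that matter from the ones that don't.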