3 types of bias in AI | Machine learning

Published 2017-08-25
Understanding bias in AI – as researchers and engineers, our goal is to make machine learning technology work for everyone.

Dive into the world of Google. See how we’re pushing the boundaries of generative AI, developing cutting-edge technology & using our platform to help communities globally. Subscribe to stay up to date with our mission: youtube.com/@Google/?sub_confirmation=1

Subscribe to our Channel: youtube.com/google
Tweet with us on X: twitter.com/google
Follow us on Instagram: www.instagram.com/google
Join us on Facebook: www.facebook.com/Google

All Comments (21)
  • @realsampson
    "What is a shoe?" "What is a human?" These are very different from "What is hateful/offensive?". This is where the problem arises.
  • @ShaXCwalk
But isn't reporting "inappropriate" stuff biased..? It depends on the person what is appropriate and what is not
  • @suman_b
    I am blown away by the excellent use of graphics in these videos. Keep it up!
  • @AnekaKnellBean
    You cannot eliminate bias. You can only compensate for it by illuminating more options. Otherwise the bias "elimination" is subject to bias. E.g. If you avoid a subject when teaching someone, it becomes a weakness in their understanding, and can fall into an overcompensation bias. Furthermore, who decides what counts as a negative bias that should be eliminated? That strikes me as the kind of thing we should be having discussion on and not deciding for other people without their consent. Give people more opportunities to understand, not fewer opportunities to learn.
  • @TheCinnaman123
    But, what if I am trying to find the hateful stuff because I am trying to see what other people are saying? Doesn't matter if they are morally wrong or right, it should still be easy to find
  • @Bdawg.
    So who decides what's biased? Does "equal inclusion" mean the results are unbiased? What if the unbiased view of those engineers overlooked by policy makers within Google isn't actually unbiased? Put simply, who will guard the guardians?
  • @MrMastercard12
    0:02 nobody tells me to open my eyes again. I am sure that the rest of the video looks great though ;)
  • @Gytax0
    I don't need Google to tell me which search results are offensive to me. Let me choose which links I want to click on.
  • @vladnovetschi
    when you realize that this is about the google censorship.
  • We are now one step closer to understanding Youtube Recommend algorithm.
  • @gabemcguire2463
    This should have been voiced by the Google assistant's voice actress.
  • @Cettywise
    Pretty sure this video has a google bias... Also, please make sexbots
  • @DrAg0n3250
    But who decides what is offensive or not? We are all different.
  • @ZoomahZoomah
    Introducing a different bias into machine learning by having humans attempt to remove bias from machine learning.
  • @jackfrost2978
The only appropriate bias is Google-approved bias. Which is very, very biased.
  • @sardaamit
Never thought about machine learning and human bias. Always assumed it would not affect the results. But we are designed to see the world through our own eyes and experiences. Why would our code be any different?
  • @rey1242
How to put a political agenda on neural networks 101
  • @gwq
When I say "I'm sad" to Google Assistant, it replies "I wish I had arms so I could give you a hug" 😂😂😂
  • @reverendcaptain
    It appears that google is deciding what is not biased. How are people at google able to be sure that they are not introducing their own bias into this process?
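Several commenters above draw the same distinction: "What is a shoe?" has a near-objective answer, while "What is offensive?" depends on who you ask. One common way to quantify that difference is inter-annotator agreement on the labels used to train a model. Below is a minimal sketch (not from the video; all annotator data is hypothetical) that computes mean pairwise agreement for an objective labeling task versus a subjective one:

```python
from itertools import combinations

def pairwise_agreement(labels_per_item):
    """Mean fraction of annotator pairs that give the same label, averaged over items."""
    scores = []
    for labels in labels_per_item:
        pairs = list(combinations(labels, 2))
        agree = sum(a == b for a, b in pairs)
        scores.append(agree / len(pairs))
    return sum(scores) / len(scores)

# Hypothetical annotations: 4 annotators each label 3 items.
shoe_labels = [
    ["shoe", "shoe", "shoe", "shoe"],   # objective task: near-total agreement
    ["shoe", "shoe", "shoe", "shoe"],
    ["boot", "shoe", "shoe", "shoe"],
]
offensive_labels = [
    ["yes", "no", "no", "yes"],         # subjective task: annotators split
    ["no", "no", "yes", "no"],
    ["yes", "yes", "no", "no"],
]

print(round(pairwise_agreement(shoe_labels), 2))       # high agreement
print(round(pairwise_agreement(offensive_labels), 2))  # much lower agreement
```

Low agreement on a subjective task means the "ground truth" a classifier learns from is partly the annotators' own judgments, which is exactly the human-bias pathway the commenters are debating.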