3 types of bias in AI | Machine learning
1,027,522 views
Published 2017-08-25
Dive into the world of Google. See how we’re pushing the boundaries of generative AI, developing cutting-edge technology & using our platform to help communities globally.
All Comments (21)
-
"What is a shoe?" "What is a human?" These are very different from "What is hateful/offensive?". This is where the problem arises.
-
But isn't reporting "inappropriate" stuff itself biased? What counts as appropriate and what doesn't depends on the person.
-
So why did you fire James Damore?
-
I am blown away by the excellent use of graphics in these videos. Keep it up!
-
You cannot eliminate bias. You can only compensate for it by illuminating more options. Otherwise the bias "elimination" is subject to bias. E.g. If you avoid a subject when teaching someone, it becomes a weakness in their understanding, and can fall into an overcompensation bias. Furthermore, who decides what counts as a negative bias that should be eliminated? That strikes me as the kind of thing we should be having discussion on and not deciding for other people without their consent. Give people more opportunities to understand, not fewer opportunities to learn.
-
But what if I am trying to find the hateful stuff because I want to see what other people are saying? Whether they are morally wrong or right, it should still be easy to find.
-
So who decides what's biased? Does "equal inclusion" mean the results are unbiased? What if the unbiased view of those engineers overlooked by policy makers within Google isn't actually unbiased? Put simply, who will guard the guardians?
-
0:02 Nobody tells me to open my eyes again. I am sure the rest of the video looks great, though ;)
-
I don't need Google to tell me which search results are offensive to me. Let me choose which links I want to click on.
-
When you realize that this is about Google censorship.
-
We are now one step closer to understanding the YouTube recommendation algorithm.
-
This should have been voiced by the Google assistant's voice actress.
-
Pretty sure this video has a Google bias... Also, please make sexbots
-
But who decides what is offensive or not? We are all different.
-
Introducing a different bias into machine learning by having humans attempt to remove bias from machine learning.
-
The only appropriate bias is Google-approved bias. Which is very, very biased.
-
Never thought about machine learning and human bias. I always thought it would not affect the results. But we are designed to see the world through our own eyes and experiences. Why would our code be any different?
-
How to Put a Political Agenda into Neural Networks 101
-
When I say "I'm sad" to Google Assistant, it replies "I wish I had arms so I could give you a hug" 😂😂😂
-
It appears that Google is deciding what is not biased. How can the people at Google be sure that they are not introducing their own bias into this process?