Cognitive Psychology

A Tale of “Pale Male Data” or How Racist AI Means We’ll All End Up Living In Ron DeSantis’ Florida


If there are two things that Ye Olde Blogge loves, they are calling out racism and Last Week Tonight with John Oliver, so imagine the joy when John Oliver did a segment on the racism inherent in AI.

In a real-world example of “shit in; shit out” — the polite version is “garbage in, garbage out” — we find that when machines are trained on biased data sets, we get biased reactions from the machines. If this doesn’t “prove” white privilege, then I don’t know what will.

We live in a world that is rushing blindly into AI. You may not realize it, but you’re probably using AI every day:

  • UNLOCKING YOUR CELLPHONE with your face is pretty cool, right? AI is doing it.
  • ASKING SIRI out on a date, or to be your girlfriend, or any of the other cool questions and commands we can give it? AI.
  • USING GARMIN or another GPS navigation system? AI.
  • TRANSLATION programs that can translate words, phrases, paragraphs, and longer passages from one language to another? AI.
  • EMAIL and BLOG SPAM filters are built with AI.

If that weren’t enough, it is in places you might not realize:

  • ALGORITHMS on your favorite social media platforms use the things you like not to get you to more of them, but to funnel you toward the advertisers that pay the platform the most. Ain’t capitalism wonderful?
  • AUTOMATED WAREHOUSES where it ain’t just robots doing the lifting and carrying; AI is figuring out what to stock, how much of it to stock, what to advertise, and to whom. Your Internet browsing habits are feeding the AI beast and helping it zero in on you with ads that “appeal” to your tastes, which means you don’t get to see new things that you might want to buy but have never considered because they are unknown to you.
  • HUMAN RESOURCES departments are using AI to filter resumes, so stop making your resume stand out by making it unusual and make it AI friendly, i.e. easy to see the most important qualifications for the job you’re applying for. No machine is interested in your eye-catching fancy resume template.
  • LOAN APPLICATIONS are now evaluated at some point in their lifespan by AI. Gone are the days when the widow could sit before a loan officer and plead for mercy.

Before you know it, AI is going to be everywhere, doing everything behind the scenes. It is being sold to us as unbiased, but that is just another lie told to sell us this bill of goods. The insidious bias that AI exhibits is the same bias that every other human-driven system displays. For further explanation, let’s turn to the same expert that John Oliver used in his segment, Joy Buolamwini, because she (a) has an utterly fascinating story to tell about her experience with AI bias and (b) is doing something about it.

Joy Buolamwini’s Story

Buolamwini did her graduate work at MIT. While there, she built an inspirational mirror using AI. Her idea was that the mirror would use facial recognition to superimpose the face of one of her heroes on her own. The project hit a snag: the facial recognition couldn’t find her face. The program was so biased against dark skin that it didn’t acknowledge her face until she covered it with a white mask.

As the kids say nowadays, take a moment to let that sink in: The program wouldn’t recognize her face until it wasn’t her face, in fact, until it wasn’t a face at all. Apparently, the most pertinent thing in facial recognition is two circles over a line on a very pale oval background.

The problem of facial recognition programs failing to recognize Black or dark-skinned faces is well known and is at least as old as facial recognition programs themselves, meaning decades. You’d think that this would be a problem that has been fixed by now, but it hasn’t.

Buolamwini did what any self-respecting graduate student would do: she began to research what the fuck was happening to her when, as an upper-middle-class, privileged person, her humanity wasn’t recognized by a fucking machine that was designed to recognize fucking humans. What the actual fuck.

If you can’t empathize with the dehumanizing, outrageous, insulting experience a real live human being would have when a machine that should recognize real live human faces refuses to acknowledge your face as being a real live human face, then you are, at your core, a racist. You should leave some trolly, idiotic comment that I’ll never approve, and then leave now and never come back, because you’ll never like anything written here at Ye Olde Blogge. However, if this anecdote causes you to cringe and say, “Oh yeah, that probably would be really hurtful. Gee, I wonder if that is what Black Americans feel on a daily basis just by existing in white spaces,” then there is hope for you to excise your inner racist, and you should leave a thoughtful, insightful comment that I’ll approve, like, and respond to.

What Joy Buolamwini Found Out

Buolamwini started looking into how AI is developed. Here is a brief tutorial for those of us who only read headlines and get our news off of social media.

  • AI stands for artificial intelligence. AI is a machine, usually computer-based, that can mimic intelligent human behavior.
  • MACHINE LEARNING is a type of AI in which a machine, usually a computer or just software, learns from experience. Typically it is focused on improving at a limited skill like playing chess or Jeopardy or driving or recognizing faces. It learns from structured data (see the toy sketch after this list).
  • DEEP LEARNING is a type of machine learning that mimics the processes of the human brain to learn. It is based on computer analogs of human neural networks, and it can learn from unstructured data.
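
For those who like to see how the sausage is made, here’s a minimal sketch of machine learning in Python. Everything in it, the features, the labels, the nearest-centroid rule, is invented for illustration; it is not anybody’s production system. The point is that the program is never told the rule; it infers one from labeled examples, which is exactly how bias sneaks in when the examples are skewed.

```python
# A toy supervised learner: it is never told what a "cat" or "dog" is.
# It averages the labeled examples it is fed and classifies new inputs
# by whichever average is closest. All data here is made up.

def train_centroids(examples):
    """Average the feature vectors for each label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        for i, x in enumerate(features):
            sums[label][i] += x
    return {label: [s / counts[label] for s in vec] for label, vec in sums.items()}

def predict(centroids, features):
    """Classify by whichever centroid is closest (squared Euclidean distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(features, center))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy "structured data": (feature vector, label) pairs.
training_data = [([1.0, 1.2], "cat"), ([0.9, 1.0], "cat"),
                 ([3.1, 2.9], "dog"), ([3.0, 3.2], "dog")]
model = train_centroids(training_data)
print(predict(model, [1.1, 1.1]))  # -> cat
```

The model is only as good as its examples: feed it nothing but cats and it will call everything a cat.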

Most facial recognition software is developed through machine learning based on huge data sets of human faces. You’d think there’d be no biases in this. You’d think that the people who made these programs would ensure that the entire human race in all of its magnificent variation and glory would be well represented. You’d think that the god-fearing white American men who made these programs would read the news on occasion, be aghast and appalled at the white American politicians who repeatedly find themselves on the defensive and even forced to resign from office for using racial slurs at inopportune moments, and think that they should make sure their data sets include a set of pictures that accurately represents humanity. But you’d be wrong.
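
To make the “shit in; shit out” point concrete, here’s a toy simulation, emphatically not anyone’s actual facial recognition code. A pretend “detector” learns what a face looks like from a training set that is 95 percent group A and 5 percent group B, then gets tested on balanced samples from both groups. The feature vectors, group centers, and threshold are all invented for the demonstration.

```python
import random
random.seed(0)

def sample_face(center, dims=2):
    """A pretend 'face' as a feature vector; group differences are exaggerated."""
    return [random.gauss(center, 0.3) for _ in range(dims)]

# Skewed training set: 95 faces from group A, only 5 from group B.
train = [sample_face(1.0) for _ in range(95)] + [sample_face(3.0) for _ in range(5)]

# "Training": the model's notion of a face is just the mean training vector.
dims = len(train[0])
mean_face = [sum(face[i] for face in train) / len(train) for i in range(dims)]

def is_face(vec, threshold=1.0):
    """Call it a face if it is close enough to what the training data looked like."""
    return sum((a - b) ** 2 for a, b in zip(vec, mean_face)) < threshold ** 2

# Every test vector really is a face; only the group composition differs.
test_a = [sample_face(1.0) for _ in range(1000)]
test_b = [sample_face(3.0) for _ in range(1000)]
print(f"group A detected: {sum(map(is_face, test_a)) / 1000:.0%}")
print(f"group B detected: {sum(map(is_face, test_b)) / 1000:.0%}")
```

Run it and group A gets detected nearly every time while group B gets detected almost never, not because the detector “hates” group B, but because group B barely existed in its training data. That is Buolamwini’s mirror in twenty lines.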

In one of the many cases of undeniable white privilege, these yahoos blithely and indelicately fed their precious programs a slew of white faces, a couple that weren’t, and just enough women so that their wives wouldn’t be angry at them, dusted off their hands, congratulated each other on a job well done, cashed their paychecks, bought and sold their stocks, and went home to ignore their kids, hide from their nagging wives, and plan a time that they could visit their next prostitute. In other words, the all-American white male wet dream.

What’s interesting to me and the reason that it ends up in a blog post is that the process by which these machines learn their tasks is the same process by which we Americans learn our social rules. We can witness in real time what happens when the only examples that are considered are white male examples, what happens when white male is the default norm:

  • HIRING becomes a search for the analogs of whiteness and maleness. As Oliver reports, one hiring algorithm decided that the two features of a successful applicant were the name Jared and playing lacrosse. What better indicator of being white do you need? Another automatically disqualified resumes that used the word women’s or the names of two women-only universities. Hunh. There’s no sexism or racism in our national hiring practices if the training these programs underwent produced these outcomes, right? (A toy sketch of how this happens follows this list.)
  • PEDESTRIANS used in training self-driving cars were mainly white male models, resulting in the cars having “difficulty” recognizing Black people as real live pedestrians who should not be run down. I’m sure that in the ninth level of hell George Wallace is smiling, and in his black prostitutes parlor Ron DeSantis feels like his mission has been accomplished.
  • STOP TALKING ABOUT RACE, a strategy to end racism promoted by racists, is disproved through the AI model as well. The makers of all this AI, in a white panic about being found out to have indulged their inner systemic racist in the training of their programs, have now instructed their programs to stop doing racist shit. The result? A reduction in coverage of issues that touch on race, especially those important to people of color. They are simply erasing people of color from their databases.
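
And here’s a hypothetical sketch of how the resume-screening disaster happens, as promised above. The model is never told anyone’s gender; it just learns which words co-occurred with past hiring decisions, and if the past was biased, the proxies do the discriminating. The resumes and “historical decisions” below are invented, and the scoring rule is deliberately crude.

```python
from collections import Counter

# Invented historical decisions reflecting a biased past: the screener is
# never told about gender, but "women's" only appears on rejections.
history = [
    ("jared lacrosse captain software engineer", "hired"),
    ("jared software engineer python", "hired"),
    ("lacrosse team software engineer java", "hired"),
    ("women's chess club software engineer python", "rejected"),
    ("women's college software engineer java", "rejected"),
    ("software engineer volunteer tutor", "rejected"),
]

hired = Counter(w for text, label in history if label == "hired" for w in text.split())
rejected = Counter(w for text, label in history if label == "rejected" for w in text.split())

def score(resume):
    """Sum per-word evidence: positive if the word was more common on hires."""
    return sum(hired[w] - rejected[w] for w in resume.split())

# The screener has quietly learned that "jared" and "lacrosse" predict hiring
# and "women's" predicts rejection -- proxies, not qualifications.
print(score("jared lacrosse software engineer"))      # positive score
print(score("women's chess club software engineer"))  # negative score
```

Nothing in that code mentions race or gender, and yet it reproduces both biases perfectly. That is what “systemic” means.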

When we produce AI that interacts with the public, we end up creating an AI that implements our systemically racist culture, and when we try to fix it, we get Ron DeSantis’ Florida. I guess this proves that when you put shit in, you get Ron DeSantis coming out.

As the kids say, take a moment to let that sink in.

Before you try to send me money, please do one of the following:

  • SHARE this blog post with someone, anyone! Just print it and hand it to a complete stranger if nothing else.
  • LIKE or RATE this blog post. Just click the like button below or the star buttons at the top and let me know you were here.
  • COMMENT on this blog post and let me know what you think about the conclusion that the logical end of white people’s reactions to their inner racist is that we all become Ron DeSantis.
  • FOLLOW the blog or join our email list.

Image Attribution

“Gender Shades / Joy Buolamwini (US) Timnit Gebru (ETH)” by Ars Electronica is licensed under CC BY-NC-ND 2.0.

Replies

  1. Boiling it down, what do we see here? The humans who train the AI are teaching it to be them, only “better”, faster, more efficient, and able to crunch more words and images into numbers, and to appear more impartial and some version of “fair” and expert.

    Well, that’s how it is when you know you, your own self, are The Crown Of Creation.


    • AI, the better, faster, more efficient racist. It really is the undeniable — although, those in denial will find a way — proof of the racism inherent in our system. We train the AI using our social system as input, and we get correlates of white as being the crucial variable predicting success.

      Jack


        • The reason racism is so pernicious is that white people, as a group, don’t have to change or be aware unless or until they are made to, and then the backlash starts. It wasn’t until Black people started claiming benefits for Aid to Families with Dependent Children that people started worrying about it encouraging dependency and breaking the budget, just as an example. It is that angst that confronting the inner racist conjures in white people that resonates with the dog whistle. It is the cynical politician who exploits it for personal gain.

          Huzzah!
          Jack


            • Howdy Bob!

              In another example of Jungian synchronicity and deja vu, I have recently stumbled upon some psychological research into misinformation and disinformation and how to prevent their pernicious effects. I’m hoping to post about it over the next couple of weeks. One of the things that struck a chord, though, was the idea of a zero-sum game or a false dichotomy. One of the ways you arm people to resist misinformation is by priming them for false dichotomies and helping them recognize them when they are presented and be skeptical of the presenter’s motivations.

              Huzzah!
              Jack


                • There are so many of those logical fallacies that apply. One of the most interesting pieces about “inoculating” people against mis- and disinformation is priming them to think more logically and rationally. Just reminding them to do so helps them think more critically when people try to use them, especially when providing real-world examples.

                  Huzzah!
                  Jack


                  • Yes, the reminder to stop and think and avoid the knee-jerk hot take does work, but it isn’t easy to be consistent when the algorithms of social media and the presentations of traditional broadcasting and print news media all militate against it.


                    • It does make me wonder if social media and our constant access to it will make it harder to stop and think. I did read a not-very-good study suggesting that Americans’ ability to use logic, visual reasoning, and numbers has declined, but 3D spatial reasoning has increased, reversing the Flynn effect. The study had many flaws, though. I attribute the changes to social media and electronic gaming.

                      Jack
