On today’s episode of “Can’t Make It Up, Don’t Have To”…
Twitter has reportedly been looking into why some of the artificial intelligence it uses — a neural network — apparently opts to display white people’s faces more frequently than black people’s faces.
Several users pointed out the issue over the weekend, posting examples of tweets that contained both a black person’s face and a white person’s face, only to have Twitter’s previews display the white faces more often.
A faculty member has been asking how to stop Zoom from removing his head when he uses a virtual background. We suggested the usual plain background, good lighting etc, but it didn’t work. I was in a meeting with him today when I realized why it was happening.
— Colin Madland (@colinmadland) September 19, 2020
any guesses? pic.twitter.com/9aIZY4rSCX
— Colin Madland (@colinmadland) September 19, 2020
As reported by The Verge, Twitter uses a technology called a neural network to create the cropped previews of photos users see as they scroll through their feeds.
Programmer Tony “Abolish (Pol)ICE” Arcieri (how cute) demonstrated the problem with an image of Mitch McConnell and Barack Obama.
Several Twitter previews of the photo only showed McConnell’s face, even when Arcieri switched the position of their headshots and the color of their ties. Tony smelled a (racist) rat, of course.
Trying a horrible experiment…
Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia
— Tony “Abolish (Pol)ICE” Arcieri 🦀 (@bascule) September 19, 2020
“It’s the red tie! Clearly the algorithm has a preference for red ties!”
Well let’s see… pic.twitter.com/l7qySd5sRW
— Tony “Abolish (Pol)ICE” Arcieri 🦀 (@bascule) September 19, 2020
Let’s try inverting the colors… (h/t @KnabeWolf) pic.twitter.com/5hW4owmej2
— Tony “Abolish (Pol)ICE” Arcieri 🦀 (@bascule) September 19, 2020
And Arcieri’s conclusion? Not only is Twitter’s AI racist, so are the algorithms used by the healthcare industry and the nation’s law enforcement agencies. Wait — we already knew the police are “racist.”
Twitter is just one example of racism manifesting in machine learning algorithms. Other examples:
Millions of Black people affected by racist healthcare algorithms: https://t.co/l9Mm69zc39
Racist predictive policing algorithms: https://t.co/NMl7YjIRmq
— Tony “Abolish (Pol)ICE” Arcieri 🦀 (@bascule) September 20, 2020
Several users ran tests with cartoon characters from “The Simpsons”: Lenny, who is white, and Carl, who is black. Twitter cropped out Carl and showed only Lenny in the preview.
I wonder if Twitter does this to fictional characters too.
Lenny Carl pic.twitter.com/fmJMWkkYEf
— Jordan Simonovski (@_jsimonovski) September 20, 2020
Another example with two men in suits showed the white man in the preview and cut out the black man. Twitter chief design officer Dantley Davis tweeted that other factors, such as background color, might have played a role, but a user who also tested the image called him out.
Here’s another example of what I’ve experimented with. It’s not a scientific test as it’s an isolated example, but it points to some variables that we need to look into. Both men now have the same suits and I covered their hands. We’re still investigating the NN. pic.twitter.com/06BhFgDkyA
— Dantley 🔥✊🏾💙 (@dantley) September 20, 2020
Twitter spokesperson Liz Kelley tweeted on Sunday that the social media giant looked into the phenomenon, but didn’t find evidence of racial or gender bias.
“We tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do. We’ll open source our work so others can review and replicate.”
thanks to everyone who raised this. we tested for bias before shipping the model and didn't find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do. we'll open source our work so others can review and replicate. https://t.co/E6sZV3xboH
— liz kelley (@lizkelley) September 20, 2020
Finally, Twitter chief technology officer Parag Agrawal tweeted that the model needed “continuous improvement,” adding he was “eager to learn” from the experiments, in response to a user who tweeted:
“OK, so I am conducting a systematic experiment to see if the cropping bias is real. I am programmatically tweeting (using tweepy) a (3 x 1) image grid consisting of a self-identified Black-Male + blank image + self-identified White-Male.”
This is a very important question. To address it, we did analysis on our model when we shipped it, but needs continuous improvement.
Love this public, open, and rigorous test — and eager to learn from this. https://t.co/E8Y71qSLXa
— Parag Agrawal (@paraga) September 20, 2020
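For readers curious what that experiment involves mechanically, here is a minimal sketch of how such a 3×1 test image could be composed. This is purely illustrative, not the experimenter’s actual code: the function name and cell size are made up, solid-color rectangles stand in for the headshots, and the actual test posted the result to Twitter via tweepy, which is omitted here.

```python
# Illustrative sketch: build a 3x1 strip (photo, blank spacer, photo)
# like the cropping-bias test described above. Uses Pillow; the real
# experiment then posted the image with tweepy (not shown).
from PIL import Image

def make_grid(left, right, cell=(300, 300)):
    """Compose a 3x1 strip: left photo, blank middle cell, right photo."""
    w, h = cell
    strip = Image.new("RGB", (w * 3, h), "white")
    strip.paste(left.resize(cell), (0, 0))
    # Middle cell stays blank, so the cropping model must choose a side.
    strip.paste(right.resize(cell), (w * 2, 0))
    return strip

# Solid-color placeholders standing in for the two headshots:
a = Image.new("RGB", (100, 100), "blue")
b = Image.new("RGB", (100, 100), "red")
grid = make_grid(a, b)  # 900x300 strip: blue | white | red
```

Twitter’s preview crop would then reveal which side of the strip the neural network considers most “salient.”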
The nerdfest surrounding the Twitter issue is no doubt continuing “as we speak.” Being the non-nerd I am, I’m hardly qualified to speculate about what’s going on in the images above, and others.
But declaring algorithms, neural networks, and the like to be racist is yet another example of where we find ourselves in 2020 America.