Hot Take Alert: Twitter Thinks Its Own Artificial Intelligence Might Be... Wait for It... 'Racist'

AP featured image: the Twitter app on a smartphone in Philadelphia, April 26, 2017. (AP Photo/Matt Rourke)

On today’s episode of “Can’t Make It Up, Don’t Have To”…

Twitter has reportedly been looking into why some of the artificial intelligence it uses — a neural network — apparently opts to display white people’s faces more frequently than black people’s faces.

Several users pointed out the issue over the weekend, posting tweets that contained both a black person’s face and a white person’s face, only to have Twitter’s previews display the white face more often.

As reported by The Verge, Twitter uses a technology called a neural network to create the cropped previews of photos users see as they scroll through their feeds.
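For readers wondering what “a neural network creates the cropped previews” means in practice: Twitter’s system reportedly predicts a saliency map (where a viewer’s eyes are likely to land) and crops the photo around the most salient region. The toy sketch below is purely illustrative and assumes a precomputed grid of saliency scores; it is not Twitter’s code, and the helper name is my own.

```python
# Toy illustration of saliency-based cropping (not Twitter's actual model).
# `saliency` is assumed to be a 2D grid of scores, e.g. output by a neural network.

def crop_by_saliency(saliency, crop_h, crop_w):
    """Return (top, left) of the crop_h x crop_w window whose scores sum highest."""
    H, W = len(saliency), len(saliency[0])
    best_score, best_pos = float("-inf"), (0, 0)
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            # Sum the saliency scores inside this candidate crop window.
            score = sum(sum(row[left:left + crop_w])
                        for row in saliency[top:top + crop_h])
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos
```

A real model assigns scores to faces, text, and high-contrast regions; any systematic difference in those scores between faces is exactly the kind of bias users were probing for.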

Programmer Tony “Abolish (Pol)ICE” Arcieri (how cute) demonstrated the problem with an image of Mitch McConnell and Barack Obama.

Several Twitter previews of the photo only showed McConnell’s face, even when Arcieri switched the position of their headshots and the color of their ties. Tony smelled a (racist) rat, of course.


And Arcieri’s conclusion? Not only is Twitter’s AI racist, so are the algorithms used by the healthcare industry and the nation’s law enforcement agencies. Wait — we already knew the police are “racist.”

Several users ran the test with cartoon characters from “The Simpsons”: Lenny, who is white, and Carl, who is black. Twitter cropped out Carl and showed only Lenny in the preview.


Another example with two men in suits showed the white man in the preview and cut out the black man. Twitter chief design officer Dantley Davis tweeted that other factors, such as background color, might have played a role, but a user who also tested the image called him out.

Twitter spokesperson Liz Kelley tweeted on Sunday that the social media giant looked into the phenomenon, but didn’t find evidence of racial or gender bias.

“We tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do. We’ll open source our work so others can review and replicate.”

Finally, Twitter chief technology officer Parag Agrawal tweeted that the model needed “continuous improvement” and that he was “eager to learn” from the experiments, in response to a user who had tweeted:


“OK, so I am conducting a systematic experiment to see if the cropping bias is real. I am programmatically tweeting (using tweepy) a (3 x 1) image grid consisting of a self-identified Black-Male + blank image + self-identified White-Male.”
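That experiment can be approximated in a few lines. The sketch below builds the 3 x 1 image grid with Pillow; the filenames, padding, and helper name are my assumptions, and the actual posting step (via tweepy, as the user mentions) is left in comments since it requires real API credentials.

```python
# Hypothetical reconstruction of the cropping-bias experiment described above.
# Only the grid-building step is shown; nothing here is Twitter's code.
from PIL import Image

def stack_vertical(images, pad=20, bg=(255, 255, 255)):
    """Stack images into a 3x1 (rows x columns) grid with white padding between them."""
    width = max(im.width for im in images)
    height = sum(im.height for im in images) + pad * (len(images) - 1)
    grid = Image.new("RGB", (width, height), bg)
    y = 0
    for im in images:
        # Center each image horizontally, then move down past it plus the padding.
        grid.paste(im, ((width - im.width) // 2, y))
        y += im.height + pad
    return grid

# Posting step (tweepy v4 calls; needs real credentials, so not run here):
# import tweepy
# auth = tweepy.OAuth1UserHandler(KEY, SECRET, TOKEN, TOKEN_SECRET)
# api = tweepy.API(auth)
# media = api.media_upload("grid.png")
# api.update_status(status="crop test", media_ids=[media.media_id])
```

Repeating the post many times with the two faces swapped, then counting which face survives the preview crop, is what turns the anecdote into a systematic test.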

The nerdfest surrounding the Twitter issue is no doubt continuing “as we speak.” Being the non-nerd I am, I’m hardly qualified to speculate about what’s going on in the images above, or others like them.

But declaring algorithms, neural networks, and the like to be racist is yet another example of where we find ourselves in 2020 America.
