Your Baby's a Racist Jerk, Just Like You. So Is Emerging Technology. And Science.

There was a tweet floating around yesterday (my apologies for not including it here; I looked rather earnestly for it today but couldn’t find it) that said something along the lines of “science cannot be trusted because it’s inherently racist.” (If you know the one I mean, by all means, leave it in the comments.) It stuck out to me because not long ago I had a rather heated exchange with a liberal friend who took great exception to my assertion that Wired Magazine tended to have a leftward bias. “How can you say that?!” he fired at me. “It’s science and facts. Facts don’t have a bias!” Well, apparently, according to the new narrative, they can and do, because science is “eurocentric.” At least according to some student activists in South Africa last year (whose theories are probably influencing the Twitter user I mentioned). What’s fascinating is what that means for stories like this one: “Your baby is a little bit racist, science says.” Here’s the gist:


The first study had infants listen to either happy or sad music, and then look at pictures of adult faces. Infants between 6 and 9 months looked at faces of their own race for longer after listening to happy music, and faces of other races for longer after listening to sad music. It’s not clear why the infants made these associations.

Clearly, racist babies. This despite the article acknowledging that researchers really have no idea why the babies made these associations (it’s right there in the quote). Nonetheless, those little black-hearted babies are exhibiting a “racial bias.” And not only are we unwittingly training babies through our very own genetic material to be racists, we’re also teaching computers to judge based on culture and skin color. If an artificial intelligence program learns from human language, it will incorporate the same positive or negative word associations into its “decision making,” just like racist humanity does.

In other words, programs that learn from human language do get “a very accurate representation of the world and culture,” Caliskan said, even if that culture — like stereotypes and prejudice — is problematic.
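For the technically curious, the research Caliskan is describing measured associations in word embeddings: words represented as lists of numbers learned from huge piles of human text. The sketch below is a toy illustration, not the study’s actual method or data; the words and vector values are made up for demonstration. It shows the basic idea of an association score: how much closer a word sits to “pleasant” than to “unpleasant” in vector space.

```python
import math

# Hypothetical, hand-made word vectors for illustration only.
# Real systems learn vectors like these from large text corpora,
# which is how human word associations sneak into the numbers.
vectors = {
    "flower":     [0.9, 0.1],
    "insect":     [0.1, 0.9],
    "pleasant":   [0.8, 0.2],
    "unpleasant": [0.2, 0.8],
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def association(word):
    """Positive score: the word leans 'pleasant'; negative: 'unpleasant'."""
    return cosine(vectors[word], vectors["pleasant"]) - cosine(vectors[word], vectors["unpleasant"])

print(association("flower"))  # positive: leans "pleasant"
print(association("insect"))  # negative: leans "unpleasant"
```

Nobody told the program flowers are nice and insects aren’t; the lopsided vectors, standing in here for what real models absorb from human writing, did that on their own. That’s the mechanism behind the “racist AI” claim.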

This article begrudgingly acknowledges that “[in] humans, implicit attitudes actually don’t correlate very strongly with explicit attitudes about social groups. Psychologists have argued about why this is: Are people just keeping mum about their prejudices to avoid stigma?” In other words, the only reason people may not talk like racists is that they don’t want everyone to know, not that they just aren’t racists. No, prejudice exists, even in babies, and so you’re a racist even if you never say it. And so is your baby. And so is your smartphone. But here’s the question I’m left with: if, as our mystery tweeter suggested and the students from the South African university asserted, science itself is racist, then wouldn’t these stories of racist babies and racist artificial intelligence, based, as they proudly proclaim, on scientific research and data, also be…racist? How can we trust the biased agenda of racist science when it’s telling us what is and isn’t racist? That rabbit hole gets deep quick. And the real tragedy of this narrative-driving, which attempts to marginalize, is that it mostly manages to water down real examples of bias, making them harder and harder to recognize. If everything is racist, nothing is.


