Image Credits: Stegerphoto / Getty Images

Twitter and Zoom’s algorithmic bias issues

Both Zoom and Twitter found themselves under fire this weekend for their respective issues with algorithmic bias: on Zoom, a problem with the video conferencing service's virtual backgrounds, and on Twitter, a problem with the site's photo-cropping tool.

It started when Ph.D. student Colin Madland tweeted about a Black faculty member's issues with Zoom. According to Madland, whenever the faculty member used a virtual background, Zoom would remove his head.

“We have reached out directly to the user to investigate this issue,” a Zoom spokesperson told TechCrunch. “We’re committed to providing a platform that is inclusive for all.”

When Madland discussed that issue on Twitter, however, the problems with algorithmic bias compounded: Twitter's mobile app defaulted to showing only the image of Madland, who is white, in the cropped preview.

“Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing,” a Twitter spokesperson said in a statement to TechCrunch. “But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate.”

Twitter pointed to a tweet from its chief design officer, Dantley Davis, who ran some of his own experiments. Davis posited that Madland's facial hair affected the result; when the facial hair was removed from the image, the Black faculty member appeared in the cropped preview. In a later tweet, Davis said he's "as irritated about this as everyone else. However, I'm in a position to fix it and I will."

Twitter also pointed to an independent analysis from Vinay Prabhu, chief scientist at Carnegie Mellon. In his experiment, he sought to see if “the cropping bias is real.”

https://twitter.com/vinayprabhu/status/1307497736191635458

In response to the experiment, Twitter CTO Parag Agrawal said whether cropping bias is real is "a very important question." In short, sometimes Twitter crops out Black people and sometimes it doesn't. But the fact that Twitter does it at all, even once, is enough for it to be problematic.
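The logic of an audit like Prabhu's can be sketched as a simple tally: feed a cropper many image pairs where two faces compete for the crop window, and count which face survives. The sketch below is hypothetical and heavily simplified; Twitter's actual cropper used a neural saliency model, which is stood in for here by a toy score with random noise. The function names (`crop_keeps`, `run_trials`) and the noise model are assumptions for illustration, not Twitter's code.

```python
import random

def crop_keeps(saliency_a, saliency_b):
    """Toy stand-in for a saliency-based cropper: the crop window
    centers on whichever of two faces scores higher."""
    return "A" if saliency_a >= saliency_b else "B"

def run_trials(mean_a, mean_b, n=1000, seed=0):
    """Tally crop outcomes over n simulated image pairs; each face's
    saliency score is drawn with Gaussian noise around its mean."""
    rng = random.Random(seed)
    tally = {"A": 0, "B": 0}
    for _ in range(n):
        a = rng.gauss(mean_a, 0.1)
        b = rng.gauss(mean_b, 0.1)
        tally[crop_keeps(a, b)] += 1
    return tally

# If the cropper treats both faces equally, each should be kept
# roughly half the time; a consistent skew is the signal an audit
# like this is designed to surface.
tally = run_trials(0.5, 0.5)
```

The point of the tally is that a single cropped image proves little either way, but a systematic skew across many paired trials is evidence of bias that no amount of individual counterexamples can explain away.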

It also speaks to the bigger issue of the prevalence of bad algorithms. These same types of algorithms are what lead to biased arrests and imprisonment of Black people. They're also akin to the algorithm that led Google to label photos of Black people as gorillas, and the machine learning that allowed Microsoft's Tay bot to be trained into white supremacist rhetoric.

Algorithmic accountability
