Facial-analysis programs appear to read some faces far better than others.
When determining gender, the programs' error rate was just 0.8% for light-skinned men versus 34% for darker-skinned women, according to researchers from the Massachusetts Institute of Technology and Stanford University who examined three different facial-analysis programs. Across the board, error rates were higher for women than for men, and for darker-skinned than for lighter-skinned individuals.
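The comparison above rests on computing error rates separately for each demographic group rather than one overall accuracy number. A minimal sketch of that kind of disaggregated audit, using made-up labels and a hypothetical helper name (not the researchers' actual code):

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Classification error rate computed separately per demographic group.

    A single overall accuracy can hide large gaps between groups, which is
    exactly what a disaggregated audit is meant to surface.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit: eight gender predictions over two made-up groups.
y_true = ["M", "M", "M", "M", "F", "F", "F", "F"]
y_pred = ["M", "M", "M", "M", "F", "M", "M", "F"]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(error_rates_by_group(y_true, y_pred, groups))
# Group A: 0 of 4 wrong -> 0.0; group B: 2 of 4 wrong -> 0.5
```

Note that overall accuracy here is 75%, which on its own says nothing about the 0% versus 50% split between the two groups.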
Researchers, led by Joy Buolamwini of the MIT Media Lab’s Civic Media group, assembled images of 1,200 people, rated on a scale of skin tone from light to dark by dermatologists, and then ran those images through three commercial facial-analysis programs from major technology companies. One of these programs touted a 97% accuracy rate, but the researchers found that the image set used to assess it was 77% male and more than 83% white.
Facial recognition is used in everything from law enforcement to smartphone security. Facebook and other social media sites use facial recognition to tag friends, and major retailers, including Amazon, are even using this technology to improve their marketing.
Problems with facial-recognition apps and race have been surfacing lately. Google’s new Art & Culture app analyzes facial features and matches them to historical artwork found in museums around the world, but many users were not pleased with the results. The app tended to match faces to Eurocentric art featuring white faces, and Asian and African-American people who tried the app found the results reinforced stereotypes.
In 2015, Google’s algorithms tagged black people as “gorillas” on its photo app. “We’re appalled and genuinely sorry that this happened,” a Google spokeswoman said at the time. “There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future.”
Algorithms can become biased by gender or race because of the way developers train them on sets of images. When a gender, age group or race is underrepresented in those training images, the algorithm will struggle to identify members of that group in new images, according to a 2016 article in MIT Technology Review.
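The underrepresentation problem described above can be spotted before training by auditing the composition of the image set itself, much as the researchers did when they found one benchmark was 77% male and more than 83% white. A minimal sketch, with a made-up ten-image training set and a hypothetical helper name:

```python
from collections import Counter

def group_shares(labels):
    """Fraction of a training set belonging to each demographic group.

    A heavily skewed split is a warning sign that the trained model
    will perform worse on the underrepresented groups.
    """
    counts = Counter(labels)
    total = len(labels)
    return {g: counts[g] / total for g in counts}

# Made-up training set skewed 80/20 toward one skin-tone group.
training_groups = ["light"] * 8 + ["dark"] * 2
print(group_shares(training_groups))
# {'light': 0.8, 'dark': 0.2}
```

This kind of composition check is cheap to run and is a common first step in auditing a dataset for the imbalances the article describes.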