Humans can be biased against a particular race or gender. Technology, on the other hand, is supposed to be free from such prejudices and to serve everyone with the same diligence. Not any more, it seems.
According to new research, facial recognition technology is not only racially but also gender biased. While the technology recognises white men with high accuracy, it is far less reliable when it comes to dark-skinned women.
Dark side of face-recognition technology
MIT Media Lab’s Joy Buolamwini, in a paper co-authored with Microsoft researcher Timnit Gebru, built a dataset of 1,270 faces spanning a range of backgrounds and ethnicities, sourced from photographs of lawmakers in African and Nordic countries. She then used this dataset to test the accuracy of face-recognition systems developed by IBM, Microsoft and China’s Megvii, whose software was publicly available for testing.
In their study, they found that while the technology recognises a white man with an accuracy of 99.7 percent (an error rate varying between 0.3 and 0.8 percent), accuracy dropped to 94 percent when the subject was a white woman.
Accuracy fell further, to 88 percent, when the subject was a darker-skinned man. The systems performed worst of all on darker-skinned women: error rates climbed as high as 34 percent, for an accuracy of just 65.3 percent.
Buolamwini’s research confirmed earlier allegations that face-recognition technology is biased in favour of white men. “We found that all classifiers performed best for lighter individuals and males overall. The classifiers performed worst for darker females,” she wrote in her paper.
But why the bias?
According to Buolamwini, the fault lies not in the technology itself but in the data used to train it. Face-recognition systems use artificial intelligence (AI) to identify faces, and AI is only as smart as the data it is trained on. In simple terms, if the training dataset contains many more white men than dark-skinned women, the AI will be worse at identifying dark-skinned women.
“Many AI systems, e.g. face recognition tools, rely on machine learning algorithms that are trained with labeled data. It has recently been shown that algorithms trained with biased data have resulted in algorithmic discrimination,” she wrote in her paper.
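The mechanism is easy to demonstrate with a toy example. The sketch below is illustrative only: the synthetic “groups”, their feature distributions and the one-dimensional threshold classifier are assumptions for the sake of the demo, not the commercial systems the paper audited. It trains a classifier on data where one group outnumbers the other nine to one; the learned decision threshold fits the majority group, so accuracy on the under-represented group suffers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(group, n):
    # Two demographic groups whose feature distributions differ slightly:
    # group "B" is the same binary task as group "A", but shifted.
    shift = 0.0 if group == "A" else 1.0
    x0 = rng.normal(0.0 + shift, 0.5, n)   # class 0 examples
    x1 = rng.normal(2.0 + shift, 0.5, n)   # class 1 examples
    x = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

# Imbalanced training set: 900 examples per class from A, only 100 from B
xa, ya = sample("A", 900)
xb, yb = sample("B", 100)
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Training": pick the threshold that minimises overall training error --
# with a 9:1 imbalance, that threshold is tuned to the majority group
thresholds = np.linspace(x_train.min(), x_train.max(), 500)
errors = [np.mean((x_train > t) != y_train) for t in thresholds]
t_best = thresholds[int(np.argmin(errors))]

def accuracy(group):
    # Evaluate each group separately on fresh, balanced test data
    x, y = sample(group, 2000)
    return np.mean((x > t_best) == y)

acc_a, acc_b = accuracy("A"), accuracy("B")
print(f"threshold={t_best:.2f}  accuracy A={acc_a:.3f}  accuracy B={acc_b:.3f}")
```

On this synthetic data the classifier scores well above 90 percent on the majority group but markedly lower on the minority group, despite both being perfectly learnable on their own, which mirrors the kind of accuracy gap the study reports.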
Buolamwini’s paper aims to improve the composition of the datasets used to train face-recognition systems so that the error rates of the underlying neural networks can be reduced. She has already shared her data with IBM, Microsoft and Megvii, and is now working with the Institute of Electrical and Electronics Engineers to bring transparency to facial recognition software, The New York Times reported.