The false narrative about bias in face recognition

If you have paid attention to academic research and the press in recent years, you probably know one thing about face recognition algorithms: they are biased against women and racial minorities. In fact, as you have probably heard, they are racist. Everyone agrees, from Motherboard and the MIT Technology Review to the ACLU and congressional Democrats.

There is just one problem with this consensus: it is wrong. And it is wrong in ways that have serious consequences, distorting laws across the country and handing leadership in an important new technology to Russian and Chinese competitors.

That does not mean face recognition has never had trouble identifying minorities and women. A decade ago, the technology was far less accurate for those groups than it is today.

Two agencies that I know well—the Transportation Security Administration and Customs and Border Protection (CBP)—depend heavily on identity-based screening of travelers, and both reported their findings as they rolled out algorithmic face recognition. They found that face recognition tools improved significantly between a 2017 pilot and 2019 operations, and their results seriously undermine the myth of race or gender bias in face recognition. CBP has no data on travelers’ race, but it does know their country of citizenship. Using that proxy, it found no evidence that race was affecting its face-matching accuracy. It did find some performance gaps based on gender and age, but those gaps shrank dramatically as operational factors such as illumination improved. By 2019, women’s error rate was 0.2 percent, better than men’s and far better than the 1.7 percent rate women had experienced in the pilot.

The evidence of face recognition bias, in short, evokes Peggy Lee’s refrain: “Is that all there is?” For all the intense press and academic focus on the risk of bias in algorithmic face recognition, it turns out to be a tool that is very good and getting better, with errors attributable to race and gender that are small and getting smaller—and that can be rendered insignificant by the simple expedient of having people double-check the machine’s results with their own eyes and a few questions.

One might hope this means that all the fuss over face recognition bias will soon fade. But the panic has already exacted a high cost. Caught up in something like a moral panic, governments are refusing to recognize the benefits that face recognition algorithms provide, and a variety of municipalities and states have passed laws banning state agencies from using the technology.

Even worse, tying the technology to charges of racism has made it toxic for big, responsible technology companies and is driving them out of the market. IBM has abandoned its face recognition research entirely. Facebook has discontinued its most popular use of face recognition. Amazon and Microsoft have stopped selling face recognition products to law enforcement.

Since those departures, Russian and Chinese firms have dominated the market. In NIST’s 2019 one-to-one comparison test, Russian and Chinese companies took six of the top positions. In December 2021, NIST again reported that Chinese and Russian businesses dominated the rankings. The highest-ranking U.S. company was Clearview AI, whose practices have been widely condemned across the West.

Thanks to network effects, the United States may have ceded the face recognition market for good to companies it does not trust. That is a steep price to pay for letting academics and journalists impose their moral standards on a developing technology.