AI in 2020: Facial recognition discriminates by race and gender

One thing we learned in 2020 is not to trust AI facial recognition: it’s biased by race and gender.

Racial inequality was a defining issue of 2020.

You might recall the Black Lives Matter protests that brought millions of people out into the streets over the summer.

Large tech companies IBM, Amazon, and Microsoft all stopped selling their facial recognition technology to law enforcement this year, after nationwide protests called for changes in policing.

“Facial recognition in law enforcement is a big deal,” AI expert Steve Shwartz told me recently.

“We can recognize terrorists but the problem is that the facial recognition software is by and large biased against minorities,” he said.

Wrongful arrests, predictive software, and government oppression

Imagine you’re driving to work when you get pulled over and told you’re being arrested for a felony. That’s what happened to Michael Oliver, whom police identified from a single grainy cellphone video of a man who looked nothing like him. His case was dismissed.

Researchers at Harrisburg University said they had developed software that could predict whether someone is a criminal based solely on a picture of their face, intended to help law enforcement prevent crime. Critics said it wasn’t based on sound science, and more than 1,700 academics signed an open letter demanding the research remain unpublished. The press release about the software has since been deleted from the university’s website.

“In China they’ve basically hooked up the feeds from almost every camera in the country and they’re monitoring them centrally using AI computers and technology to bring about what’s effectively a big brother scenario from George Orwell’s 1984,” said Shwartz.

“China uses facial recognition to profile Uyghur individuals, classify them on the basis of their ethnicity, and single them out for tracking, mistreatment, and detention,” a bipartisan group of seventeen senators said in a letter to Secretary of State Mike Pompeo in March 2020.

Studies show disturbing results

An ACLU study tested Amazon’s face surveillance technology and found it incorrectly matched 28 members of Congress to mugshots of people who had been arrested for a crime.

“The thing that’s really scary is that of those 28 people, 40 percent were people of color, but Congress only has 20 percent people of color, so it’s identifying twice as many people of color incorrectly as criminals,” said Shwartz.
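For readers who want to check that “twice as many” figure, here’s the back-of-the-envelope arithmetic, sketched in a few lines of Python. The numbers are the ones reported by the ACLU and quoted by Shwartz above; the script itself is purely illustrative.

```python
# Illustrative check of the "twice as many" figure, using the
# percentages reported by the ACLU test and quoted by Shwartz.

false_matches = 28            # members of Congress falsely matched to mugshots
share_of_matches_poc = 0.40   # 40% of those false matches were people of color
share_of_congress_poc = 0.20  # ~20% of Congress are people of color

# Roughly how many of the 28 false matches were people of color
poc_false_matches = false_matches * share_of_matches_poc

# How overrepresented people of color were among the false matches
overrepresentation = share_of_matches_poc / share_of_congress_poc

print(f"People of color among false matches: ~{poc_false_matches:.0f} of {false_matches}")
print(f"Overrepresentation factor: {overrepresentation:.1f}x")  # 2.0x -> "twice as many"
```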

The errors in facial recognition don’t just affect one group.

“With domestic law enforcement images, the highest false positives are in American Indians, with elevated rates in African American and Asian populations; the relative ordering depends on sex and varies with algorithm,” said a 2019 interagency report from the National Institute of Standards and Technology.

“We found false positives to be higher in women than men, and this is consistent across algorithms and datasets,” said the report.

Hope for 2021

It looks like big businesses and state and local governments alike are seeing the risks and taking action this year.

Some cities, including Boston and San Francisco, have already banned the use of facial recognition technology, citing its potential for abuse.

Facial recognition software grew twenty times better at searching a database for a matching photograph between 2014 and 2018, yet big companies including Amazon, Microsoft, and IBM still pulled out of law enforcement contracts this year.

A new law takes effect in the state of Washington on July 1, 2021, placing safeguards on the “unconstrained use of facial recognition services by state and local government agencies,” which the law warns “poses broad social ramifications.”

“One thing we’ll start seeing is companies emerging to help people who develop AI software make the software not discriminate, make sure it’s compliant with laws, and to analyze the risks involved in rolling out AI software,” said Shwartz.

Facial recognition is an emerging technology that could benefit society in the long run, but it has flaws that absolutely need to be fixed before mass implementation. And it’s a good thing we haven’t adopted it here the way China has, under a system of government most Americans wouldn’t want to live under.

With government oversight, including strong legislation to protect individual rights, and with cooperation from the tech companies that develop facial recognition, this technology could benefit the general public. But we’re not there yet.

Edited by Charles Hand / Melody Friedenthal

MetaStellar news editor Alex Korolov is also a freelance technology writer who covers AI, cybersecurity, and enterprise virtual reality. His stories have also been published at CIO magazine, Network World, Data Center Knowledge, and Hypergrid Business. Find him on Twitter at @KorolovAlex and on LinkedIn at Alex Korolov.
