IBM and Amazon will stop selling facial recognition technology over racial bias concerns
Amid nationwide protests against police violence toward Black Americans, two tech giants have announced that they will no longer sell facial recognition technology to law enforcement, following repeated calls from privacy and civil rights groups alleging that the technology disproportionately affects darker-skinned individuals and contributes to racial profiling.
IBM’s CEO, Arvind Krishna, announced on June 8 that the company would no longer be developing facial recognition technology, stating in a letter to Congress that “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.”
Krishna went on to say that “Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”
On Wednesday, Amazon announced that it would be implementing a “one-year moratorium on police use of Rekognition,” Amazon’s facial recognition technology. However, the company will continue to supply the technology to organizations focused on rescuing victims of human trafficking and reuniting missing children with their families.
“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology,” the company stated. “We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”
Increased adoption of the software
Facial recognition technology can be used to identify people in real time, as well as in photos and videos. For many years, the US military and intelligence agencies have used the technology to identify fugitives and possible terrorist suspects, but its use by local law enforcement agencies to detect and prevent crime has significantly increased.
Facial recognition software can even identify those who don’t have criminal records.
According to a report from Georgetown Law, an estimated 117 million American adults are in the facial recognition networks used by law enforcement.
The report also estimates that at least 26, and as many as 30, states across the US allow law enforcement to run or request facial recognition searches against their databases of driver’s licenses and photo IDs.
The technology has also been adopted elsewhere: in airports to verify travelers’ identities, in banking and finance to counter fraud, and in smartphones to unlock devices, sign in to mobile apps and verify payments.
The increased adoption of facial recognition, particularly by law enforcement, has led to a rapid evolution and expansion of the technology, driven by better artificial intelligence algorithms and more training data.
After evaluating 127 software algorithms from 39 different developers, the National Institute of Standards and Technology (NIST) reported that between 2014 and 2018, algorithms became 20 times better at searching databases for matching photographs.
According to a report from Grand View Research, the global facial recognition market was valued at US$3.4 billion in 2019 and was predicted to grow at an annual rate of 14.5% from 2020 to 2027.
Bias in the technology
Even before the protests, facial recognition in law enforcement was already a divisive issue.
Fears over the technology’s privacy implications have combined with concerns about racial bias.
According to the Georgetown Law report, due to disproportionately high arrest rates, African Americans are more likely to be singled out by systems that rely on mug shot databases.
Another concern is the technology’s accuracy, which varies across gender and race.
A study led by Joy Buolamwini, a researcher at the MIT Media Lab, tested facial recognition software against curated data featuring different genders and skin tones.
The results varied: the software was best at identifying males, particularly lighter-skinned males, for whom the error rate was less than 1%.
But significant errors did arise when attempting to identify darker-skinned women. Software sold by Microsoft showed a 21% error rate while software sold by IBM returned a nearly 35% error rate.
In 2018, the American Civil Liberties Union (ACLU) highlighted facial recognition’s shortcomings in identifying people of color. The ACLU conducted a study using Amazon’s Rekognition software, which incorrectly matched photographs of 28 members of Congress with mugshots of people who had been arrested for a crime.
These inconsistencies in identifying people of color could lead to false arrests or instances of police brutality, which are already all too common for people of color in the US.
Calls to ban facial recognition for law enforcement
The ACLU has called on other tech companies to follow Amazon and IBM’s lead and stop the sale of facial recognition technology to law enforcement. The ACLU particularly singled out Microsoft after the company announced its support for the Black Lives Matter movement.
“The world Microsoft seems to want is one where police have an invisible but inescapable surveillance presence in our communities,” wrote Matt Cagle, a technology and civil liberties attorney at the ACLU. “Where an infrastructure exists to scan your face and identify you as you walk down the street, go to a protest, attend a place of worship, and participate in public life. Building a surveillance apparatus this big would have severe consequences — chilling demonstrations, fueling a for-profit surveillance industry, and creating racist watchlists that governments and businesses will use for discriminatory ends.”
Cagle also highlighted companies that backed the AB 2261 bill, which supported the use of facial recognition by law enforcement and private companies.
The bill was blocked last week after opposition from the ACLU of California and other groups. The ACLU had warned that the bill would allow companies to use face scans to deny applicants jobs or access to financial services and healthcare.
Microsoft has also faced internal calls to stop selling its technology to law enforcement, with 250 Microsoft employees demanding this week that the company cancel its contracts with the Seattle Police Department.
Timnit Gebru, a Google researcher who collaborated with MIT’s Joy Buolamwini, said in an interview with The New York Times that facial recognition is currently too dangerous to be used for law enforcement purposes.
When asked whether she thought there was a way to responsibly use facial recognition for law enforcement and security, Gebru replied simply, “It should be banned at the moment. I don’t know about the future.”
For now, cities stretching from San Francisco and Oakland, California, to Brookline and Cambridge, Massachusetts, have banned the use of facial recognition.
Originally published at https://themilsource.com on June 13, 2020.