Despite protests and claims that the technology has bias problems, the CEO of AWS reportedly told employees the company will continue selling facial recognition to law enforcement.
According to a Verge source, AWS CEO Andrew Jassy told employees in a meeting:
“We feel really great and really strongly about the value that Amazon Rekognition is providing our customers of all sizes and all types of industries in law enforcement and out of law enforcement.”
Employees of Silicon Valley giants including Amazon, Microsoft, and Google have been lobbying their leaders not to sign contracts where technology they build could be abused – whether potentially oppressive to society, or for use in a military capacity.
Back in July, our sister publication AI News reported on findings by the ACLU (American Civil Liberties Union), whose test ran photos of all members of Congress against a database of mugshots and found Amazon’s facial recognition software disproportionately flagged members of the Congressional Black Caucus as criminals.
Jacob Snow, Technology and Civil Liberties Attorney at the ACLU Foundation of Northern California, said:
“Our test reinforces that face surveillance is not safe for government use.
Face surveillance will be used to power discriminatory surveillance and policing that targets communities of colour, immigrants, and activists.
Once unleashed, that damage can’t be undone.”
However, Amazon argued the ACLU used Rekognition’s default confidence threshold of 80 percent, whereas the company recommends 95 percent or higher for law enforcement use.
Commenting on the ACLU’s findings, Dr Matt Wood, GM of Deep Learning and AI at AWS, wrote in a blog post:
“The default confidence threshold for facial recognition APIs in Rekognition is 80%, which is good for a broad set of general use cases (such as identifying celebrities on social media or family members who look alike in photos apps), but it’s not the right setting for public safety use cases.
The 80% confidence threshold used by the ACLU is far too low to ensure the accurate identification of individuals; we would expect to see false positives at this level of confidence.”
Wood cited Amazon’s own test in which, using a dataset of over 850,000 faces commonly used in academia, the company searched public photos of all members of the US Congress ‘in a similar way’ to the ACLU.
At a 99 percent confidence threshold, the misidentification rate dropped to zero, despite the comparison being run against a far larger number of faces.
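The threshold both sides are arguing over corresponds to real parameters in the Rekognition API, such as `SimilarityThreshold` on `CompareFaces` and `FaceMatchThreshold` on `SearchFacesByImage`. As a rough sketch of why the setting matters, the snippet below filters a list of hypothetical match candidates at the two thresholds in dispute; the scores, face IDs, and `filter_matches` helper are illustrative assumptions, not Amazon’s code or data.

```python
# Illustrative sketch only: how a confidence threshold prunes candidate matches.
# The similarity scores below are made up; in a real system they would come from
# a Rekognition call such as search_faces_by_image(..., FaceMatchThreshold=...).

def filter_matches(candidates, threshold):
    """Keep only candidates whose similarity (in percent) meets the threshold."""
    return [c for c in candidates if c["Similarity"] >= threshold]

# Hypothetical scores returned for a single probe photo.
candidates = [
    {"FaceId": "a", "Similarity": 99.2},
    {"FaceId": "b", "Similarity": 93.5},
    {"FaceId": "c", "Similarity": 81.0},
    {"FaceId": "d", "Similarity": 78.4},
]

print(len(filter_matches(candidates, 80)))  # default-style threshold: 3 matches
print(len(filter_matches(candidates, 99)))  # strict threshold: 1 match
```

The point of contention is visible even in this toy data: lowering the threshold from 99 to 80 triples the number of reported matches, and every extra match is a potential false positive.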
A month prior, Amazon employees wrote a letter to CEO Jeff Bezos. One excerpt read:
“We refuse to build the platform that powers ICE [Immigration and Customs Enforcement] and we refuse to contribute to tools that violate human rights.
As ethically concerned Amazonians, we demand a choice in what we build and a say in how it is used.”
Earlier this year, some Google employees quit in protest over the company’s Project Maven defense contract with the Pentagon. Google has since announced it will not be renewing the contract and will not undertake such work in the future.
Whether some Amazonians will decide to quit in protest remains to be seen. There is an argument to be made that this technology will be developed one way or another, and that most technological developments could be used for harm.
At least if those with moral concerns stay at Amazon, they can help guide the project, push for transparency, and ensure it’s used ethically. That is perhaps the best we can hope for if the future is to avoid becoming a complete dystopian horror.