A couple of years ago, we wrote about privacy issues surrounding emerging facial recognition technologies. In the intervening 700 or so days, the conversation has shifted dramatically. With political upheaval and a renewed commitment to racial justice emerging across the nation, the conversation around facial recognition artificial intelligence has taken on a sense of urgency.
Many people are likely familiar with basic facial recognition AI through their phones. For example, Facebook may alert you to a photo that includes you, uploaded without your knowledge by a friend, through an automated system that asks “Is this you?” Another example is Google Photos, which automatically builds albums of friends, family members, and pets, allowing you to identify them by name for easy searching.
However, the most lucrative markets for this type of tech are law enforcement and defense. In Detroit, the police have recently come under fire for two known instances of Black men being wrongfully arrested, for crimes they didn’t commit, on the basis of the department’s facial recognition AI. The Detroit Chief of Police acknowledged the software has a staggering 96% false identification rate, which has raised questions about the software’s value to the community. The Detroit Police Department has promised to draft a policy on the use of this tech, which is produced by the company DataWorksPlus. In the meantime, a Congressional inquiry has been launched to examine the two facial recognition programs produced by DataWorksPlus, which are used by law enforcement in at least five states.
Tech companies working to produce this type of software are coming under pressure to stop its sale and production, not just from Congress and justice reform advocates, but from their own employees. One example is IBM, which has removed its general-purpose facial recognition offerings from the market and is urging other companies to do the same.
Arguments against the use of facial recognition technology by government entities, including law enforcement, have previously focused on the inaccuracy of such tech. As we see from Detroit, that remains an issue. However, as this type of AI improves, concern has increasingly shifted toward the awesome power of accurate facial recognition tech and its ability to obliterate privacy. As a result, some local jurisdictions have begun to specifically outlaw the use of facial recognition tech. These municipalities are mostly cities in California and Massachusetts, including San Francisco and Boston, but now also include Portland, Maine.
One advancement is that facial recognition AI increasingly focuses on the region immediately around the eyes, making it harder for would-be law-breakers and other evil-doers to hide their identities. This also means that wearing a mask while you’re shopping might not stop corporate security from identifying you.
Further reading: