In spring 2022, a Maryland man named Alonzo Sawyer was jailed for allegedly assaulting a Baltimore bus driver and stealing his phone. Sawyer was identified as a suspect by an analyst using facial recognition software on closed-circuit TV footage of the incident.
The problem? Sawyer looked nothing like the perpetrator. As his wife informed the authorities, the person in the footage looked much younger and much shorter, had no facial hair, and had no gap between his teeth. He even moved differently; her husband's right foot sticks out when he walks.
As WIRED reports in details that only recently became public, the 54-year-old Sawyer was jailed for nine days, missed work, and stood accused of a crime because of artificial intelligence tech that, in previous cases, has been shown to misidentify people with darker skin, particularly Black men. As the story points out, "Face recognition systems have a history of misidentifying people with dark skin, and more than 60 percent of Baltimore residents identify as Black."
Outrage over this and previous incidents has lawmakers in Maryland trying to pass restrictions on facial-recognition use in the state, but the window is tight; the legislature adjourns in April and doesn't return until early next year. Previously, there was a one-year moratorium on this type of tech, but it applied only to public and private users, not police. And it expired at the end of 2022.
How often do police use it? One public defender in Baltimore says her research shows Baltimore PD used facial recognition 800 times last year. A simple Instagram photo, she said, could be used to secure a no-knock warrant against a matching suspect.
It's understandable if you're a fan of ChatGPT and are all-in on AI. But when police forcibly enter your residence over something you didn't do, one thing becomes clear: The algorithms are not your friends.