Artificial Intelligence's Scary Blind Spots


AI systems are not neutral. From wrongful arrests to unfair lending, the technology has a history of reinforcing racial injustice.

While people tend to see artificial intelligence as something cold and unfeeling, we shouldn’t mistake it for something neutral. In fact, anything created by a person is influenced by them and the society they live in. And evidence suggests the widespread adoption of this technology has racist implications. According to a study published in Nature, one way “language models convey covert racism” is through a “form of dialect prejudice,” in which “raciolinguistic stereotypes about speakers of African American English (AAE)” factor into decision-making processes. When used, these models contribute to Black people being “assigned less-prestigious jobs,” being “convicted of crimes” more often, and even being “sentenced to death” (Hofmann et al., 2024). Supporters of the AI movement might argue that companies have much to gain, as this technology could boost productivity and efficiency. But the ethical question remains: who pays the price for this warm embrace of AI?

Technology itself is not inherently racist. However, even a simple household object like a rope can become a weapon, such as a noose, in the hands of someone with racist intent. White people are overrepresented among AI developers and academic leaders. Of course, no one would label them DEI hires or accuse them of being unqualified for their positions, the way conservatives often do to Black people fighting for equal access to opportunity. That’s privilege. It also explains why this technology shouldn’t be regarded as completely neutral. Cold? Yes. Unfeeling? Certainly. Neutral? No. A study at Lehigh University showed that an AI-driven program was far more likely to recommend denying mortgages to Black and Hispanic applicants, labeling them “riskier” than white applicants, even when they had the same credit scores. Although racial discrimination in housing is no longer legal, models that assess lendability are still shaped by prejudicial factors, producing racial disparities.

Last Monday, Baltimore police handcuffed a Black high school student, Taki Allen, after football practice because an “AI-driven security system flagged the teen’s empty bag of chips as a possible firearm.” While faculty members investigated and quickly realized this was an egregious error, authorities initially treated him like a dangerous criminal. Allen told local news outlet WBAL, “They made me get on my knees, put my hands behind my back, and cuffed me.” The teenager was left to wonder if authorities would kill him, telling reporters, “they had a gun pointed at me,” and about “eight cop cars” arrived, which heightened the tension of the moment. “I was just holding a Doritos bag — it was two hands and one finger out, and they said it looked like a gun.” Such an incident raises questions about the validity and reliability of AI systems, not to mention ethical concerns. Given AI’s poor performance, I’m not convinced these systems should be widely adopted, let alone in educational settings.

Another problem with adopting AI for security surveillance is that authority figures can put distance between themselves and the outcomes it produces. When Black individuals are targeted, they can cast all the blame on the purported flaws of a cold, lifeless system rather than on the people who reviewed and acted upon the data. Had law enforcement officials reviewed the video footage with their own eyes before sending officers to the scene, all of this could have been avoided. After the program claimed the Black teenager was holding a gun when he was really holding a bag of chips, authorities expressed no immediate regret about using an AI-driven security program, even though it is what led police to hold a startled student at gunpoint. Kenwood Principal Kate Smith said, “Ensuring the safety of our students and school community is one of our highest priorities.” But her statement failed to acknowledge the increased risk some students face when these systems are adopted. While some may claim this incident has nothing to do with race, and that AI simply made an honest mistake, many cases indicate AI models perpetuate racism.

In addition to struggling to accurately identify simple objects like a bag of Doritos, AI has failed to reliably match Black people’s faces in video footage, increasing their rate of wrongful arrest. A few years ago, authorities improperly linked thefts committed in Jefferson Parish and Baton Rouge, Louisiana, to a 28-year-old Black man, Randall Reid. Despite prosecutors accusing him of committing crimes, he said, “I have never been to Louisiana a day in my life.” While the software flagged him as a suspect, it was a grave error, one that could have jeopardized his freedom. This year, a Brooklyn man, Trevis Williams, endured a similar ordeal when authorities wrongfully arrested him after facial recognition software selected him as a potential match for someone suspected in a “flashing incident.” Although Williams was taller and heavier set than the suspect, a woman picked him out of a lineup. According to a New York Times article, the only reason he appeared in the lineup was that police canvassed the area and conducted a “facial recognition search” using video footage from a surveillance camera. It’s the program that first pointed its finger at him.

In the so-called Sunshine State of Florida, an AI program identified Robert Dillon, a Black man, as a 93% match for a suspect who attempted to lure and abduct a child from a fast-food restaurant in 2023. Since the incident was captured on surveillance footage, law enforcement used facial recognition software to identify potential suspects. But the system was as wrong as two left shoes. It flagged as potentially guilty an innocent man who lived more than 300 miles away from the scene of the crime. The state attorney’s office approved his arrest, relying solely on the AI report as evidence. While the charges were later thrown out, this case highlights who pays the price for the widespread adoption of these systems: Black people. When law enforcement treats information from these automated systems as perfectly credible, it places citizens in danger. Though Dillon had never been to Jacksonville, Florida, an AI program blamed and framed him. In light of all these cases, and the many more that fly under the radar, it’s irresponsible for society to welcome this technology with such a warm embrace.

Let’s consider one more case relevant to this discussion. This month, the Department of Homeland Security posted a video of Black teenagers, some of whom wore hoodies and ski masks. Text overlaid on the video read, “ICE, we’re on the way. Word in the streets cartels put a 50K bounty on y’all.” Government officials responded to the video with “FAFO. If you threaten or lay hands on our law enforcement officers, we will hunt you down, and you will find out, real quick. We’ll see you cowards soon.” Federal employees threatening citizens is concerning. And given the country’s history of slavery and Jim Crow, it’s undeniably racist to threaten to “hunt” Black people as if they’re animals. But what makes this post so problematic is the falsehood it promotes. In reality, these teenagers did not threaten federal officers.

The video DHS posted shows Black teenagers purportedly threatening officials. But it was a doctored video. In the original, they can be seen comically threatening retaliation against the country of Iran if its government attacked the United States. Either DHS representatives lack the technological competency to distinguish an AI-edited video from an authentic one, or they deliberately shared the post as a form of racist propaganda. Since crime rates have fallen in many major cities, such rhetoric may be an effort to justify the administration’s boots-on-the-ground approach. How ironic that when a group of Black teenagers expressed patriotism, government officials responded with an AI-edited video of them, accusing them of threatening the lives of federal officers. And they had the nerve to do so by co-opting AAVE. Saying “FAFO” while targeting members of the racial group who popularized that lingo would be comical if not for the harm perpetuated by anti-Black racism in this country.

Where do we go from here? It’s clear that America, like many other countries, has embraced artificial intelligence. In healthcare, housing, banking, security, education, and other industries, leaders are progressively warming to the technology. For many, it offers a cost-effective way to improve productivity and efficiency. However, as citizens, we have a responsibility to consider the ethics of readily adopting these systems. Black people shouldn’t be the ones left to pay the price so that others benefit. They shouldn’t have their environments polluted, as one AI system is doing in Memphis, Tennessee, or continue to endure systemic racism because the system supports the status quo, or be subjected to racist propaganda that justifies disparate treatment. We can and must do better. Gideon Christian, an associate professor and university research chair in AI and law at the University of Calgary, suggested the time is ripe for a “technological civil-rights movement” dedicated to advocating for ethical development of these systems. However, given the rise of anti-DEI policies in America, our country seems to be heading in the opposite direction, one where Black people are deprived of a seat at the table. To create a society where AI no longer perpetuates racist outcomes, we have to address the deeply ingrained prejudice in our society.

This post originally appeared on Medium and is edited and republished with the author's permission. Read more of Allison Gaines' work on Medium.