How Racism and Hate Spread on Social Media

A look at the algorithms, networks, and tactics that amplify hateful content.

So much of what I read about artificial intelligence (AI) focuses on the laziness associated with its use. There is a preponderance of articles decrying content writers who use AI to crank out articles, replacing the image of the writer hunched over a keyboard, a cup of coffee at hand and a cat on their lap, writing independently and, I guess, authentically. Then there are the epic lists of one thousand ways to spot AI in an article: experts say AI uses transitions like “moreover” and “furthermore,” and adjectives like “transformative.”

But AI is also something else. It’s more than just software’s ability to write a paragraph eloquently (not always accurately, mind you). AI can also be used for insidiously racist purposes. This is not discussed as much. English majors and gatekeeping editors clutching pearls are less affected by the systemic effects of racial bias, obscured behind “they get too much already,” “now, with affirmative action, we must stop the proliferation of AI in academic and gainful employment,” and “they have Juneteenth?”

Moving on and forward.

Shockingly, a 2025 study of more than 800 young Black and minority ethnic people in the UK revealed that 95% had been exposed to violent or abusive racist content online. This isn’t a glitch in the system. Although that study focused on the UK, reports from the US show comparably high rates of exposure among young Black and racially marginalized people.

This is the system working exactly as designed, prioritizing engagement over human dignity, clicks over community safety. Social media’s role in spreading racist content has evolved from an unfortunate side effect into a structural feature embedded in the architecture of platforms that billions use daily. Understanding these mechanisms is no longer a choice but a necessity for anyone who wishes to engage in digital environments without aiding the dissemination of racial animosity.

It is important to understand that social engineering is a legitimate field of practice. Security professionals, for example, use social engineering techniques in penetration testing to identify vulnerabilities in an organization’s human defenses, much as a locksmith hired for a test might “socially engineer” their way into a building by posing as a technician to see whether employees follow security protocols. This was a constant feature of my security training throughout my career in Washington D.C.

More specifically, social media management is the practice of creating, publishing, and engaging with content on social media platforms to build brand awareness, drive website traffic, and generate leads. This includes developing a social media strategy, identifying target audiences, selecting the right platforms, and measuring campaign effectiveness.

How Engagement Metrics Prioritize Inflammatory Material

Every social media platform runs on the same fundamental currency: attention. The longer you stay, the more ads you see, and the more revenue flows. Hence the popular term “doomscrolling”: compulsively consuming negative news, often on social media, while waiting for a subway train, a nail appointment, an oil change, or a child’s dental appointment to finish, and coming away anxious and hopeless.

This business model creates a perverse incentive structure where content that triggers strong emotional reactions receives algorithmic amplification, regardless of whether those reactions are positive or destructive. Add more phrase tinder to the fire: “rage bait,” content designed to poke the scrolling reader in the proverbial eye and provoke a strong, visceral emotional response, typically anger.

What muddies the waters of discussion is that many who take part in this sort of social engagement do so for the “likes” and the algorithmic boost, hoping to gain or expand their sphere of influence and make money.

I’m not suggesting hate-mongers are solely financially driven, but in previous articles, I’ve pointed out that the current division in this country has a monetary value underneath all the angst, fear, and ugliness sown purposefully to offend, divide, and grab power.

Imagine the potential outcomes if poor white people, who face the same economic, housing, and educational challenges that many minorities have come to accept as normal, united to form a coalition. That coalition might develop into a unified voting bloc, consolidating its influence. Faced with a cohesive voting bloc, a government intended to serve the populace would be obligated to address crucial matters: healthcare access, clean water, livable wages, affordable housing, and humane opportunities contingent on an individual’s character, inherent drive, and unwavering determination.

Push that aside. Just for now.

Racist content thrives in this wild west social media environment because it provokes. A post containing racial slurs or dehumanizing stereotypes generates comments, shares, and quote-tweets at rates that bland, balanced content simply cannot match. The algorithm doesn’t distinguish between engagement driven by agreement and engagement driven by outrage. It sees numbers going up and responds by pushing that content to more feeds.

The technical architecture makes this worse. Machine learning models trained on engagement data learn to identify patterns that predict viral spread. When racist content consistently generates high engagement, the model learns to promote similar content. This creates a feedback loop where algorithmic bias and hate speech amplification become self-reinforcing.
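
To make that loop concrete, here is a minimal sketch in Python. The post data and weights are invented for illustration; no platform publishes its real ranking code. The point is simply that a ranker optimizing raw engagement counts has no way to tell agreement from outrage.

```python
# Hypothetical illustration: an engagement-based ranker has no notion
# of WHY a post gets engagement, only THAT it does.

def engagement_score(post: dict) -> float:
    """Score a post purely on interaction counts, the platform's 'currency' of attention."""
    return (post["likes"]
            + 2.0 * post["comments"]   # replies weighted up: they keep users on-site longer
            + 3.0 * post["shares"])    # shares push the post into new feeds

posts = [
    {"id": "measured-analysis", "likes": 120, "comments": 10,  "shares": 5},
    {"id": "inflammatory-post", "likes": 90,  "comments": 400, "shares": 250},
]

# The inflammatory post tops the ranking even though much of its
# "engagement" is outraged replies and quote-posts.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
```

An approving reply and a furious quote-post increment the same counters, so the outrage-driven post outranks the measured one by an order of magnitude.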

A litany of “Karen” tweets and reels went viral: the woman, upset about mask mandates, who berated a store employee and demanded to speak to the manager; the lady on the airplane. You know which one, right? So many to choose from.

Consider how recommendation systems work. When you watch a video or read a post, the platform immediately suggests related content. If you pause on a post containing racial stereotypes, even to express disgust, the algorithm interprets this as interest. Your feed populates with similar content. Users who consume this content are grouped together, and the platform recommends their profiles to one another.
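
As a rough sketch of that dynamic, consider dwell time as an interest signal. This is again an invented illustration rather than any platform’s actual system: pausing on a post in disgust is indistinguishable, to the signal, from pausing in approval.

```python
from collections import defaultdict

# Hypothetical illustration: dwell time treated as an interest signal.
# Pausing to read in disgust looks identical to pausing out of interest.

interest_profile = defaultdict(float)

def record_view(topic, dwell_seconds):
    """Longer dwell time is read as stronger interest in the topic."""
    interest_profile[topic] += dwell_seconds

def recommend(candidates):
    """Suggest the candidate topic the profile currently scores highest."""
    return max(candidates, key=lambda topic: interest_profile[topic])

# A user pauses on a stereotype-laden post to read it, appalled.
record_view("racial-stereotypes", dwell_seconds=45)
record_view("gardening", dwell_seconds=5)

# The profile now steers the feed toward more of the same content.
print(recommend(["gardening", "racial-stereotypes"]))  # -> racial-stereotypes
```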

Groupthink? MAGA? A defining characteristic of cultish tendencies is the persistent replaying of visual content in an unending loop, regardless of how negative, critical, or fabricated it is. Truth dies in willful ignorance, and lies live clustered in memorable memes, hoaxes, and posts.

Stopping fake news once it’s out there is far harder than getting the truth to people. Accurate information simply cannot match the speed at which fake, emotionally charged material travels. By the time fact-checkers debunk a racist hoax or stereotype, it has already reached millions.

Platform executives often claim their algorithms are neutral, merely reflecting user preferences. This framing obscures a crucial truth: the choice to optimize for engagement rather than accuracy, community health, or user well-being is itself a value judgment with profound consequences. According to UC Berkeley research, weekly rates of hate speech on X rose 50% after the platform’s 2022 purchase, and that increase didn’t happen by accident. Policy decisions, moderation changes, and algorithmic adjustments created conditions where racist content could flourish.

The Impact of Echo Chambers on Racial Prejudice

Echo chambers don’t form naturally. They’re engineered through personalization algorithms designed to show users content that confirms their existing beliefs. For someone harboring racial resentments, this means gradual exposure to increasingly extreme viewpoints, each one normalized by the previous.

Radicalization follows a similar stepwise path. A user might first check out content voicing minor complaints about immigration rules. The algorithm registers that interest and serves up more of the same. Slowly but surely, the content turns more extreme, moving from policy talk to cultural anxiety and then to blatant racial hatred. Because users are never challenged along the way, the whole progression feels seamless.

The impact of echo chambers on racial prejudice extends beyond individual radicalization. These digital spaces create the illusion of consensus. When every post in your feed expresses similar views, when every comment section reinforces your beliefs, you assume everyone thinks this way. This perceived consensus emboldens users to express views they might otherwise keep private.

Group dynamics within these echo chambers speed up extremism. Users vie for status by expressing increasingly extreme positions. Moderate voices get drowned out or driven away, leaving only the most committed ideologues. The community develops its own vocabulary, inside jokes, and shared narratives that would seem bizarre or repugnant to outsiders but feel completely normal within the group.

Migration between platforms amplifies all of this. People who post extreme content don’t just go away when they’re kicked off popular sites; they switch to platforms with minimal moderation where they can speak their minds. These spaces become incubators for radical thinking, and what ferments there shows up on mainstream sites through shared content, planned campaigns, and returning users.

The psychological impact on those targeted by racist content within these spaces deserves emphasis. In a 2025 study, 79.1% of participants reported experiencing individual racial discrimination on social media. The constant exposure creates chronic stress, anxiety, and trauma that mirrors the effects of in-person discrimination while adding the unique burden of its inescapability in an increasingly digital world.

Impact of Racist Content on Social Media

The consequences of unchecked racist content ripple outward from individual harm to collective trauma, from psychological damage to physical danger. Dismissing online racism as mere words ignores the documented connections between digital hate and real-world violence.

The mental health toll on targeted communities is substantial and well-documented. Exposure to racist content triggers stress responses, increases anxiety and depression, and contributes to what researchers call racial trauma. This isn’t sensitivity or overreaction; it’s a physiological response to perceived threat that activates the same neural pathways as physical danger.

For African American communities specifically, online racism compounds historical trauma. As documented research shows, the history of discrimination against African Americans has roots dating back centuries, and the disparities in wealth, education, and healthcare today are outcomes of past discrimination. Social media racism doesn’t exist in isolation; it echoes and reinforces centuries of dehumanization.

Young people bear disproportionate exposure. A 2024 UCLA study found that 80% of children between 10 and 18 reported encountering hate speech on social media in the prior month, with 71% viewing hate speech related to race and ethnicity. These numbers represent children in crucial developmental stages who are absorbing messages about their worth, their place in society, and their peers’ attitudes.

The normalization effect carries significant long-term risks. Regularly seeing racist content in feeds, content that receives likes and shares and is sometimes highlighted by platforms, makes it a common, unremarkable part of online life. Repeated exposure dulls users’ reactions: what would once have provoked strong condemnation now registers as merely disagreeable, and positions once considered unthinkable start to be entertained.

Economic consequences stem from psychological harm. Studies show that exposure to racist content at work affects job performance, career progression, and income. Online harassment campaigns have forced individuals out of their careers, led to relocations, and ruined businesses. The digital and physical economies are too interconnected to believe online harm remains solely online.

Platform-specific patterns reveal where the problem concentrates. Research shows that Black emerging adults reported seeing racist content most often on Facebook at 66% and X/Twitter at 62%. These aren’t niche platforms; they’re the dominant spaces where public discourse occurs, where news spreads, where communities organize.

How the Video Targeting the Obamas Was Not Just a Personal Insult but an Attack on the Empirical Refutation of Eugenics

The viral spread of racist content targeting prominent Black figures carries significance beyond the immediate harm to those individuals. When racist attacks target people like the Obamas, they serve as proxy attacks on the very concept of Black achievement and capability.

Modern genetics has thoroughly discredited eugenics, the pseudoscientific belief in racial hierarchies and in controlling human breeding based on perceived genetic worth. The existence of accomplished Black individuals in positions of power, intellectual achievement, and social influence is living proof that eugenicist assumptions were false. Racist content targeting such figures attempts to reassert those discredited hierarchies.

The attacks function as intimidation directed at entire communities. When a former President and First Lady can be subjected to dehumanizing content that spreads virally with minimal consequence, the message to ordinary Black citizens is clear: no level of achievement will protect you. The psychological weight of this message, passed down through generations, contributes to what researchers identify as intergenerational trauma.

Understanding this context transforms how we interpret seemingly individual incidents of online racism. Each viral attack on a prominent Black figure reinforces narratives of Black inferiority that date back to slavery-era justifications. The content creators often know exactly what they’re doing: they’re taking part in a long tradition of using dehumanization to maintain social hierarchies.

The platforms that allow this content to spread become complicit in perpetuating ideologies that have caused immeasurable harm. When racist content targeting public figures generates millions of views before removal, if removal happens at all, the damage is already done. The message has been delivered, the trauma inflicted, the ideology reinforced.

Community responses to such attacks show both resilience and exhaustion. Constantly defending against dehumanization and asserting basic humanity in the face of denial drains emotional and psychological resources. This burden falls disproportionately on those already marginalized, creating a tax on participation in public life that white users simply don’t pay.

Platform Governance and Moderation Challenges for Extremist Content

Platforms face genuine difficulties in moderating racist content at scale. Billions of posts appear daily across major platforms, and the line between protected speech and harmful content isn’t always clear. But these real challenges have too often served as excuses for inadequate action.

The Limitations of AI and Human Oversight in Identifying Nuanced Racism

Automated content moderation systems struggle with context. A racial slur might appear in an educational discussion, a news report, or a work of art: contexts where removal would be inappropriate. The same system must also catch that slur when used as an attack. Teaching machines to distinguish between these uses has proven extraordinarily difficult.

Coded language presents even greater challenges. Racist communities deliberately develop a vocabulary that evades automated detection. Numbers, symbols, and seemingly innocent phrases gain racist meanings known only to insiders. By the time platforms update their detection systems, the vocabulary has already shifted.
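
A toy example shows why this cat-and-mouse game favors the evaders. The blocklist term below is a placeholder, not a real slur or real coded vocabulary; the sketch only illustrates that exact-match filtering breaks the moment a single character changes, while context remains invisible to it.

```python
# Hypothetical illustration: exact-match blocklists lose to trivial evasion.
# "slurword" is a placeholder token standing in for a real slur.
BLOCKLIST = {"slurword"}

def naive_filter(text: str) -> bool:
    """Flag a post only if a blocklisted token appears verbatim."""
    return any(term in text.lower().split() for term in BLOCKLIST)

print(naive_filter("that slurword should be flagged"))    # True: caught
print(naive_filter("that s1urword slips right through"))  # False: one swapped character evades
print(naive_filter("that sl-u-r-word evades it too"))     # False: punctuation evades

# Context is equally invisible: an educational or journalistic use
# gets flagged exactly like an attack.
print(naive_filter("the historical use of slurword is documented here"))  # True: wrongly flagged
```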

Human moderators face different limitations. The sheer volume of content makes a comprehensive review difficult. Moderators must make split-second decisions about complex posts, often without full context. The psychological toll of constant exposure to the worst content humans produce leads to high turnover, burnout, and trauma among moderation staff.

Cultural and linguistic variations compound these challenges. Racism manifests differently across cultures, languages, and communities. A phrase that’s clearly offensive in one context might be neutral in another. Platforms operating globally must somehow account for this variation while maintaining consistent policies.

The concern that AI language models might embed biases, perpetuating systemic discrimination and unequal treatment, extends to moderation systems themselves. If training data reflects existing biases, automated moderation may disproportionately flag content from marginalized communities while missing sophisticated racism from dominant groups. Several studies have documented exactly this pattern.
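
Researchers typically surface this disparity by comparing false-positive rates across groups on labeled data. The sketch below uses made-up counts purely to show the shape of such an audit; the numbers are not drawn from any real study.

```python
# Hypothetical illustration: auditing a moderation model for disparate impact.
# All counts below are invented for the sake of the example.

audit = {
    # group: (benign posts wrongly flagged as hateful, benign posts reviewed)
    "marginalized-dialect users": (180, 1000),
    "dominant-group users":       (40, 1000),
}

for group, (false_flags, total) in audit.items():
    rate = false_flags / total
    print(f"{group}: {rate:.1%} of benign posts wrongly flagged")

# 18.0% vs 4.0%: the model polices speech from the marginalized group
# far more aggressively, the pattern the studies mentioned above found.
```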

Appeals processes reveal additional failures. Users theoretically have recourse when content is incorrectly removed or incorrectly left in place. In practice, appeals often disappear into black boxes, with decisions made by undertrained workers following rigid scripts. The asymmetry between the ease of posting harmful content and the difficulty of getting it removed favors bad actors.

Resource allocation reflects priorities. Platforms invest billions in features designed to increase engagement and revenue while staffing moderation teams at levels clearly inadequate to the task. The moderation challenges for digital extremist content are real, but they’re also partly self-inflicted through chronic underinvestment.

Platform-specific reporting pathways vary in effectiveness:

Facebook and Instagram: Use the three-dot menu, select “Report,” and choose the most specific category available. “Hate speech” options are available, but may require navigating through multiple screens.

X/Twitter: The reporting process has changed repeatedly. Currently, flagging for hate speech triggers a review, but enforcement has become inconsistent since 2022.

TikTok: Report through the share menu. The platform’s younger user base makes reporting important.

YouTube: Flag videos through the three-dot menu beneath the video. Comments require separate reporting.

Specificity improves outcomes. Vague reports get ignored. Explaining exactly which community standards the content violates, quoting the specific language, and providing context increases the likelihood of action. Treat the report like a brief legal argument, not a complaint.

Collective reporting amplifies impact. When multiple users report the same content, platforms take notice. Coordinating with others who witnessed the same content, or connecting with organizations that track online hate, can transform individual reports into documented patterns that demand a response.

Following up matters. Platforms provide notifications about report outcomes, although these are often vague. If harmful content remains after reporting, escalate through alternative channels: direct messages to platform safety accounts, public posts tagging the platform, or reports to external organizations like the Anti-Defamation League or Color of Change.

Understanding the limitations prevents frustration. Platforms will not remove all content you find offensive. They will not act as quickly as you’d like. They will make decisions you disagree with. Effective reporting requires persistence without the expectation of immediate results.

Social media platforms have demonstrated their capacity to spread racist content at unprecedented scale and speed. They must now demonstrate equal capacity to prevent that spread. Users, regulators, and civil society must demand nothing less.