To End Mass Killings — Control Guns or Incendiary Speech?
How to manage “gun control” on social media platforms
“You see pictures of Amerie and her friends on the news. You should know they didn’t get buried looking sweet and happy like their photos. Some are missing limbs. Some had holes in their tiny chests. You might mistakenly imagine a funeral where a child lies peacefully in a colorful coffin. But make no mistake, there is no peace in the death of a child by a weapon of war.”
- Dr. Roy Guerrero, the sole pediatrician in Uvalde, Texas
Confession: I forgot to post this piece, which I wrote almost a month ago. The day I wrote it, two prominent international law blogs published on violent extremist speech, which was no surprise in light of the shooting at Club Q, the LGBTQ nightclub in Colorado Springs, on November 20, 2022. But some topics are evergreen: there is a mass shooting nearly every week, and two in the last week alone, one in Rochester, where people were filming a rap video meant to commemorate the shooting death of a 19-year-old, and another outside a birthday party in Portage Park, Chicago. To top it off, the House is holding hearings on gun violence, including the Uvalde school massacre. This year may be the deadliest yet for mass shootings in the U.S., which leads the developed world in this macabre statistic.
Conventional news coverage of these frequent events tends to devolve into the well-worn ruts of the gun control argument (do guns kill people, or do people?), or into the regulatory loopholes and bureaucratic failures that let yet another disturbed person act out their murderous fantasies. This never seems to lead us to policies that reduce mass shootings, just further tinkering around the edges of gun control law and a lot of hand-wringing.
It might instead be useful to look at some bedrock social tensions between this country’s passionate devotion to free expression and its equally passionate devotion to personal weapons for self-defense, neither of which has a real counterpart in most other democratic and developed nations.
The articles in Just Security and Lawfare, while quite different from each other, do just that. In Lawfare, a group of authors from the defense and security consultancy Valens Global argue that a new typology is needed to capture the rising tide of extremists whose beliefs defy pigeonholing in any single ideology. They call this a Composite Violent Extremism framework, although you could just say “a wide variety of angry and unstable men who are triggered by multiple factors and sometimes act in concert with more than one group.”
Their catchy CVE acronym is no doubt meant to make you think of another CVE, Countering Violent Extremism, the set of government practices for preventing ‘extremism’ and ‘radicalization’ (themselves malleable and variously defined tags). The new Composite Violent Extremism and the old Countering Violent Extremism frameworks are both somewhat murky in their policy implications, especially for human rights, raising questions such as how social media companies should censor, how police should monitor suspects, how governments should infiltrate and disrupt cause organizations, how those vulnerable to extremism should be identified, and which interventions are appropriate rather than rights-abusing.
Just Security, meanwhile, published a thoughtful essay by the head of the Dangerous Speech Project, tracking the rise in unapologetically violent rhetoric by public figures and in popular support for political violence. When incendiary or “dangerous” speech proliferates, the author argues, actual violence, and indeed mass violence, is not far behind.
Both pieces assume that speech (particularly online speech and social media) has a direct causal impact on people who are susceptible to acting out hatreds and fears in the real world. But such a causal relationship has long been studied and disputed, in debates going back to moral panics over newspapers, radio, and television.
Social media may have had a more dramatic and traceable effect in priming people for violence, given its algorithmic disposition to push “engaging” material (read: sensational, incendiary, and often hateful), the ease of distributing inflammatory falsehoods on platforms, and the interactive nature of those platforms, which lets reinforcing communities form around violent viewpoints.
Social media companies, which control the data, are not very keen on opening their files to independent research on this question, but a growing number of studies have shown that social media hate speech does tend to correlate with real-life hate crimes. Quite a bit of the provocative speech that seems, anecdotally, to impel people to violence involves a narrative of imminent harm about to be wrought on innocents by sub-human perpetrators (take your pick: pedophiles, homosexuals, Jews, blacks, liberals, racists, elites, police, election-stealers…). Valorizing gunmen, praising violent groups, and glorifying crime are also thought to play a part in preparing people to take action.
What is to be done, if free speech is to be preserved? The Valens Global team is a bit reticent, leaving the details of the implied “CVE” tactics, such as surveillance, interventions, and moderation or censorship, to their governmental clients. The Dangerous Speech Project, in contrast, focuses on persuasive measures: getting influencers to give up incendiary rhetoric and inoculating audiences against disinformation and propaganda.
Freedom of expression, though core to democratic societies, is not absolute. Private platforms have great leeway to set rules, and little incentive to act as though all their users will put violent words into action. Yet even Elon Musk acknowledges (when he’s not dismantling Twitter) that free expression does not mean turning the platform into an unregulated “hellscape.” Some moderation and content curation is required, both to reduce the reach of violent speakers and to suspend, where necessary, the unregenerate but highly influential ones. Prosecuting those who actually incite violence is another important societal response: even in the United States it is possible to win an incitement case, and in other democracies the concept of incitement to discrimination or violence is both broader and more nuanced. Debunking and pre-bunking disinformation and propaganda can do much, especially if liars are appropriately stigmatized.
That’s how to manage “gun control” on a public platform. As for finding potential shooters, measures to predict and intervene are very difficult to frame without invading privacy, discriminating against disfavored groups, and chilling speech rights. A lot of people just like to mouth off, and free expression protects bigoted, false, and reprehensible ideas, so long as they don’t incite crimes. Understanding the likely effect of online media on various types of individuals is difficult, if not impossible, and framing non-discriminatory, rights-respecting interventions is delicate work. Community leaders, psychologists, and educators have all struggled with this problem, with very mixed results.
In the end, Just Security’s author, Susan Benesch, identifies the real project: not finding new ways to predict who will become violent, but shifting the norms of general social discourse to reduce and stigmatize incendiary speech. This is the hardest measure of all, but the one with the greatest potential to suppress violence and preserve rights. Free expression does not mean saying anything you want, in any way you like, without regard to likely social consequences; it is only freedom from the heavy hand of the law. We are right to hold those with the most social influence to a high standard, and doing so is a whole-of-society endeavor, not just a job for social media companies or government officials.