Can AI Reduce Violence from School Shooters?

Humans alone may be unable to solve a problem that humans create.

Posted May 19, 2018

The problem with early warning

According to a Secret Service study of school shooters, which examined 37 attacks by 41 attackers since 1974, 75% of school shooters informed someone—usually a peer—of their plans in advance. Often, school staff were also aware of warning signs exhibited by students who eventually became school shooters. Princeton sociologist Katherine Newman, who has studied school shootings in depth, said, “They [school shooters] never explode spontaneously; they usually let out hints many months in advance.”

And yet, in most cases, according to the Secret Service, no one reported such “red flags” in advance to authorities.

Why AI might help

Although the reasons that people don’t warn authorities vary, ranging from peer pressure (snitches aren’t cool), to an abundance of “false alarms” (lots of kids vent), to cognitive biases such as the normalcy bias (we don’t plan for or react to problems that have never happened to us), to learned helplessness (the bureaucrats won’t do anything anyhow), all of the people who keep danger signs to themselves have one attribute in common: they are human.

And humans are—and will always be—subject to social pressures, cognitive biases, beliefs, and other forces that make them reluctant to come forward with accurate concerns about possible school shooters. Furthermore, even if more people did come forward, authorities would likely be overwhelmed with false alarms and thus be slow to react to genuine threats. Finally, even peers or teachers of school shooters who hear advance warning of an attack often don’t know all of the factors that predict lethality—such as ready access to firearms—that would help them differentiate genuine threats from false alarms.

So here’s a radical idea: let’s apply Artificial Intelligence (AI), along with recent advances in digital privacy protection, to have computers, not humans, generate early warnings of school shootings.

Computers running AI algorithms are not subject to social pressures and can access much more information about potential shooters—such as access to firearms—than peers or teachers can. Therefore, given sufficient data and “training,” AI systems may achieve reasonably high “hit rates” with low “false alarm” rates. Moreover, new privacy technologies can protect civil liberties while the computers are crunching their numbers.

Before going into the specifics of how AI, together with privacy technologies, can—theoretically at least—reduce deaths from school shootings, I need to acknowledge that no technology, however advanced and accurate, can solve the problem all by itself, because no technology will address the deep cultural, anthropological, legal, and political roots of the problem.

Technology, at best, only offers hope for ameliorating some of the symptoms of deeply rooted problems such as school shootings.

That said, if such “symptomatic” treatments can save even one life, they’re worth considering.

How AI, with parallel advances in digital privacy, might put a dent in the problem

AI is getting pretty good at a task computer scientists call classification: Does a photo have a cat anywhere in it or not? Is a caller to a customer support center angry or not? Is the voice on a phone call male or female, a native speaker or a non-native speaker? Will an applicant for auto insurance likely “churn” (switch to another carrier) or not? AI has grown proficient at all of these tasks.

It is entirely possible—even likely—that AI algorithms could soon also get pretty good at “classifying” which students are genuine threats to carry out lethal rampages and which students are not. The AI would be “fed” as much diverse data as possible on both shooters and non-shooters and would be “taught” to discriminate between real threats and false alarms (a minimal sketch of such a classifier follows the list below). Examples of data would include:

  • Postings on social media (by both potential shooters and by their peers talking about potential shooters).
  • School surveillance camera footage (for example, Dr. Paul Ekman’s work on micro-expressions suggests that it may be possible to sense lethal intent from facial expressions).
  • Registries of gun ownership by students’ families or relatives, cross-correlated with student body rosters (most shooters had ready access to firearms, and often an obsession or fascination with them).
  • Anonymized reports and concerns from peers and school staff.
  • Demographic data on students (white, unathletic, marginalized males with above-average grades in rural areas comprise most shooters, according to the Secret Service).
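
To make the idea concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of classifier described above. The feature names, the toy data, and the labels are all hypothetical illustrations, not a working threat-detection model; the point is only to show the shape of the classification task and how “hit rates” and “false alarms” would be measured.

```python
# Minimal sketch of a binary "threat vs. no threat" classifier.
# All features and data below are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Hypothetical per-student features, e.g.:
# [violent_post_score, peer_concern_reports, firearm_access, marginalization_score]
rng = np.random.default_rng(0)
X = rng.random((500, 4))
# Synthetic labels (1 = genuine threat, 0 = not), for illustration only.
y = (X[:, 0] + X[:, 2] + 0.3 * rng.standard_normal(500) > 1.4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# "Hit rate" = real threats caught; "false alarm rate" = non-threats flagged.
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"hit rate: {tp / (tp + fn):.2f}, false alarm rate: {fp / (fp + tn):.2f}")
```

In practice, the hard part would not be the model but the data and the base rates: genuine school shooters are extremely rare, so even a very accurate classifier would flag mostly false alarms unless it were carefully calibrated and combined with human judgment.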

An enormous problem with using any of the data sources just listed is ensuring the protection of personal privacy and civil liberties. Any organization, be it a school or a police force, that collected and correlated such data would, on its face, be acting like Orwell’s Big Brother.

But thanks to emerging technologies with exotic names like homomorphic encryption and secure multi-party computation, it is now possible to encrypt all of the data sources mentioned above—at the point of collection—and perform AI computations on them while they remain encrypted. Thus, at no point during the data collection, transmission, storage, and analysis cycle would any human—or computer, for that matter—know to whom the collected information pertained.
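
As a toy illustration of what “computing on encrypted data” means, the sketch below uses the open-source python-paillier library (an additively homomorphic scheme) to compute a simple weighted risk score without the analysis server ever decrypting the inputs. The features, weights, and workflow are hypothetical; this is a sketch of the concept, not a proposal for an actual scoring model.

```python
# Toy sketch of homomorphic computation with the python-paillier library.
from phe import paillier

# Key pair held by the privacy authority; the analysis server never sees it.
public_key, private_key = paillier.generate_paillier_keypair()

# Data collectors encrypt each student's features at the point of collection.
features = [0.2, 1.0, 0.7]   # hypothetical: post score, firearm access, peer reports
weights = [0.5, 0.3, 0.2]    # hypothetical model weights (public)
encrypted = [public_key.encrypt(x) for x in features]

# The server computes a weighted risk score without ever decrypting the data.
encrypted_score = sum(w * e for w, e in zip(weights, encrypted))

# Only the key holder can decrypt the final score.
print(private_key.decrypt(encrypted_score))  # approximately 0.54
```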

Only in the rare case that a classification algorithm triggered a red flag would a school (or possibly a court) be notified that attention should be paid to a particular student (without exposing any of the personal data that gave rise to the AI’s alert), so that a discreet investigation, and ultimately an intervention, might be planned (such as offering counseling or checking high-risk students for weapons upon entering school).

This unlocking of identity could be based on evidence presented by the AI to a judge, for example, a summary of the reasons for concern (violent social media posts, access to weapons, comments of peers) without revealing specific details even to the judge. Only if the judge believed it was warranted would the judge then use special “digital keys” (available only to the judiciary) to unlock the student’s identity in order to notify the school and parents.
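
One way to picture those “digital keys” is a simple key-escrow arrangement: every record carries the student’s identity sealed under a public key held only by the judiciary, so the AI pipeline and the school handle nothing but an opaque token, and the judge’s private key is required to unseal it. The sketch below uses Python’s cryptography package; the record identifier and the decision step are hypothetical placeholders.

```python
# Toy sketch of judiciary-held "digital keys" (key escrow via public-key encryption).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Key pair generated and held by the judiciary; no one else holds the private key.
judge_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
judge_public_key = judge_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# At collection time, the student's identity is sealed; the AI system and the
# school see only this opaque ciphertext alongside any risk flag.
sealed_identity = judge_public_key.encrypt(b"student-record-12345", oaep)

# Only if the judge decides the AI's summary warrants it is the identity unsealed.
judge_decides_warranted = True  # placeholder for the judge's decision
if judge_decides_warranted:
    identity = judge_private_key.decrypt(sealed_identity, oaep)
    print(identity.decode())
```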

The AI’s decision to notify a judge, as suggested above, would not be influenced by social pressure, cognitive biases, learned helplessness or other factors that cause humans to notify or not notify authorities of a threat.

It’s true that using AI to spot dangerous students poses many challenges. For instance, can a public institution, such as a school or the criminal justice system, take action based only upon what a student might do in the future rather than what they have actually done?

All of the questions surrounding AI and school shooters are thorny, difficult, and fraught with ethical challenges, and some will say that using AI could be a dehumanizing approach to reducing deaths and injuries from school shootings.

But nothing robs a student of their humanity—or rights—more than being killed by a fellow student.

References

Farr, K. (2017). Adolescent Rampage School Shootings: Responses to Failing Masculinities by Already-Troubled Boys. Gender Issues. DOI: 10.1007/s12147-017-9203-z

“Interim Report on the Prevention of Targeted Violence in Schools,” October 2000, U.S. Secret Service National Threat Assessment Center.

https://www2.ed.gov/admins/lead/safety/preventingattacksreport.pdf

http://www.popcenter.org/problems/bullying/PDFs/ntac_ssi_report.pdf

http://mynorthwest.com/5619/common-traits-of-all-school-shooters-in-the-us-since-1970/?

https://en.wikipedia.org/wiki/Homomorphic_encryption

https://www.youtube.com/watch?v=0re40JS0tJY
