
Can Artificial Intelligence Detect Dangerous People?

Prioritizing the power of human perception.

Key points

  • AI can be a component of violence risk assessment, in combination with human judgment.
  • Human perception involves intuition and instinct.
  • AI can suggest methods of detecting danger.

Are you living next door to an axe murderer? Don’t expect AI to know.

Image by tigerlily713 from Pixabay

The Power of Perception

When it comes to sizing up personality and character, AI cannot replicate human perception. Interacting with a stranger, acquaintance, or reclusive neighbor involves a mix of intuition and instinct. AI can sort statistics, retrieve research, and calculate at lightning speed, but it lacks a sixth sense. In a social setting, AI cannot cultivate chemistry or build rapport. It is great for performing tasks and retrieving data, but it lacks the human instinct and insight that are often critical to threat assessment work.

But can AI assist with risk assessment? Absolutely. It can, for example, help a threat assessor make proactive predictions.

AI Can Suggest Methods of Detecting Danger

As an illustration, I asked ChatGPT how AI can help detect dangerous people. Among its answers were the following categories, which I paraphrase below:

Facial Recognition: Analyzing facial features and matching them against databases of individuals of interest, including using this technology in public spaces and high-security areas to identify people with criminal histories or on watchlists.

Video Surveillance: Analyzing live footage in real time, or reviewing recorded footage, to detect suspicious conduct, identify weapons, or recognize other potentially dangerous behavior.

Natural Language Processing: AI can analyze written or spoken language to identify threats or indicators of dangerous behavior. Of particular relevance in the modern internet era, this could include monitoring social media platforms to detect signs of violence or extremism (see the sketch following this list).

Behavioral Analysis: Using AI algorithms to flag abnormal behavior or deviations from “expected norms.” This type of analysis can be useful in public places such as airports, or within the workplace.

Data Integration and Analysis: Aggregating and analyzing data from multiple sources, which could include criminal record databases, travel history, and social media activity.
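
To make the natural language processing category concrete, here is a minimal, hypothetical sketch of keyword-based text screening. The watch phrases are invented assumptions for illustration, not a real threat lexicon, and a match does nothing more than route the post to a human reviewer:

```python
# A minimal, hypothetical sketch of NLP-style text screening: scan
# posts for watch phrases and route matches to a human reviewer.
# The phrase list below is invented for illustration only.

WATCH_PHRASES = ["make them pay", "bring a weapon", "hurt them"]

def flag_for_review(post: str) -> bool:
    """Return True if the post contains any watch phrase (case-insensitive)."""
    text = post.lower()
    return any(phrase in text for phrase in WATCH_PHRASES)

posts = [
    "Had a great day at the park with the kids.",
    "One day soon I will make them pay for this.",
]

for post in posts:
    if flag_for_review(post):
        # The AI only surfaces the post; a human decides what it means.
        print("Flag for human review:", post)
```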

ChatGPT incorporated a well-founded disclaimer into its answer, recognizing that while AI can function as a valuable tool in detecting potential threats, “Human judgment and intervention should always be involved in making final decisions based on the outputs generated by AI systems.”

Agreed. So if AI is to be used as a risk assessment investigative tool, the key is knowing what to delegate and what to do ourselves.

Using AI in Violence Risk Assessment

Benjamin L. Spivak and Stephane M. Shepherd (2021) researched the use of AI in violence risk assessment.[i] They adopt a definition of AI as an algorithm capable of “performing some function previously thought to be exclusive to human intelligence.” They note that interpreting AI in this limited way may make it merely a new term for a long-established practice.

In addressing the question of transparency, Spivak and Shepherd describe the factors supporting human judgments of risk as “opaque,” noting that a clinician's risk assessment is likely influenced by processes outside conscious awareness, or by processes that defy adequate explanation. Along these lines, they observe that although a clinician can explain how a risk classification was selected, it is debatable whether that explanation accurately reflects the process by which the classification was actually reached.

Spivak and Shepherd explain that, unlike human judgment, AI-based risk assessment is grounded in math, which makes it possible to explore whether a risk classification would change if the subject were a different age or lacked a criminal record. They caution that such answers are not reliable where the judgment about risk involved human discretion.
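
To make that counterfactual point concrete, here is a toy sketch assuming an invented points-style scoring formula; the weights and inputs are hypothetical, not taken from Spivak and Shepherd or from any validated instrument. Because the score is explicit arithmetic, it can be rerun with a single input changed, which is exactly what cannot be done with a clinician's intuitive weighting:

```python
# A toy, invented points-style risk score. The weights and inputs are
# hypothetical, not drawn from any validated instrument; the point is
# only that a mathematical score can be rerun with one input changed.

def risk_score(age: int, prior_convictions: int) -> float:
    # Invented weights: youth and prior convictions raise the score.
    return max(0.0, (40 - age) * 0.02) + prior_convictions * 0.25

baseline = risk_score(age=25, prior_convictions=2)    # 0.30 + 0.50
older = risk_score(age=45, prior_convictions=2)       # same record, older
no_record = risk_score(age=25, prior_convictions=0)   # same age, no priors

print(f"baseline:              {baseline:.2f}")
print(f"if 20 years older:     {older:.2f}")
print(f"if no criminal record: {no_record:.2f}")
```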

Combining research with common sense, threat assessment professionals can use AI proactively, alongside the human power of perception, to prevent targeted violence.

References

[i] Spivak, Benjamin L., and Stephane M. Shepherd. 2021. “Ethics, Artificial Intelligence, and Risk Assessment.” Journal of the American Academy of Psychiatry and the Law 49 (3): 335–37. https://search-ebscohost-com.libproxy.sdsu.edu/login.aspx?direct=true&db=psyh&AN=2021-92340-006&site=ehost-live&scope=site.
