56% of active shooters who carried out mass attacks between 2000 and 2013 leaked their intent to commit violence prior to the attack.
According to a February 2019 report by Smart Insights, there were just over 3.4 billion social media users worldwide. When thinking about social media platforms, mainstream sites such as Facebook, Instagram and Twitter usually come to mind; however, information relevant to threat monitoring is more likely to be found on obscure, less regulated and anonymous platforms.
A report from Paladin Risk Solutions documents the views of risk professionals, research organizations, psychologists and journalists on the use of social media monitoring to predict and prevent mass violence.
The report highlights social media platforms that have been identified as useful for threat monitoring, including obscure, anonymous forums. It identifies and assesses key problems and limitations of automated monitoring, as well as the challenges of human-computer interaction.
How Has Social Media Been Used to Predict Mass Violence?
Following two mass shootings over one weekend in August 2019 in Dayton, Ohio, and El Paso, Texas, the director of the FBI issued an order for agents to conduct online threat assessments in an effort to prevent similar attacks. In many incidents of lone shootings and terror attacks, online threats of violence have preceded the incident. By the end of August 2019, over 20 people in the United States had been arrested for making threats of violence online.
One of these was 18-year-old Justin Olsen, who had threatened to carry out shootings at Planned Parenthood locations on the meme-sharing site iFunny under the user name “Army of Christ”. According to an FBI spokesperson, the posts had first been flagged by an FBI office in Anchorage, Alaska, and agents continued to monitor Olsen’s posts. A search of Olsen’s house two days after his arrest uncovered a collection of weapons: 15 rifles (including AR-15-style rifles), 10 semi-automatic pistols, over 10,000 rounds of ammunition, and a machete in the teenager’s car.
In the aftermath of the August 2019 mass shootings in Ohio and Texas, USA, and the March 2019 shootings at two mosques in Christchurch, New Zealand, social media threat monitoring has been the subject of much discussion. The objective of this report is to document views on what is needed from technology and human expertise to utilize threat monitoring of social media platforms.
Key Findings of the Report
- 56% of active shooters who carried out mass attacks between 2000 and 2013 leaked their intent to commit violence prior to the attack.
- Information relevant to threat monitoring is more likely to be found on obscure, less regulated platforms, rather than mainstream social media platforms.
- People who later commit a violent attack are more likely to use emotionally charged words, more likely to use direct pronouns, and less likely to use words about the external world in their online posts.
- Eight warning behaviours of individuals who could present a concern for lone-actor terrorism are Pathway (research, preparation), Fixation, Identification as an agent of a cause, Novel aggression (a small unrelated act of violence), Energy burst, Leakage, Direct threat, and Last resort (a declaration which indicates increased distress).
- 10 characteristics of individuals who could present a concern for lone-actor terrorism are Personal grievance and moral outrage, Framed by an ideology, Failure to affiliate with an extremist group, Dependence on the virtual community, Thwarting of occupational goals, Failure of sexual-pair bonding (evidence of failure to form lasting intimate relationships), Changes in thinking and emotion, History of mental disorder, Creativity and innovation (with regard to the tactical planning of an attack), and History of criminal violence.
- Possible limitations with automated threat monitoring tools include bias, human-computer interaction issues and accountability.
- Misinterpretation of non-verbal communication, foreign languages, slang, non-familiar cultural references, and non-standard English dialects are potential limitations of automated monitoring tools.
- Recommendations for human expertise to support an automatic threat monitoring system include multilingual analysts and linguists; investigative experience; ability to access critical data and resources; and knowledge of privacy laws, copyright acts and violations of social media platforms’ terms of service.
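The linguistic finding above (more emotionally charged words, more direct pronouns) can be illustrated with a minimal sketch. The word lists and function below are hypothetical toy examples for illustration only, not the report's methodology; real systems would rely on validated lexicons and statistical models rather than hand-picked keywords.

```python
import re
from collections import Counter

# Hypothetical toy word lists -- a production system would use
# validated lexicons, not these illustrative examples.
EMOTION_WORDS = {"hate", "rage", "furious", "revenge", "destroy"}
DIRECT_PRONOUNS = {"i", "me", "my", "you", "your", "we", "us"}

def linguistic_markers(post: str) -> dict:
    """Count simple linguistic features in a post.

    Returns raw counts of emotionally charged words and direct
    pronouns, plus the total token count for normalization.
    """
    tokens = re.findall(r"[a-z']+", post.lower())
    counts = Counter(tokens)
    return {
        "tokens": len(tokens),
        "emotion": sum(counts[w] for w in EMOTION_WORDS),
        "pronouns": sum(counts[w] for w in DIRECT_PRONOUNS),
    }

example = "I hate them and I will get my revenge"
print(linguistic_markers(example))  # {'tokens': 9, 'emotion': 2, 'pronouns': 3}
```

A pure keyword approach like this would miss slang, sarcasm, foreign languages and non-standard dialects, which is precisely one of the limitations of automated monitoring tools that the report identifies.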
Read the full report here.