Since the beginning of her career, Dr Olivia (Liv) Brown has been researching how to prevent terror attacks and improve the response to them. The focus of her work is how groups interact in various contexts. This has many uses – from helping emergency services communicate better, to deciding whether someone exhibiting worrying behaviour online should be more closely monitored.
In recent years, her work has become ever more relevant. Since 2014, there has been a 320% increase in right-wing terror attacks in western countries. In the UK, a cross-party group of MPs and peers raised ‘serious concerns’ about the growing threat posed by the far-right in a recent report. They warned that more people, primarily young men, are being radicalised online, often through gaming sites or message boards.
When does online hate become offline violence?
The report showed how growing levels of global inequality, combined with an increasingly polarised social discourse, mean that vulnerable or disaffected people are easily exploited by groups looking to recruit.
The Covid-19 pandemic has also contributed to the problem. Far-right groups have capitalised on fear and uncertainty to spread conspiracy theories and generate anti-government, anti-authority sentiment.
Worryingly, younger people are increasingly getting involved in right-wing spaces through platforms like Instagram and Telegram.
Research with impact
Liv joined the School of Management as a postdoctoral researcher to work on a project funded by the Centre for Research and Evidence on Security Threats (CREST) which looked at how to link online to offline behaviours.
This work is central to her current research, which investigates how online extremism spills over into real-world violence. Liv is developing a model to identify whether someone is likely to commit a terrorist attack, based on their digital interactions.
Impact is central to Liv’s approach to research. She originally wanted to become a clinical psychologist, but a module on counter-terrorism during her psychology degree changed everything. Her interest in counter-terrorism stems from the importance of the work, and from how she could use her skills to make a difference.
'For me research has always been about doing something useful. While research for research’s sake is important, I’m personally more motivated by real-world outcomes. How can somebody use what I’ve done? How will that impact on tangible things in the real world?'
She was also heavily influenced by the ethos of CREST, who funded both her PhD and postdoctoral research.
'Working with CREST really affected how I approach my work, as they are so outcome focused. It makes me consider how this research can be useful to practitioners, and not just seeing impact as being published in a top journal.’
Creating a more intelligent algorithm
An understanding of the real-world application of research is essential to Liv’s work on online extremism. It allows her to tailor what she’s doing to plug existing gaps in the way security services approach radicalisation.
'My main concern is to translate what we’re doing into workable tools or resources that the security services can use. Resources are finite, and they can’t monitor everyone. In fact, they shouldn’t, for ethical reasons. We want to help them establish clearer markers or risk signals for what might turn into genuine violence so that they can prioritise.’
Existing algorithms used to identify terror threats aren’t as helpful as they could be. They detect harmful language, but it’s extremely difficult to deduce which people are more likely than others to commit a violent offence.
Liv says: ‘There’s lots of hateful content online, some very extreme, but only a tiny proportion of those people will actually do anything offline. Not to mention that people planning to do harm are often good at avoiding key terms that would normally be flagged as concerning.’
A mixed methods approach
To address this, Liv and her colleagues combine data science with qualitative methods. They analyse digital information through a psychological lens and use that to create a more intelligent predictive model.
They started by understanding how convicted far-right terrorists interacted online. This involved a lot of investigative work – reading court records, news reports and scouring the internet – to find these terrorists' online data.
They then compared that to other hateful or extremist content online, which meant the laborious, manual analysis of hundreds of thousands of posts. This revealed what’s different about the way that actual offenders communicate.
‘By going through post by post you generate an understanding of what’s really important. You understand how and why people are using terms in different contexts, rather than just using a machine to track patterns.’
Future-proofing the research
As with any research involving technology, there is a worry that conclusions will quickly become outdated. The difficulty with right-wing spaces – and online spaces in general – is that they evolve. Changes to the way that far-right groups use language, and the websites they use, mean that Liv and her colleagues may need to update the model in future.
‘It’s a technological race as well as a research question,' Liv acknowledges. Luckily, she and the team were aware of this from the beginning, and their methods can be replicated easily.
'We’re always trying to think about how we can make what we do useful even if the landscape shifts,’ says Liv.
Resilience in the face of hate
Though Liv’s mixed methods approach to the project was invaluable in creating a more intelligent algorithm, it also meant that she was exposed to some of the internet’s most toxic and extreme content.
Liv found trawling through all the racism, homophobia and hate difficult. But she was able to stay strong by reminding herself of the value of the work.