
Using artificial intelligence in the fight against human trafficking

PhD student Thao Do is researching the capabilities and risks of using AI to combat human trafficking, with the aim of promoting responsible AI use and empowering survivors.

As well as the technical challenges, the use of AI surveillance raises legal and ethical concerns that must be addressed.
‘If you are not careful in investigating the technology you’re using, you can wind up doing more harm than good.’
Thao Do PhD student in the Department of Computer Science

Stolen from your home. Forced into slavery in a foreign country where you don’t speak the language. Your suffering is invisible to the people you meet. This is the reality for far too many people the world over, trapped by the web of human trafficking.

Human trafficking is the involuntary movement of human beings for forced labour or sexual exploitation. It affects millions of people worldwide. Artificial intelligence (AI) technologies, now being adopted at an ever-increasing pace around the world, are being developed and deployed in the fight against human trafficking. However, the effectiveness of using AI to stop human trafficking remains uncertain. The technology also raises ethical and legal concerns, risking further harm to survivors of trafficking.

Thao Do is a doctoral candidate in the interdisciplinary doctoral training centre for Accountable, Responsible and Transparent Artificial Intelligence, better known as ART-AI, at the University of Bath. She is investigating the use of AI systems to stop human trafficking. She has previously worked to combat human trafficking with Action Pour Les Enfants (APLE) Cambodia, a non-governmental organisation dedicated to stopping the exploitation of children. This issue is particularly poignant for Thao due to the high rate of human trafficking in her native Vietnam.

How AI can help

Traditionally, law enforcement agents will manually analyse message histories shared in online chatrooms or between traffickers, or investigate video footage and photos for signs of trafficking.

This is very labour-intensive, often ineffective, and places an enormous psychological burden on the people regularly exposed to the horrors of human trafficking. Police agencies struggle to keep up with the demand.

Thao’s research focuses on technologies like natural language processing and computer vision. These are forms of AI that examine bodies of text and images from photos and videos, respectively. For example, AI systems can be used to analyse surveillance footage from hotels to detect known traffickers or missing persons, comparing hundreds of faces in moments.
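The face-comparison idea can be illustrated with a minimal sketch. In a real system, a trained face-recognition model would convert each face image into an embedding vector; here the embeddings, the gallery size and the similarity threshold are all hypothetical stand-ins, used only to show how a query face can be compared against a watchlist in moments.

```python
import numpy as np

# Hypothetical illustration: embeddings would normally come from a trained
# face-recognition model; here they are random vectors for demonstration.
rng = np.random.default_rng(0)

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(query, gallery, threshold=0.8):
    """Return indices of gallery faces whose similarity to the query
    exceeds the threshold -- candidate matches for human review."""
    return [i for i, g in enumerate(gallery)
            if cosine_similarity(query, g) >= threshold]

# A 'watchlist' of 100 hypothetical 128-dimensional face embeddings.
gallery = [rng.normal(size=128) for _ in range(100)]

# A face seen in surveillance footage: a noisy copy of gallery entry 42.
query = gallery[42] + rng.normal(scale=0.1, size=128)

print(find_matches(query, gallery))  # should contain index 42
```

Note that the system only flags candidates; the article's point stands that a human investigator must act on each flag, since a false match has real consequences for the person involved.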

If AI technologies can perform these tasks, they could reduce the burden on investigators and increase effectiveness. AI technologies can even be used to detect trafficking occurring out at sea: by analysing satellite images, unusual boat movements that may indicate the presence of traffickers can be detected and brought to investigators' attention.
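One simple movement pattern that might be flagged from satellite or positional data is a vessel loitering far offshore. The sketch below is purely illustrative: the tracks, the hourly sampling interval and the drift threshold are assumptions, not details of any real system described in the article.

```python
import math

# Hypothetical sketch: flag vessels that sit nearly stationary at sea,
# one movement pattern that could warrant human review.
# Each track is a list of (lat, lon) fixes taken one hour apart.

def haversine_km(p1, p2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def is_loitering(track, max_drift_km=2.0):
    """True if every hourly movement is below max_drift_km:
    the vessel is effectively stationary."""
    return all(haversine_km(a, b) < max_drift_km
               for a, b in zip(track, track[1:]))

transiting = [(10.0, 109.0), (10.0, 109.2), (10.0, 109.4)]     # ~22 km/hour
drifting = [(10.0, 109.0), (10.001, 109.001), (10.002, 109.0)]  # ~0.15 km/hour

print(is_loitering(transiting), is_loitering(drifting))
```

A real system would combine many such signals, and, as the article goes on to argue, each flag is only a prompt for investigation, not evidence of trafficking.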

However, Thao is quick to point out some of the fundamental challenges with using AI for this purpose.

The challenges of building AI to stop trafficking

Most modern AI systems require a huge amount of example data to work effectively. By learning which patterns in the data are signs of trafficking, a system can automatically flag suspicious cases for a human investigator to act on. However, the nature of AI means that it can struggle to adapt to different locations, where the data it sees may differ from the data it learned from. Even small, seemingly invisible variations in what the AI sees can potentially break the technology.
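This fragility can be shown with a toy example. Below, a deliberately simple nearest-centroid classifier is fitted on synthetic data standing in for one region, then evaluated on data whose distribution has shifted, standing in for deployment elsewhere. The data, regions and classifier are all invented for illustration; the point is only that accuracy drops when the data no longer matches what the model learned from.

```python
import numpy as np

# Toy illustration of distribution shift: a model fitted on data from one
# region degrades when the data it sees at deployment looks different.
rng = np.random.default_rng(1)

def accuracy(centroids, X, y):
    """Nearest-centroid prediction accuracy."""
    preds = np.array([np.argmin([np.linalg.norm(x - c) for c in centroids])
                      for x in X])
    return float((preds == y).mean())

# "Region A" training data: two well-separated classes in 2D.
X_a = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y_a = np.array([0] * 200 + [1] * 200)
centroids = [X_a[y_a == k].mean(axis=0) for k in (0, 1)]

# "Region B" data: the same two classes, but shifted -- the patterns the
# model learned no longer line up with what it now sees.
X_b = np.vstack([rng.normal(2, 1, (200, 2)), rng.normal(6, 1, (200, 2))])
y_b = y_a.copy()

print(f"same-region accuracy:    {accuracy(centroids, X_a, y_a):.2f}")
print(f"shifted-region accuracy: {accuracy(centroids, X_b, y_b):.2f}")
```

In this toy setup the shifted-region accuracy falls sharply, mirroring the article's concern that a tool trained in one part of the world may quietly underperform in another.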

Much of the AI technology built to combat trafficking is trained on data from North America and Europe. This can reduce AI's effectiveness in major areas of human trafficking in Asia and Africa, because the technologies have not been trained to assess the relevant images, patterns and languages for these diverse contexts. Technologies may therefore be much less effective where they could have the most impact, and may systematically miss some groups of victims, perpetuating injustice against those groups.

Legal and ethical concerns

Ensuring that these AI systems work across the globe isn’t Thao’s only concern with using AI technologies to stop trafficking. As well as the technical challenges in building the technology to work across borders, there are legal barriers and ethical concerns to address.

Many jurisdictions take a cautious approach to using AI technologies in surveillance. Some countries have banned it outright. While this is important for maintaining civil liberties, it complicates the use of AI systems to detect active trafficking or identify victims in need of rescue. Other forms of the technology that could be used by the police to counter trafficking, such as monitoring social media platforms for warning signs of exploitation, face similar legal hurdles.

These concerns highlight important questions about balancing the benefits and risks of AI technologies. While surveillance may, for example, be able to locate people forced into sex work, monitoring an already marginalised group in this way risks harming their privacy. An AI system may mistakenly flag someone as trafficked when they are not, intruding on that person's life and drawing resources away from those who do need help.

Evaluating the technology

AI systems for stopping trafficking are mostly developed in private industry. This can put up barriers to knowing what data and algorithms have gone into the technology, making it difficult to understand the design decisions behind them. Another aspect of Thao’s research is to better understand the design and development of these tools, to aid evaluation of their likely effects and possible implications when applied by police or NGOs, for example.

The general lack of transparency in AI system development can hide biases that have crept into the system, the kind that could lead to victims from particular backgrounds being consistently overlooked and further victimised. It also makes it harder to know how the system has been tested, preventing a robust assessment by those who need to decide whether to use it. Worse than simply failing to work, a faulty system could cause more harm than using no system at all.

Thao's PhD involves engaging stakeholders in anti-trafficking, including professionals from law enforcement, non-governmental organisations like APLE, and survivors of trafficking themselves. This research can help address the many challenges of using AI technologies for anti-trafficking by refining how these systems are developed and deployed.

Thao’s goal

Ultimately, Thao wants to broaden who has a say in how these tools are designed, beyond the technical developers, to create an ecosystem of expert knowledge from all stakeholders. The dream of her research is to support the design of AI systems that work well in all domains, are robustly tested to ensure this is the case, and empower survivors of trafficking to use their knowledge to prevent further harm.

Asked for her thoughts on the future of AI being used to stop trafficking, Thao offered a cautiously ambitious view. She hopes to see more innovation focused on other parts of the world, but insists on a careful approach. While the technology no doubt holds promise for combating human trafficking, there are many careful considerations to make and discussions to be had first.

This article was written by Jack McKinlay, a postgraduate research student in the Department of Computer Science. It was produced as part of the Science Communication Ambassador project in the Faculty of Science.
