HUMAN trafficking is the movement of human beings, mostly for forced labour, sexual exploitation, organ harvesting or forced criminality, and it affects millions of people worldwide.
With the rapid growth of technology and multimedia, traffickers have increasingly turned to social media and other online platforms to locate, groom, and recruit potential victims, advertise and sell them, transfer illicit funds, and even monitor their victims.
Additionally, traffickers exploit internet technologies to directly harm victims, leading to the emergence of “cyber-trafficking”, a new form of trafficking in which victims are exploited through online means.
Cyber-trafficking is further exacerbated by advances in Artificial Intelligence (AI) technologies, which allow traffickers to manipulate appearances and identities, making it easier for them to evade detection and continue exploiting people.
Some traffickers use AI to automate and expand their scope of operations, targeting vulnerable individuals through online platforms and social media.
For example, AI-powered social media algorithms enable traffickers to reach potential victims with deceptive ads (e.g., fake job offers and romantic schemes) while maintaining anonymity.
They can also use deepfake technology to create falsified images or videos of victims for online commercial sex markets and pornography. These images are often graphic, disturbing, and non-consensual.
Traffickers also use deepfake images and videos to deceive victims by impersonating a prominent figure to gain their confidence and trust. This method is often used in job scams and sex trafficking.
While AI has been widely exploited by criminal organisations and traffickers to sustain their networks and operate with greater sophistication, it has also offered law enforcement agencies new tools to counter human trafficking.
Hence, law enforcement agencies have increasingly adopted AI technologies in their quest to counter human trafficking. This is a progressive step towards enhancing their investigations and disrupting the modus operandi of trafficking operations.
For example, AI can analyse message histories shared in online chatrooms, whether between traffickers and buyers or between traffickers and victims, and examine video footage and photos for signs of trafficking; these tasks have proven labour-intensive and cumbersome when carried out the traditional way.
With the advancement of technologies, AI can perform these tasks and reduce the strain on labour while minimising the psychological burden on law enforcers who are under intense pressure to rescue victims.
However, the effectiveness of using AI for countering human trafficking remains uncertain and raises ethical and legal concerns, risking potential harms to victims of trafficking.
These harms include, but are not limited to, potential violations of data privacy and AI biases that produce discriminatory outcomes.
As of now, the Malaysian Parliament has yet to pass a bill to address human trafficking cases involving explicit images created through deepfake technology, or a bill that comprehensively addresses AI-enabled human trafficking cases.
Given this lacuna in the law relating to the use of AI, it is imperative for the Malaysian government to determine culpability, the liable entity, and the methods of prosecution in cases involving AI.
Who can we blame if traffickers use chatbots, deepfake images, videos or other AI-related technologies to deceive, coerce or abuse a person’s vulnerability?
Can we then call this an “offender-less” crime, or do we penalise the programmer for their inherent role, which would amount to a breach of natural justice?
At present, laws are enforceable by and against legal persons only. Within this context, human beings, states, businesses, professional bodies, companies and corporations are legal persons and can be brought to court.
Can we now consider machines themselves as legal persons? But how can we establish the intention of a robot? Could a robot claim defences currently available to people, such as diminished responsibility, provocation, self-defence, necessity, mental disorder or intoxication, should it begin to malfunction or make flawed decisions?
At the moment, robots are not recognised as legal persons, so they cannot currently be held liable or culpable for any wrongdoing or harm caused to anyone.
In conclusion, AI has become a double-edged sword in the context of human trafficking. While it offers traffickers new tools to expand their operations, it also empowers law enforcement with the ability to detect and dismantle trafficking networks more efficiently.
AI and algorithms also have the potential to enhance various aspects of criminal justice decision-making. However, lawmakers need to update and amend the current anti-human trafficking legal frameworks to counter human trafficking effectively.
New laws and regulations should focus on utilising AI while safeguarding against privacy infringements and the biased nature of AI.
They should also address “offender-less” crimes in situations where the traffickers are hard to identify. The law as it stands lacks clarity.
Therefore, it is imperative for states not only to embrace the use of AI in countering human trafficking but also to address culpability when AI is used in trafficking operations. – Dec 3, 2024
The author is a criminologist and Deputy Dean (Higher Degree), Faculty of Law, Universiti Malaya, Kuala Lumpur.
The views expressed are solely of the author and do not necessarily reflect those of Focus Malaysia.
Main image: Unsplash/Clint Patterson