Machines, AI, and the future of intelligence-led policing

Predictive policing 

There is potential for data-driven policing to predict and prevent crimes. Still, racial equality campaigner Mecole Jordan-McBride, Senior Program Manager at the NYU School of Law Policing Project and a member of Axon's US ethics board, warns that its use carries implications for civil liberties and for BAME communities.

Predictive policing is already used in many places in the US as well as the UK, but there is increasing controversy over its rapid adoption, owing to a lack of transparency and its potential to discriminate against and alienate communities. This prompted the EU to propose new rules for AI in February 2020, saying that clear rules are needed to address ‘high-risk AI systems’ such as those in health, policing or transport, which should be ‘transparent, traceable and guarantee human oversight’.

Predictive policing was introduced around 2011, when Time magazine hailed it as one of the top 50 inventions of the year. It was a time when we were just starting to grasp the potential of the big data that came with advancing digitisation, and there were high hopes that this technology would improve our lives.

Over the last decade, numerous police forces in the US and the UK have started trialling and using historic data to predict crimes.

Predictive policing systems work by using machine-learning algorithms, which can analyse millions of data points. These can include details of past crimes and the identities of offenders, which can be combined with local intelligence, social media history and mobile phone data. The AI systems are trained to spot correlations and patterns within the data to, for example, learn the modus operandi of offenders, predict where future crimes are likely to occur and identify who may become a victim or perpetrator of violence.
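As a rough illustration of the location-based variant described above, the sketch below trains a simple classifier on synthetic incident data to rank map grid cells by predicted risk. It is a toy example only: real systems are proprietary, and every feature, threshold and data value here is hypothetical.

```python
# Toy, hypothetical sketch of location-based "hotspot" prediction.
# All data is synthetic; real predictive-policing systems are proprietary
# and use far richer (and more contentious) inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000  # one row per grid cell per weekly time window

# Hypothetical features drawn from past records
recent_incidents = rng.poisson(2.0, n)      # recorded incidents in the cell last week
neighbour_incidents = rng.poisson(3.0, n)   # recorded incidents in adjacent cells
night_window = rng.integers(0, 2, n)        # 1 = night-time window

# Synthetic label: was at least one incident recorded in the next window?
risk = 0.4 * recent_incidents + 0.2 * neighbour_incidents + 0.8 * night_window
future_incident = (risk + rng.normal(0.0, 1.0, n) > 2.5).astype(int)

X = np.column_stack([recent_incidents, neighbour_incidents, night_window])
X_train, X_test, y_train, y_test = train_test_split(
    X, future_incident, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank unseen cells by predicted risk; patrols would be directed to the top of the list
scores = model.predict_proba(X_test)[:, 1]
top_cells = np.argsort(scores)[::-1][:10]
print("Highest-risk cells (test-set row indices):", top_cells)
```

The sketch also hints at the feedback loop discussed later in this article: the model only sees what is in the recorded history, so areas that are already policed more heavily produce more records, are ranked as higher risk and so attract yet more attention.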

From the technology’s inception there has been great optimism that it would enable the police to move from traditional to intelligence-led policing, or from a reactive to a proactive approach, preventing crimes rather than having to solve them. This would benefit not only victims and society as a whole but also would-be perpetrators, who might avoid entering the criminal justice system and the detrimental effects this often has on their life chances.

Mounting concerns

However, there is mounting concern that this use of data negatively impacts civil liberties and communities. Mecole Jordan-McBride, Senior Program Manager at the NYU School of Law Policing Project and a member of Axon's US ethics board, calls for a more cautious approach to predictive policing. “We live in a society which is becoming increasingly reliant on AI,” she says. “But this doesn't give a green light for applications such as predictive policing without taking a deep dive into how it works and how it impacts communities.”

Mecole also has concerns about how the technology affects BAME communities. Some systems focus on location, trying to predict where and when future crimes are likely to happen, while others try to prevent crimes by pinpointing the individuals who are thought most likely to become a victim or perpetrator of crime. To compile such a list, the system looks at residents with a history of violence and criminal activity, as well as the people they are affiliated with through, for example, gang activity or social media history (the sketch below illustrates how such affiliations can pull in people with no record of their own).
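To make that associational capture concrete, here is a minimal, entirely hypothetical sketch: a small affiliation graph in which anyone within one hop of a person with a record is added to a list, regardless of their own history. The names, relationships and flagging rule are invented for illustration.

```python
# Hypothetical sketch of how a person-based list can sweep in social connections.
import networkx as nx

# Affiliation graph: in a real system, edges might come from co-arrests,
# gang intelligence or social media links.
G = nx.Graph()
G.add_edges_from([
    ("person_A", "person_B"),   # relatives
    ("person_B", "person_C"),   # social media connection
    ("person_C", "person_D"),
])

has_record = {"person_B"}  # only person_B has any police record

# Everyone adjacent to a flagged person is added to the list,
# regardless of their own history.
flagged = set(has_record)
for person in has_record:
    flagged.update(G.neighbors(person))

print(sorted(flagged))  # ['person_A', 'person_B', 'person_C']
```

In this toy rule, person_A and person_C end up on the list despite having no record themselves, which is exactly the kind of outcome described in the interview below.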

Mecole finds this approach deeply problematic, arguing that the data used replicates existing racial biases in society and risks creating a new kind of profiling. She says, “Some communities of colour are already over-policed, and these algorithms are simply reproducing this, focusing disproportionately on them.”

Mecole observes that this alienates the very people that agencies are trying to protect, who feel not only targeted but occupied. “In some neighbourhoods, there are police on every corner, who look like they are just waiting for something to happen. That sends a negative message. It alienates people and erodes trust, creating a vicious circle.”

But it doesn't only affect people with a criminal history, as these systems can also capture the identities of social connections, including friends and relatives. Mecole says, “It's forcing people to make difficult choices. For example, I have had no contact with law enforcement but, if I wanted to stay connected to a family member who did, I would have to face an invasion of my privacy on a very granular level, such as being put on strategic mapping lists, which puts me in view of law enforcement.”

Civil liberty campaigners argue that this not only violates associational freedoms but also goes against a reasonable expectation of privacy. There are also concerns about the lack of transparency: there is often no easy way to see the list, find out what kind of information is held, rectify errors or remove oneself from it.

So far, the evidence of how effective these systems are is mixed and difficult to assess, particularly as moderate short-term increases in crimes solved could be outweighed by the detrimental long-term impact on some communities.

While Axon recognises the potential of data-driven policing, it is not currently developing such technologies, and it works closely with its ethics board to make sure any solution the company provides is effective as well as ethical.

Engaging communities

Mecole asks, “What message does it send to communities when police forces invest millions in AI rather than building relationships with these communities, or investing in much-needed social services, which would prevent some of the problems that lead to criminal activities? There is an absence of input from the communities, as well as of transparency.”

Does she think predictive policing would be more suited to the UK, where communities tend to be more mixed? “It looks different but, at the core, there is the potential for the same institutional, systemic bias, which is replicated in AI,” she says.

Finally, could a new type of AI be developed to embed fairness into predictive policing systems? “I won't say that AI doesn't have a place in policing,” says Mecole, “but technology won't ever replace relationship-building, sensitivity to social issues and trust. We need to have these difficult conversations. I think Axon is doing just that by working with an ethics board, so kudos to them. We are an independent board, so they don't have to follow our advice, but they have always listened and observed our recommendations, and I think this is helping them to succeed.” And, Mecole says, it's exactly this engagement that may enable the development of less-biased AI in future. “Nothing is impossible.”