Governance, oversight and transparency of Artificial Intelligence within policing
Recent years have seen the expanded use of Artificial Intelligence (AI) within the criminal justice system, as data-driven technology and algorithms play an increasing role in shaping law enforcement and criminal justice decisions across the globe.
Idea in Brief
- While police have an obligation to consider and implement new technologies which may advance their functionality and improve operational effectiveness, there must also be commitments to oversight and governance.
- Without sufficient safeguards, oversight and evaluation, the adoption and use of AI technology and capabilities by police may have serious implications for individual and societal rights to privacy and fairness, and for public perceptions of police legitimacy.
Considerations for Policing
When proceeding with the procurement and implementation of emerging AI technology, police may wish to consider the following:
- Developing their own sets of principles to govern the responsible use of technology: Having police-led and developed standards in place can support proper consideration and assessment of the implications of technology within police agencies. ANZPAA is currently undertaking work to develop cross-jurisdictional AI principles for Australian and New Zealand police.
- Increasing transparency around the deployment of AI technology: Police can help to alleviate perceptions that AI technologies may lead to unfair or discriminatory outcomes through prioritising transparency around their processes for selecting and deploying AI.
- Developing technology tools ‘in house’ or in partnership with tech vendors: Participating in the development of AI technology can ensure capabilities are aligned with policing standards and obligations.
Introduction
Artificial Intelligence (AI) technologies, such as facial recognition technology (FRT) and predictive algorithms, are increasingly playing a pivotal role in the criminal justice system.
When deployed, these technologies have the potential to improve the efficiency and functionality of policing and to find solutions to complex problems. Police can utilise these technologies to support the detection and prevention of crime and to generate new insights for policing. For example, AI technologies have provided significant assistance in detecting and prosecuting the trade in child abuse images and financial crime matters. However, as AI’s deployment within the justice system has become more widespread, a growing body of research has emerged challenging its use. AI’s critics have noted its capacity to perpetuate historical biases and the potential for serious implications for human rights and civil liberties.
This brief considers the governance and oversight of police use of new and emerging technology. It examines lessons learned from international and domestic examples and the potential repercussions if proper oversight and safeguards are not considered. The brief also identifies several opportunities for police to develop their capability and avoid such repercussions.
Case Study
The Los Angeles Police Department (LAPD) recently terminated their use of a predictive policing program called PredPol, which aimed to identify when and where future crimes would occur based on past data. PredPol’s algorithm was publicly criticised for reinforcing harmful patterns and creating a ‘feedback loop’. Communities with a higher police presence will naturally have higher arrest rates, leading to datasets that appear to reflect higher crime rates, but which really reflect greater police attention. The use of these datasets may fuel the over-policing of Black and minority communities.
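This feedback loop can be illustrated with a toy simulation. The sketch below is purely hypothetical (it is not PredPol’s actual model; the numbers and the simplified ‘hotspot’ patrol policy are invented for illustration): two areas have identical underlying crime, but the area that happens to start with more recorded incidents is flagged as the hotspot, receives the larger share of patrols, and therefore generates most of the new records.

```python
# Toy simulation of a predictive policing 'feedback loop'.
# Purely illustrative: the areas, rates and patrol policy are hypothetical,
# and this is not PredPol's actual algorithm.

TRUE_CRIME = 100      # identical underlying crime in both areas
DETECT_RATE = 0.05    # incidents recorded per patrol unit deployed

recorded = {"A": 55, "B": 45}   # area A merely *starts* with more records

for step in range(8):
    # the 'prediction': send most patrols to the area with the most records
    hotspot = max(recorded, key=recorded.get)
    patrols = {area: (7 if area == hotspot else 3) for area in recorded}
    # patrols convert a fixed share of (identical) true crime into records
    for area in recorded:
        recorded[area] += TRUE_CRIME * DETECT_RATE * patrols[area]
    share = recorded["A"] / sum(recorded.values())
    print(f"step {step}: area A holds {share:.0%} of recorded crime")
```

Although true crime never differs between the two areas, area A’s share of recorded crime climbs from 55 per cent towards 70 per cent and never corrects itself: the data confirms the prediction that produced it.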
Algorithm bias
Biased algorithms have made headlines across many industries in recent years, including policing. Police agencies around the world increasingly use predictive algorithms based on historical crime data to assess offender risk levels and the probability of reoffending. A growing number of academics and researchers caution that these technologies may exacerbate discrimination, highlighting concerns about the dangers of human bias becoming embedded in the data that feeds AI algorithms and decision making.
Other studies have found that AI algorithms used to predict recidivism may demonstrate racial bias. Additionally, research on FRT has found that the technology is consistently less accurate on subjects of certain ethnicities and genders, in some cases leading to wrongful convictions. This is primarily due to a lack of diversity in the datasets used to train the technologies. Some researchers have gone so far as to claim that FRT is inherently biased, while others argue that larger datasets and better training methodologies should lead to greater accuracy.
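The dataset-diversity point can be made concrete with a small synthetic sketch. The example below is hypothetical and does not model any real FRT system: a single decision threshold (standing in for a trained model) is fit on data in which one group is heavily over-represented, and the resulting accuracy gap falls on the under-represented group.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, offset):
    """Synthetic 'match scores': half genuine matches, half non-matches."""
    pos = rng.normal(2.0 + offset, 1.0, n // 2)   # genuine matches
    neg = rng.normal(0.0 + offset, 1.0, n // 2)   # non-matches
    x = np.concatenate([pos, neg])
    y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])
    return x, y

# Group A dominates the training set; group B's scores sit slightly lower.
xa, ya = make_group(9000, offset=0.0)
xb, yb = make_group(1000, offset=-1.0)
x, y = np.concatenate([xa, xb]), np.concatenate([ya, yb])

# Fit one global threshold that minimises error on the imbalanced data.
candidates = np.linspace(x.min(), x.max(), 500)
errors = [np.mean((x > t) != y) for t in candidates]
best = candidates[int(np.argmin(errors))]

for name, gx, gy in [("A", xa, ya), ("B", xb, yb)]:
    print(f"group {name}: accuracy {np.mean((gx > best) == gy):.1%}")
```

On this synthetic data, the fitted threshold sits close to group A’s optimum, so group A’s accuracy comes out markedly higher than group B’s (roughly 84 versus 75 per cent). Rebalancing the training data, or fitting to group B alone, would close the gap; the same mechanism, at far greater scale and dimensionality, is what the FRT studies above describe.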
Transparency and accountability concerns
There is often a shortage of information about how crime prediction and data-driven technologies are used within policing. Some critics have argued that the scepticism surrounding the use of this technology has less to do with the technology itself than with a lack of transparency from the agencies administering it. In Los Angeles, details about the LAPD’s use of predictive policing programs only emerged after years of campaigning from civil activists who demanded transparency around program operations.
Similarly, developments in technology used by police agencies have not always been accompanied by adequate safeguards, particularly if the technology is acquired commercially. While there may be great enthusiasm for the potential use of emerging technology by police and in the justice system, there does not appear to be a corresponding commitment to thorough evaluation and oversight processes.
Technology acquired from big data companies may not be subject to the review and oversight that police agencies require as part of their governance and accountability structures. For example, an audit of the LAPD’s Operation LASER found that the program used criteria that were inconsistent and imprecise.
In the UK, a House of Lords review into the use of new technologies in the justice system outlined serious concerns about the lack of minimum scientific or ethical standards in place for AI tools before their adoption and use in the criminal justice sphere.
Considerations for policing
Developing principles to govern the responsible use of technology
In 2021, the European Union (EU) developed a draft set of rules for the development, placement on the market and use of AI systems. Given the EU’s significant global influence, if these rules are adopted they are likely to have substantial effects on the development of new technologies and commercial strategies, even outside the EU.
To guide the use of emerging technology within their jurisdictions, police may wish to create their own principles. ANZPAA will be developing a set of cross-jurisdictional principles to guide the adoption and use of AI within policing in early 2023. Having a robust set of police-led and developed standards in place can support proper consideration and assessment of the implications of technology prior to implementation.
To increase transparency and oversight, police may also wish to work with experts in this space while developing principles and establish a mechanism to seek independent and specialist advice relating to their technology goals.
Case Study: New Zealand Police
New Zealand Police recently developed an emergent technology program in a public commitment to using technology safely and responsibly. An expert, independent panel was created to provide advice and oversight from an ethical and policy perspective. The panel acts as a reference group for proposed applications of new and emerging technology in policing. New Zealand Police have committed to making the panel’s advice public wherever possible.
Supported by advice from the expert panel, a policy on trialling or adopting new technology was developed. This policy governs approvals for all new technology-based capabilities, or new uses of existing technology.
Noting key community concerns around FRT as its use becomes more widespread, New Zealand Police also commissioned an external review of the use of FRT in policing. The review provided a detailed assessment of the opportunities and risks surrounding the use of FRT in New Zealand communities.
The review made 10 recommendations, all of which were accepted by New Zealand Police.
Increasing transparency around deployment of technology
With transparency and accountability around police use of technology being key community concerns, police may wish to make transparency around the deployment of AI a priority going forward. Police can pre-emptively provide reassurance to the public and their communities through openness about a technology’s purpose, benefits, data collection and storage methods, and the safeguards in place. This may help to demonstrate responsible use of technology and inspire public trust and confidence.
This aligns with a procedural justice approach and may help to support public perceptions that police are exercising their authority lawfully in the deployment of AI technology. Should police aim to be clear and open about their processes for selecting and deploying AI technologies, this may help to address perceptions that AI technologies lead to unfair or discriminatory outcomes (for example, where deployment is non-consensual or lacks transparency).
Developing tools in-house or in partnership with tech vendors
At ANZPAA’s recent Policing Forum on Artificial Intelligence, it was suggested that police may need to develop AI tools ‘in house’ wherever possible, in order to avoid some of the risks associated with ‘off the shelf’ solutions.
For example, the intellectual property protections attached to certain commercial products may prevent users from obtaining information on the technology being used and the data on which it depends. This makes it difficult to assess the data used to train the algorithm. Another risk relates to the data that underpins externally sourced technology, which may have been developed overseas within a different policing context. Applying such technology in an Australian and New Zealand context may mean that local communities are not reflected appropriately in the underlying data, and that the technology may not operate fairly.
However, police may not always have the resources to develop technology ‘in house’. Alternatively, police may seek to develop their internal capability to work with technology vendors to create police-appropriate technology solutions and capabilities. In doing so, police can involve themselves directly in the development process to ensure that a technology’s capabilities and functions are aligned with policing standards and obligations.
Collaborating on the development of technology or building it ‘in house’ may avoid some of the complexities surrounding commercial engagement. If police are involved in the development processes, there is likely to be greater clarity and understanding as to how data was gathered and prepared, providing greater assurance that the model used is fair and of appropriate complexity.
Case Study: Clearview AI
Facial recognition technology from the controversial company Clearview AI has been used by hundreds of police agencies around the world to help solve shoplifting, identity theft, credit card fraud, murder and child exploitation cases. Since 2020, the company has faced multiple lawsuits and has been accused of violating numerous privacy and data protection laws around the world, including collecting personal information without a lawful reason and failing to have mechanisms in place to stop data being held indefinitely.
Many police staff and agencies who used Clearview later admitted to having only a limited knowledge of how the program worked.