Axon's AI Ethics Board

Artificial Intelligence (AI) technologies hold great promise for enhancing the effectiveness of public safety. As a leading technology company for law enforcement, Axon is proud to lead the way in developing this technology, but we believe we have an obligation to do so responsibly, in a way that promotes transparency, with built-in mechanisms for accountability. That is why we have assembled an Axon AI and Policing Technology Ethics Board.

The mission of this independent board is to provide expert guidance to Axon on the development of its AI products and services, paying particular attention to their impact on communities.

 

OVERVIEW

 

We commit to being transparent and responsible in how we use AI and in how we design technologies for police. To guide us, we have formed the Axon AI and Policing Technology Ethics Board. This external advisory body is made up of experts from varying fields including AI, computer science, privacy, law enforcement, civil liberties and public policy. 

The charter of the board is to provide us with guidance about the responsible development of police technologies and of AI features in our products and services, which includes considering when to use and not use AI. The role of the board is to give us candid advice. We recognize that by providing us with advice, board members are not endorsing our products or services.

The set of operating principles presented below outlines how our team will work with the board, highlighting our commitment to transparency and our commitment to providing the necessary information to the board so they can provide us with constructive input. Note that these principles are not intended to be exhaustive of all of our responsibilities and principles related to AI or police technologies (which will be discussed in future publications), but pertain specifically to the role of the board. 

These principles demonstrate our commitment to designing technologies in a responsible and ethical fashion. We recognize that as we learn and grow and technologies change, these operating principles may evolve over time, and in such cases, we will revise the text of the principles publicly and provide an explanation for such changes. 

 

OPERATING PRINCIPLES

  • When considering a new AI application or police technology for which there may be substantial ethical risks, we will ensure that the board has an opportunity to discuss its pros and cons, and how it can be deployed most ethically. We will discuss new products with the board before launching a product that raises ethical concerns so that they can provide us with guidance on new product development.

  • We will keep the board informed about the tools we implement to provide oversight and transparency into how key AI and related technologies are being used, and how those tools are operating. We will build tools and systems to enable oversight of how these technologies are used in the field.

  • We will provide meaningful information to the board about the logic involved in constructing our algorithms. We will clearly describe our thinking behind our models, what they are intended to do, ways in which they might be misused, and our efforts to prevent such misuse. 

  • We will provide a description to the board of the data on which a model was or is continuously trained. We will demonstrate that we have considered the potential biases in the data on which an algorithm was or is continuously trained and the assumptions used in the model. We will explain the steps taken to mitigate any negative consequences associated with bias or inaccuracy in our trained models. 

  • We will provide the board with a list of all inputs used by an algorithm at inference time. For each AI algorithm running on our devices or services, we will provide a list of its input parameters along with a description of each parameter.

  • We will describe to the board the measures we have taken to ensure high levels of data security and privacy. Our public safety customers and their communities need to be confident that their data is appropriately protected to meet security and privacy requirements. We will discuss these measures with the board.

ABIDING BY THE OPERATING PRINCIPLES

To hold ourselves accountable to these operating principles, the following two avenues are available to anyone within the company to raise and address their concerns.

  • Contact the AI and Policing Technology Ethics Board Lead. The board lead is outside of the AI Team chain-of-command and will work to address concerns with Axon leadership. The current lead is Mike Wagers: mwagers@axon.com

  • Contact the external AI and Policing Technology Ethics Board Ombudsperson. Each year, the board will identify one member to act as an ombudsperson to hear any concerns from the AI team. That person will work with other members of the board and with Axon leadership to address concerns. The current board ombudsperson is Tracy Ann Kosa: kosat@seattleu.edu

 

AI Research

Our AI Research team is working to cut the time spent on paperwork by improving the efficiency and accuracy of report-writing and information analysis in law enforcement.

LEARN MORE ABOUT AXON'S AI RESEARCH

 

Publications

First Report: Facial Recognition

Second Report: Automated License Plate Readers

Product Evaluation Framework

 

AI Ethics Board Members

The board is made up of individuals from varying fields in order to bring a multitude of perspectives to the table. For a full list of members, please see below:

 

BARRY FRIEDMAN

Barry Friedman serves as the Director of the Policing Project at New York University School of Law, where he is the Jacob D. Fuchsberg Professor of Law and Affiliated Professor of Politics. The Policing Project is dedicated to strengthening policing through ordinary democratic processes; it drafts best practices and policies for policing agencies, including on issues of technology and surveillance, assists with transparency, conducts cost-benefit analysis of policing practices, and leads engagement efforts between policing agencies and communities. Friedman has taught, litigated, and written about constitutional law, the federal courts, policing, and criminal procedure for over thirty years. He serves as the Reporter for the American Law Institute's new Principles of the Law, Policing. Friedman is the author of Unwarranted: Policing Without Permission (Farrar, Straus and Giroux, February 2017), and has written numerous articles in scholarly journals, including on democratic policing and the Fourth Amendment. He appears frequently in the popular media, including the New York Times, Slate, Huffington Post, Politico and the New Republic. He also is the author of the critically acclaimed The Will of the People: How Public Opinion Has Influenced the Supreme Court and Shaped the Meaning of the Constitution (2009). Friedman graduated with honors from the University of Chicago and received his law degree magna cum laude from Georgetown University Law Center. He clerked for Judge Phyllis A. Kravitch of the US Court of Appeals for the 11th Circuit.

 

CHRIS HARRIS

Chris Harris is a native Texan and passionate advocate working to transform the criminal legal system. He currently serves as a Campaign Coordinator for Texas Appleseed, a public interest nonprofit that promotes social and economic justice for all Texans, and as a Public Safety Commissioner representing District 1 in the City of Austin. Chris has led and contributed to winning campaigns aimed at limiting: racial disparities in the criminal justice system, state violence, unnecessary arrests, ICE detentions, pre-trial incarceration, subpar indigent defense and further investment in police and prisons. Chris is an alumnus of Austin College where he studied political science and international relations. Previously, Chris worked in the software industry with a focus on public sector solutions and held various roles, including analyst, consultant, and product manager.

 

CHRISTY LOPEZ

Christy E. Lopez is a Distinguished Visitor from Practice at Georgetown University Law Center, where she teaches courses on police reform and criminal justice. She also co-leads Georgetown’s Program on Innovative Policing, which in 2017 launched the Police for Tomorrow Fellowship. From 2010-2017, Professor Lopez served as a Deputy Chief in the Special Litigation Section of the Civil Rights Division at the U.S. Department of Justice. She led the Section’s Police Practice Group, which conducted pattern-or-practice investigations of police departments and other law enforcement agencies; litigated related cases; and negotiated and implemented police reform settlement agreements. She also helped coordinate the Department’s broader efforts to ensure constitutional policing. While with the U.S. Department of Justice, Ms. Lopez led civil rights investigations of many law enforcement agencies, including the Ferguson Police Department. She was a primary drafter of the Ferguson Report and negotiator of the Ferguson consent decree. She also led investigations of the Chicago Police Department, the New Orleans Police Department, the Los Angeles Sheriff’s Department, the Newark (New Jersey) Police Department, and the Missoula, Montana police department, campus police, and prosecutor’s office. Professor Lopez received her J.D. from Yale Law School and her undergraduate degree from the University of California at Riverside.

 

DANIELLE CITRON

Danielle Citron is a Professor of Law at the Boston University School of Law where she teaches and writes about information privacy, free expression, and civil rights. She was named a MacArthur Fellow in 2019. Professor Citron is the Vice President of the Cyber Civil Rights Initiative and sits on the boards of the Electronic Privacy Information Center and the Future of Privacy Forum. Professor Citron advises federal and state legislators, law enforcement, and international lawmakers on privacy issues. She has presented her research more than 300 times, including at federal agencies, meetings of the National Association of Attorneys General, the National Holocaust Museum, the Anti-Defamation League, Wikimedia Foundation, universities, companies, and think tanks.

 

JEREMY GILLULA

Dr. Gillula began his career in academia doing research in the fields of robotics and machine learning. As a participant in the DARPA Desert Grand Challenge, he did work on computer vision systems and sensor fusion systems for unmanned autonomous ground vehicles. During his doctorate, his research focused on how to design guaranteed safe control algorithms for hybrid systems, with a focus on unmanned aerial vehicles. His thesis focused on the design of guaranteed-safe machine learning systems, fusing control theoretic and machine learning techniques. Since finishing his Ph.D., Dr. Gillula has turned his attention to the intersection of technology and civil liberties issues, including mobile devices, big data, net neutrality, and algorithmic fairness and transparency. He provides technical expertise to lawyers and activists who work on digital civil liberties, and has given a multitude of talks to conferences, invited groups, and policymakers. Dr. Gillula holds a B.S. in Computer Science with a minor in Control and Dynamical Systems from Caltech, and an M.S. and Ph.D. in Computer Science from Stanford University.

 

JIM BUEERMANN

Jim Bueermann is the former President of the DC-based National Police Foundation – America’s oldest non-partisan, non-membership police research organization. He worked for the Redlands (CA) Police Department for 33 years where he served as the Chief of Police and Director of Housing, Recreation and Senior Services from 1998 until his retirement in 2011.

 

KATHLEEN M. O’TOOLE

Kathleen O’Toole is a career police officer and lawyer who has earned an international reputation for her principled leadership and innovation. She was the former Chief of Police in Seattle, Washington where she led the organization through a major reform project. She previously served as Boston Police Commissioner, Massachusetts Secretary of Public Safety, and Chief Inspector of the Garda Inspectorate in Ireland. Kathleen earned a BA from Boston College, a JD from New England School of Law and is a member of the Massachusetts Bar. In 2018, she completed her PhD at the Business School of Trinity College, Dublin.

 

MECOLE JORDAN

Mecole Jordan began her non-profit career in 2006, volunteering for a campaign to reduce the criminalization of individuals with drug addiction. She quickly became immersed in the work, supervising reentry and violence prevention programs that provided comprehensive wraparound support for the formerly incarcerated and at-risk youth. She later began local and statewide organizing, conducting organizing and racial equity training for community organizations and parents engaged in education reform. She has gained significant experience working with and building broad-based coalitions for local and statewide reform efforts, particularly around police reform and racial equity.

 

MILES BRUNDAGE

Miles Brundage is a Research Scientist at OpenAI and Research Associate at the University of Oxford's Future of Humanity Institute. He researches the governance challenges associated with artificial intelligence (AI), especially those related to AI's impact on security. Recently, Brundage was the lead author of a report, "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," which received widespread news coverage and has been cited by both the U.S. Congress and the U.K. Parliament. Brundage has published widely in conference proceedings, journals, books, and magazines on a range of AI-related topics. His publications include a review of the field of deep reinforcement learning (a research area in AI), a proposed framework for responsibility in AI development, and an analysis of challenges associated with imbuing AI systems with human values. Prior to beginning his work on AI and governance in 2012, Brundage served for two years as Special Assistant to the Director of the Advanced Research Projects Agency - Energy (ARPA-E) and Special Assistant to the Under Secretary of Energy at the U.S. Department of Energy. In addition to his research position at Oxford, Brundage is a Ph.D. candidate in Human and Social Dimensions of Science and Technology at Arizona State University, and serves on the editorial staff of the Journal of Responsible Innovation.

 

TRACY ANN KOSA

Tracy Ann Kosa is an adjunct professor at Seattle University, a privacy and technology specialist, and a former Fellow at Stanford University. Her current work proposes privacy patterns for enforcement and regulatory activity. Dr. Kosa has held a number of leadership roles in privacy at Microsoft and in government. She holds a doctorate in computer science (privacy), Master's degrees in ethics and public policy, and undergraduate work in economics and political science.

 

WAEL ABD-ALMAGEED

Dr. Abd-Almageed is a Research Associate Professor in the Department of Electrical and Computer Engineering, and a Research Team Leader and Supervising Computer Scientist with the Information Sciences Institute, both units of the USC Viterbi School of Engineering. His research interests include representation learning, debiasing and fair representations, multimedia forensics and visual misinformation (including deepfake and image manipulation detection), and biometrics. Prior to joining ISI, Dr. Abd-Almageed was a research scientist with the University of Maryland at College Park, where he led several research efforts for various NSF, DARPA and IARPA programs. He obtained his Ph.D. with Distinction from the University of New Mexico in 2003, where he was also awarded the Outstanding Graduate Student award. He has two patents and over 70 publications in top computer vision and high performance computing conferences and journals. Dr. Abd-Almageed is the recipient of the 2019 USC Information Sciences Institute Achievement Award.