Axon is leading the industry in Responsible AI
May 10, 2018

It's not often that companies the size of Axon are compared favorably to Google. But it happened last week in a Forbes article:
"This week, Axon ... demonstrated the kind of transparency that Google should aspire towards when it announced an AI ethics board to 'help guide the development of Axon's AI-powered devices and services'."
Let's recall that Google started with an algorithm called PageRank. Since then, it has been a powerhouse for machine learning (ML), and arguably no other company has commercialized ML and artificial intelligence (AI) more than Google. They have also been paying attention to the ethical side of AI for a relatively long time. DeepMind's ethics board was formed in 2014. In his 2017 Founders' Letter, Sergey Brin (the Google co-founder with the beard - in case you mix them up like I do) writes about AI: “such powerful tools also bring with them new questions and responsibilities. How will they affect employment across different sectors? How can we understand what they are doing under the hood? What about measures of fairness? How might they manipulate people? Are they safe?”
So being placed above Google in the AI ethics space is no small feat.
For most people, the conversation about the ethics of AI starts with the thought of a sentient AI that could one day take over and rule us all. Before a congressional hearing on AI, Congressman Hurd asked people on the street about AI, and, no surprise, most people's idea of AI is formed by movies. Naturally, the doomsday scenario is the #1 notion in their minds.
This AI-apocalypse frenzy has played out many times before, whenever a new technology has made a breakthrough: the printing press, the Internet, the mobile phone, Snapchat... the list goes on. Perhaps when the wheel was invented, concerned parents were skeptical too. At the same time, if data is the new oil, as they say, we clearly made a mess with the previous oil we discovered.
Which scenario is going to unfold, then? Is it going to be all rosy or all gloomy? What can we do about it? Freeze and take no action? Or take a swing and shape how things turn out?
We don't have to accept what the movies or any pundit or expert predicts. We can help forge the world around us today and create the future. There are talkers - people who freeze at the sight of any potential change, whose only response is inaction and talk about how things are not going to work out. And then there are doers - people who work on making the world a better place every day by facing challenges, taking risks, and venturing into uncharted territory. At Axon we are doers, and we believe that the world will be a better place with AI, if it is developed responsibly.
As a technologist, I believe that technology should—as it has in many cases so far—allow us to be more human. I am optimistic that AI, done the right way, will allow us to be more human.
The Michael Jordan of ML wrote in a recent blog post that what is lumped together as “AI” these days actually spans three distinct categories:
- Human-imitative Artificial Intelligence (HIAI): Making machines that understand the world and have cognitive capabilities similar to ours. This is what most people know as AI.
- Intelligence Augmentation (IA): AI that augments human intelligence and improves our productivity. The same way that eyeglasses make us see better or cars make us move faster, this kind of AI lets us do more by extending our cognitive abilities. Examples include search engines and smart speakers.
- Intelligent Infrastructure (II): Using AI to improve the infrastructure of our information-based society, typically involving many connected devices. For example, Uber uses AI to create an intelligent infrastructure that moves millions of people around the world every week, and dating websites use AI to create an intelligent infrastructure for matching couples.
At Axon, the AI Research Team has focused on #2 and #3, not on #1. We see AI as an assistant to our customers in public safety. It will help and support them in making decisions. It will give officers a very effective set of tools to do more (efficiency) and to have better access to crucial information and real-time safety measures (vigilance). In doing this we ensure that our AI technologies complement and augment human intelligence while leaving all decision making to human judgment. In short, Axon AI technologies will empower officers to be more human.
We do our AI development with the ethical aspects front and center at every step of the process - from data collection to model training, inference, evaluation, and interpretation of the results. By avoiding AI type #1, we ensure that there is always a human in the loop to make the final judgment whenever there might be a risk of impinging upon privacy or civil rights. Axon's AI Ethics Board keeps us in check.
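To make the human-in-the-loop idea concrete, here is a minimal, purely illustrative sketch - not Axon's actual code, and every name in it (Suggestion, gate_with_human_review, the console prompt) is hypothetical. It shows the shape of the design we are describing: the model only proposes, and nothing takes effect until a person explicitly approves it.

```python
# Hypothetical sketch of a human-in-the-loop gate: the model suggests,
# a human decides. Names and structure are illustrative only.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Suggestion:
    item_id: str       # e.g., a video frame or document the model flagged
    label: str         # what the model thinks it found
    confidence: float  # model confidence in [0, 1]


def gate_with_human_review(
    suggestions: List[Suggestion],
    human_approves: Callable[[Suggestion], bool],
) -> List[Suggestion]:
    """Return only the suggestions a human reviewer explicitly approved.

    Model confidence never triggers an action by itself; every suggestion
    passes through human judgment before anything happens downstream.
    """
    return [s for s in suggestions if human_approves(s)]


if __name__ == "__main__":
    # A reviewer callback could be a full review UI; here it is a stand-in
    # that simply asks on the console.
    proposals = [
        Suggestion("frame_0042", "face", 0.91),
        Suggestion("frame_0043", "license_plate", 0.67),
    ]
    approved = gate_with_human_review(
        proposals,
        human_approves=lambda s: input(
            f"Apply '{s.label}' on {s.item_id}? [y/N] "
        ).strip().lower() == "y",
    )
    print(f"{len(approved)} of {len(proposals)} suggestions approved by a human.")
```

The point of the sketch is the shape of the flow, not the specifics: the model's output is advisory, and the approval step is the only path to action.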
We are committed to ensuring that the technology we develop makes the world a better and safer place. Our first-of-its-kind AI Ethics Board is a profound statement in that direction.
We are nowhere near done. There will be critics who point out shortcomings, fairly or unfairly, but this is a journey and it matters that we have started it. As Theodore Roosevelt once said:
"It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena... who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly."