How AI Ethics Can Help Reprioritize Humanity
7 Questions for Tracy Kosa, the Ombudsman of Axon’s AI Ethics Board
Sep 30, 2020
Tracy Kosa is a privacy engineer at Google who recently finished a post-doc at Stanford on predicting privacy regulation. Previously, Dr. Kosa ran the global privacy compliance program at Microsoft and served on the City of Seattle's Privacy Advisory Committee. We are excited to bring Dr. Kosa to Axon to share how our AI ethics board works and to discuss Axon's products, technology, and ultimately the impact we have on society. With that impact comes great ethical responsibility, and Dr. Kosa will walk us through the ethical considerations integral to incorporating into our daily work at Axon.
- What excites you about Axon, and why did you choose to be on their AI Ethics board?
The fact that they wanted to do an ethics board before it was popular, the diverse criteria used to select its representatives, and the ability to talk openly about the work being done. I knew this was a company that cares about ethics; it wasn't merely a good press story.
- How has AI changed in the past five years? What do you predict will happen in the next 5 to 10 years?
The tech's getting really good at image recognition, speech recognition, and translation. I'm guessing we'll continue on that trajectory until those capabilities reach high accuracy almost in real time.
- What is the biggest challenge in the AI space at the moment?
Aside from untangling the ethical questions, it's prediction. That's what everybody's really interested in: can the computer tell us what's going to happen in a given situation?
- What effect has AI had on the policing industry and on society?
Both positive and negative, because it has helped shine a bright light on the systemic issues with law enforcement practices and policies.
- What led to Axon’s decision to ban facial recognition in body cameras?
The training data isn’t there. Imagine a situation where you’ve got a massively disproportionate group of people in the prison system, or those who have mug shots. That’s us in the United States. Now imagine using that same data for training models to pick out people. Now imagine using that for ‘predictive’ purposes. That’s where we’ll end up if we don’t stop now and correct the ‘garbage in / garbage out’ problem.
- What are the most critical changes that we must make to face the future effectively?
Nobody's going to like this, but we need to reprioritize the human being. We've built a complex worldwide infrastructure with automated models stacked on top of automated models. Honestly, nobody knows what can or could happen as we continue to tack things onto the top and sides of it.
Additionally, product and software teams absolutely must be diverse — in background, thinking, gender, and age. Putting an actual representation of today’s human population into the development of products changes everything each step of the way.
- What are the best resources for people who want to dive deeper?
Start with the civil society organizations that I mentioned in the recent Axon webinar for employees, but mostly I encourage you to contribute to the great work being done in this space (as opposed to dwelling on the litany of problems and tragedies).
Also, I developed a free short course with some excellent professors from Seattle University that's available online here.
Other non-profits / Civil Society Organizations / Conferences I recommend you look into: