Ethical AI
What's just? What's ethical? These aren't theoretical questions for philosophers to debate. These are hard questions every responsible citizen must consider — but few have good answers to.
Every time we hear of racial prejudice or see unconscious bias play out, we're reminded of just how imperfect we are and how inconsistent our answers to these questions can be.
Artificial intelligence, and specifically machine learning, gives us an opportunity to optimize and scale our collective ethical point of view. We can do this because AI is able to automate critical decisions that are today subject to human bias, such as whether you qualify for a loan, what your life insurance premium should be, or even what sentence you should receive for a crime, and we can constrain these algorithms to optimize for ethical considerations.
Developing a new "ethical literacy"
Just as any responsible citizen has an obligation to consider what's just and ethical from a human-to-human standpoint, we now need to form a new "ethical literacy" for developing socially aware algorithms that manifest our ethical points of view from a machine-to-human standpoint.
Computer scientists won't be the modern-day philosophers defining ethics, though. Rather, it will take business leaders, policy makers, and many others working collectively to develop a mental model for what's possible and to constantly calibrate these critical algorithms to balance accuracy and bias.
In this way, I see parallels to Aristotle's philosophy of the Golden Mean, where virtue is a balance between two extremes. A classic example is courage: a balance between recklessness and cowardice. I've always envisioned the Golden Mean as a dashboard of dials that you need to constantly calibrate and keep within appropriate ranges, just as your car's dashboard has gauges showing the optimal engine temperature and oil pressure. Deviating outside these optimal ranges jeopardizes the integrity of the motor.
Similarly, privacy and fairness can be dialed up or down within a given algorithm at the cost of accuracy. For example, a data set might have latent bias in it because of how it was collected or how decisions were previously made. Unless told otherwise, an algorithm leveraging this data will likely amplify this bias in the pursuit of the "best prediction." Collectively we must hold each other accountable for continuously calibrating AI models based on what's ethical and fair. Failing to do so will allow algorithms to drift outside the optimal bounds of fairness and privacy, potentially jeopardizing the integrity of our society as a whole.
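To make the "dial" concrete, here's a minimal sketch with toy data of my own construction (the numbers, cutoffs, and loan scenario are illustrative assumptions, not from any real system). The scores for one group are systematically depressed by how the data was collected, so a single accuracy-optimal threshold reproduces the historical disparity, while per-group thresholds dial fairness up at a measurable cost in accuracy against the biased labels:

```python
import random

random.seed(0)

# Toy applicants: (score, group, historical label). Measurement bias
# pushes group B's scores lower; historical decisions applied one
# cutoff to those biased scores.
applicants = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    score = random.gauss(0.6 if group == "A" else 0.5, 0.1)
    label = 1 if score > 0.55 else 0  # biased historical decision
    applicants.append((score, group, label))

def approval_rate(preds, grp):
    idx = [i for i, (_, g, _) in enumerate(applicants) if g == grp]
    return sum(preds[i] for i in idx) / len(idx)

def accuracy(preds):
    return sum(p == y for p, (_, _, y) in zip(preds, applicants)) / len(applicants)

# Accuracy-only "best prediction": one threshold perfectly matches the
# biased labels, and faithfully reproduces the approval-rate gap.
acc_only = [1 if s > 0.55 else 0 for s, _, _ in applicants]
gap_biased = abs(approval_rate(acc_only, "A") - approval_rate(acc_only, "B"))
acc_biased = accuracy(acc_only)

# Fairness dialed up: per-group thresholds roughly equalize approval
# rates (statistical parity), at a cost in accuracy on those labels.
fair = [1 if s > (0.6 if g == "A" else 0.5) else 0 for s, g, _ in applicants]
gap_fair = abs(approval_rate(fair, "A") - approval_rate(fair, "B"))
acc_fair = accuracy(fair)

print(f"accuracy-only: approval gap={gap_biased:.2f}, accuracy={acc_biased:.2f}")
print(f"parity-tuned:  approval gap={gap_fair:.2f}, accuracy={acc_fair:.2f}")
```

The point isn't the specific thresholds; it's that the trade-off is quantifiable, which is what makes continuous calibration possible at all.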
Machine Learning algorithms "are good at optimizing what you ask them to optimize, but they cannot be counted on to do things you'd like them to do but didn't ask for, nor to avoid doing things you don't want but didn't tell them not to do. Thus if we ask for accuracy but don't mention fairness, we won't get fairness. If you ask for one kind of fairness, we'll get that kind but not others." - The Ethical Algorithm
Thinking of ethical algorithms in this way highlights that they won't be perfect, just as humans aren't.
The Ethical Algorithm: The Science of Socially Aware Algorithm Design
A new book by Michael Kearns, Computer Science professor and National Center Chair at UPenn, and Aaron Roth, CIS professor at UPenn, provides an excellent overview of the current research related to ethical algorithm development.
This timely and important book lays the foundational knowledge necessary for building the new "ethical literacy" by addressing questions such as:
- What are the pros/cons of differential privacy?
- What type of algorithmic fairness should I use? Statistical parity? Equality of false negatives? What are the pros/cons of each?
- What groups am I protecting?
- How do I arrive at a set of "reasonable" choices for the trade-off between accuracy and fairness? i.e., defining an optimal Pareto frontier
- What's the risk of gerrymandering? How do I control for it?
- and many others
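To see why the choice of fairness definition matters, here's a small sketch, again with toy data I constructed for illustration (not an example from the book): a predictor that is equally noisy for both groups satisfies equality of false negatives almost exactly, yet badly violates statistical parity, simply because the groups have different base rates.

```python
import random

random.seed(1)

# Toy population: (group, true label, prediction). Groups differ in
# their base rate of truly qualifying; the predictor flips the true
# label 10% of the time, identically for both groups.
people = []
for _ in range(2000):
    group = random.choice(["A", "B"])
    label = 1 if random.random() < (0.6 if group == "A" else 0.3) else 0
    pred = label if random.random() > 0.1 else 1 - label
    people.append((group, label, pred))

def approval_rate(grp):
    """Statistical parity compares these overall positive rates."""
    sel = [p for p in people if p[0] == grp]
    return sum(p[2] for p in sel) / len(sel)

def false_negative_rate(grp):
    """Equality of false negatives compares P(pred=0 | truly qualified)."""
    sel = [p for p in people if p[0] == grp and p[1] == 1]
    return sum(1 - p[2] for p in sel) / len(sel)

parity_gap = abs(approval_rate("A") - approval_rate("B"))
fnr_gap = abs(false_negative_rate("A") - false_negative_rate("B"))

print(f"statistical parity gap:  {parity_gap:.2f}")
print(f"false-negative-rate gap: {fnr_gap:.2f}")
```

The same predictor looks fair under one definition and unfair under the other, which is exactly why picking a definition, and knowing which groups it protects, is a human decision rather than a technical default.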
No blog post will do justice to the depth of examples and detail Michael and Aaron go into, so I won't attempt it here.
A critical lesson I learned in reading The Ethical Algorithm is that while science can tell us the pros and cons of the different definitions of fairness, it won't decide for us. We need to do that. The responsibility is ours to take the important first steps towards developing and scaling ethical algorithms by forming our collective understanding.