Ethics of AI in a World That Isn’t Black or White

AI is always in the news these days. Google was recently hit with complaints that it isn’t diverse enough and doesn’t treat AI rollouts with enough care. I can’t speak to these specific arguments, but I can share my thoughts on the ethics of AI. It seems the future is making us revisit the past.

What Are the Checks & Balances in Your System?

Every good system has checks and balances. Take US politics, for example. It’s common to hear that “US politics is broken” and that “democracy is dying”. While I can’t predict the future, I can see that the US political system’s design has beautiful checks and balances.

The House checks the Senate and vice versa. Both houses check the power of the president. The president, in turn, can check overambitious prosecutions by granting federal pardons to whomever they please.

It’s easy to disagree with any one of these specific parts of the system. We may think that the Senate is blocking laws from being enacted or complain that the president has too much power, especially when it comes to pardons. However, we can’t look at the parts in isolation; we have to look at the system holistically.

The same applies to AI. We can’t simply focus on the code and the models. We need to look at the entire system. How will this AI program affect our customers? What is the legal liability? What are the moral implications of the decisions the model is making?

I don’t think you can just let “facts speak for themselves”. Our most important institutions don’t function this way. Take the example of Facebook, which is getting pressure from all sides over its content moderation policies.

It recently implemented an oversight board that reviews specific decisions, such as banning a particular user, and determines whether the decision was correct. The board has the power to overturn the decision. This is an excellent example of a check in the system.

Think Practical Philosophy in the Ethics of AI

Depending on your inclination, you either remember philosophy classes fondly or remember them as nap time. We have been debating ethics and morals for thousands of years. We still read the books of old, including Plato, Aristotle, and Marcus Aurelius.

AI is simply stepping into a world full of history and ambiguity. AI programs need to focus on the practical aspects of philosophy: the tidbits that help us, such as the Stoic approach to problems or the Socratic questioning technique. There are rabbit holes in this world, and you want to avoid them.

In your AI program, answer the following questions:

  • Where do we draw the line, and how does this affect decisions?
  • How do we define our most important terms, such as hate speech, racism, and diversity?
  • How will we review the day-to-day decisions being made by our models?

The World Isn’t Black and White

Before color TV arrived, everyone watched black and white TV. These days, we’ll watch black and white movies to reminisce. Black and white is simple and forces you to focus on the characters and stories.

Unlike TV and films, our world isn’t black and white. I see far too many people taking absolute stands on issues that are complex. This leads to polarization, since there is no room for disagreement.

Think about the law. It’s easy to think that laws are crystal clear. You either break them, or you don’t. And yet, we have an entire system of courts, judges, and juries that interpret the laws and hand down punishments. If the law were truly black and white, we wouldn’t need them.

This is the challenge with AI. The code behind AI models is black and white. It is binary, but the world in which AI is deployed is colorful. It will constantly run up against edge cases. Self-driving cars have had to log millions of miles precisely because they must solve all the edge cases that a human driver can handle after just a few hours behind the wheel.

AI is seen as a technology of absolutes, but it is being deployed in a world of grays. The world isn’t black and white. Some of our most important institutions, like politics and the law, were designed to deal with ambiguity, and AI will have to do the same. Companies should understand the ethics of AI decisions, which means understanding the past, since humanity has been debating ethics and morals for a long time.

One more thing before you go! Do you know how to get more insights out of your data? 

All companies are sitting on a goldmine of data that they haven't fully explored. It's not about technology or capturing more data. The key is to learn how to make the most of your current data and convert it into actionable insights. This is the main idea behind my first book, The Data Mirage: Why Companies Fail to Actually Use Their Data.

I'm excited to announce the release of the book through all major retailers. If you're interested, you can download the first chapter for free using the form below. You'll learn what the best data-driven companies do differently and how to make sure you're playing the right data game.
