
Transparency & Explainability

We want AI systems to treat us equally, fairly, and justly. We want to be offered every opportunity available and possible for us, given that intelligent systems can pull together far more information than a human ever could. We want our lives to be genuinely better because an artificial intelligence, rather than a human being, is making the decision. Otherwise, what is the point of the trade-off beyond helping a company make more money? But how do we know an AI system is living up to that promise? How can we verify outcomes when we are unsure of the quality of the results? How can we be sure the system is treating everyone fairly, equally, and justly? The answer is Transparency & Explainability.


When we talk about Transparency & Explainability, what we mean is the following: can the system clearly show how and why it made a certain choice, suggestion, or recommendation? Can it explain all the key variables that went into the calculation and how they were weighted? Is the explanation intelligible at various levels, from expert to everyday user? Can the system be tested for bias, and can the source of that bias be identified? And so on. While this might seem like a lot to ask of a system, these are the same things we would ask of a doctor, police officer, government employee, college admissions officer, human resources officer, or fellow citizen of the country in which we reside. It’s not asking too much; it’s asking for the bare minimum.
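
To make the idea concrete, here is a minimal sketch of what that kind of explanation can look like in practice. It assumes a scikit-learn-style workflow, and the feature names, data, and labels are hypothetical; the point is simply that an interpretable model, such as a logistic regression, can expose which variables it used and the weight it gave each one, which is exactly the visibility a black-box model does not offer on its own.

```python
# A minimal sketch (not a production system) of "show the key variables and
# how they were weighted." Feature names, data, and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical inputs
X = np.array([
    [55_000, 0.30, 4],
    [22_000, 0.65, 1],
    [78_000, 0.20, 9],
    [31_000, 0.55, 2],
], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = denied (toy labels)

# Standardize so the learned weights are directly comparable across features.
X_scaled = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_scaled, y)

# Every weight is visible and inspectable: an auditor, regulator, or affected
# user can see which factors push a decision up or down, and by how much.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight = {weight:+.3f}")
```

The same kind of inspection, whether of weights, feature attributions, or outcome rates across groups, is what the questions above are really asking for.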


Now, there are many arguments in favor of black-box systems: they can be faster, more accurate, more robust, more profitable, and harder to copy. But we ask the question, “are society and the users who make up society the real beneficiaries of the system, or is the company?” We believe that, as it stands, the real beneficiaries are the companies and not the users or society. That asymmetry of benefits is unethical.


This isn’t to say there is no place for black-box systems in business. Opaque systems can be employed for any number of activities where the only constituency affected is the company itself and not its users. Arguments can be made for trading models, identifying black holes, detecting cancer, discovering new drug treatments, and many other applications. But black-box systems shouldn’t be used for decisions that act directly on people: medical treatment recommendations, sentencing recommendations, credit approvals, college admissions, job applications, and the like. Not until those systems can be explained should they be considered for such uses.


So we leave you with this. Humankind has split the atom, sent probes into interstellar space, sequenced the human genome, and developed vaccines for a pandemic in months rather than years. We think it’s fair to ask: why can’t we create systems that treat us the way we aspire to treat one another? Why would we accept a world dominated by machines that aren’t transparent, explainable, and accountable for their actions? In a free society, no person, organization, or system should have unchecked power over users.

