Someone Should Be Responsible
Every intelligent system should have an AIC* (Artificial Intelligence Controller) and key individuals accountable for each deployed iteration of the system. Much as corporate controllers and CFOs must sign off on financial disclosures, AICs and key individuals should sign off on the design, deployment, auditing, and output of their company's models.
Should a model harm a person or group, those individuals would be held civilly liable and, in extreme cases, criminally responsible.
Regulatory Compliance Should Be Mandatory, Not Voluntary or Self-Reported
Companies should be required to adhere to a specific set of regulations and should be audited on a regular basis. All businesses utilizing intelligent systems would have to ensure that those systems operate as presented to the public, with a minimal number of adverse results. Those results would be monitored by an AICA* (Artificial Intelligence Compliance Agency), which would track the quality of outputs from companies operating in a given country or union.
We don’t believe companies should be allowed to self-regulate, self-report, or have the option to report or not on the operation, design, and quality of outputs from their systems. Given the level of trust we are putting into these intelligent systems to decide many of the most important milestones in our lives, these systems should be under the highest degree of regular scrutiny.
Clear Rules for How Systems Report Outcome Decision Factors
There should be a standardized manner of reporting outcome decision factors, so that anyone can clearly see and understand all the data points that went into a given output. Companies shouldn't be able to game the system with proprietary explanation schemes that no outsider can understand, and that no one can verify the company itself understands.
Users Should Be Able to Retrieve, Contest, And Remove the Information Used In AI Systems
Users should be able to retrieve, contest, and remove all information any company holds about them. No company should be able to maintain datasets that are closed to scrutiny by the users within them. Users should have absolute transparency into the information collected on them: what exists, what they no longer want stored, and what is incorrect. If they so choose, users can then have some or all of that information updated or removed.
In a data-driven world, where data is the new oil, the producers of that data (i.e., the users) should have complete control over their data. We wouldn’t allow an oil company to drill on our property without compensation, negotiation, and agreement. Why should we allow companies to drill our lives for data without any rights given to us in the short, medium, or long term?
*The AIC (Artificial Intelligence Controller) and AICA (Artificial Intelligence Compliance Agency) are terms developed by us at Wandering Alpha. These agencies do not exist yet, but we hope they will.
The Regulatory Case for Ethical AI
When discussing AI Ethics, the natural question arises: How would a country or union of countries realistically regulate artificial intelligence while still allowing for innovation, growth, and change? At Wandering Alpha, we are huge believers in pragmatic, common-sense guardrails that focus on bringing the regulations that govern society closer to the ethics we aspire to, not the other way around.
Regulations governing AI Ethics should be clear, measurable, realistic, and accountable. Building off work done by The Data Oath, we suggest the following ideas for regulations that would give companies the flexibility to grow systematically while respecting the rights of users and society at large.
What is AI Ethics? Why is it important? Why will it impact all of us?
Implementing Ethical AI isn’t only the right thing to do, it’s the profitable thing to do.
What is the regulatory case for Ethical AI?
Understand the importance of Transparency & Explainability in AI systems.
What happens when intelligent models don’t accurately include or reflect all genders, races, and groups? Why is it so important that models are designed with diversity in mind?