AI has a credibility problem.
Sure, it’s exciting that neural nets can beat the best humans at board games. But how can we let “black box” AI make life-altering decisions about parole hearings, home loans, and self-driving cars?
There must be a better way.
To have Responsible AI,
we must have Understandable AI that:
Learns accurately from trusted data and observations, empowering unbiased decision making.
Reasons transparently, offering detailed explanations that people can interpret.
Exposes the connection between its decisions and its training data, and provably removes unwanted data.
THE FUTURE OF AI
Responsible applications of Understandable AI:
Loan Approvals Are Unbiased
Every loan application is decided on its own merits because bias has been addressed.
We Trust Self-Driving Vehicles
Manufacturers can easily find, understand, and address training gaps before tragic accidents occur.
Healthcare Costs Are Reduced
Physicians and providers use a trusted and transparent system to quickly diagnose and approve procedures.
Maintenance Operations Are Predictive
Predicting and replacing parts before they break keeps costs down and everybody safe.
AI Augments Human Productivity
Computers handle repetitive decisions exactly as scientists would, giving researchers more hands-on time in the lab.
Personal Data Is Fully Deleted
Deleting a social media account definitively removes all personal data and connections from the underlying AI.