

Explainable AI versus Responsible AI


A few months ago, I was on a podcast, and the host asked me, "What is the difference between explainable AI and responsible AI? And do we need both?" This was my response.

Think of an air crash investigation; it is a classic analogy for explainable AI. When something goes wrong, say there is an accident, you first find the black box, open it, analyze it, and go through the whole sequence of events. Then you understand what happened, why it happened, and how you can prevent it next time. But that is a post-facto operation, a postmortem. It does not avoid the incident in the first place.

The responsible approach, in contrast, is to train your pilots and crew to avoid these kinds of mishaps, and to build your operations in such a way that these accidents never happen.

Explainable AI is post-facto; it is necessary as an after-the-fact analysis. Responsible AI, on the other hand, is essential to prevent mishaps from happening at all.

I think that if you do an excellent job of being responsible, explainability will not be an issue, because it will rarely, if ever, be required.

But incidents do happen, so pragmatically, you must insist on both.

People must behave responsibly with technology whilst technology must be explainable!
