A few months ago, I was on a podcast, and the host asked me, "What is the difference between explainable AI and responsible AI? And do we need both?" Here is how I responded.
Think of an air crash investigation; it is a classic analogy for explainable AI. When something goes wrong, say there is an accident, you first find the black box, open it, analyze it, and go through the whole sequence of events. Then you work out what happened, why it happened, and how to prevent it next time. But that is a post-facto exercise, a postmortem. You are not avoiding the incident in the first place.
The responsible approach, by contrast, is to train your pilots and your crew to avoid these kinds of mishaps, and to build your operations in a way that prevents such accidents from happening.
Explainable AI is post-facto: it is necessary after the fact. Responsible AI, on the other hand, is essential for preventing mishaps from happening in the first place.
I think if you do an excellent job of being responsible, explainability will not be an issue, because it would rarely, if ever, be required.
But incidents do happen, so pragmatically, you must insist on both.
People must behave responsibly with technology, while the technology itself must be explainable!