
What is explainable AI?

As algorithms grow more complex and machine learning makes judgements harder to trace, explainable AI is becoming essential

Feb 4 | Magazine | 5 min read

How does AI make decisions? We ought to know because we built it. But high-profile cases of machine learning bias have taught us that while we know what we’re putting into the training data, and how we have programmed the computer to learn from it, we cannot always predict unexpected outcomes.

Why is explainable AI important?

As AI is used increasingly in fields where lives are at stake – medicine, safety controls – and we start to look at scenarios where humans are removed from supervisory roles, knowing that your AI is making the right decisions might not be enough. It will be important, not least from a legal perspective, to be able to show how the AI reached its judgements.

Explainable AI and the ‘black box’ phenomenon

The ‘black box’ phenomenon occurs when AI reaches a decision via processes that are not easy to explain. As cybersecurity experts grappling with data poisoning have shown, machines can be trained on corrupted data and misled. Equally, without any foul play, engineers may not be able to foresee how data will be processed, leading to unexpected outcomes. Explainable AI seeks to address this.

How does explainable AI help?

With explainable AI, the deep learning algorithm not only produces a result, but also shows its workings. That means that where a decision has been reached using a boilerplate algorithm, but where other factors may have had an influence, data scientists will be able to judge whether outlying parameters should have been taken into account. When an autonomous vehicle causes damage, injury or death, an enquiry can use the explainable AI to assess the soundness of the machine’s ‘decision’.
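The idea of ‘showing its workings’ can be illustrated with a minimal sketch. Everything here is hypothetical – the model, the weights and the feature names are illustrative, not drawn from any real system – but it shows how a simple scoring model can report not just a decision, but how much each factor pushed that decision up or down:

```python
# Minimal sketch: a scoring model that reports its result *and* its workings.
# The weights and feature names below are purely illustrative.

def explain_score(features, weights, bias=0.0):
    """Return (score, per-feature contributions) for a linear scoring model."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical loan-screening applicant
applicant = {"income": 0.8, "debt_ratio": 0.3, "years_employed": 0.5}
weights = {"income": 2.0, "debt_ratio": -3.0, "years_employed": 1.0}

score, workings = explain_score(applicant, weights, bias=0.1)
# `workings` shows how much each factor pushed the score up or down,
# e.g. a negative contribution from "debt_ratio".
```

A transparent model like this is the easy case; the harder task, which tools such as LIME and SHAP tackle, is recovering this kind of breakdown for a genuinely opaque model.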

Who is developing explainable AI?

Fujitsu Laboratories is working with Hokkaido University to develop a way to generate ‘counterfactual’ explanations (what might have happened in a different scenario). This combines LIME, an explainable AI technology that gives simple, interpretable explanations for individual decisions, and SHAP, which quantifies how much each explanatory variable contributed to a result. At the moment scientists are applying the work in three fields: diabetes, loan credit screening and wine evaluation.

When can we expect to see explainable AI in the real world?

Fujitsu AI Technology Wide Learning is planned for commercial use this year, and expect the rest of the AI community to seize on explainability as a way to fast-track AI’s wider adoption, and acceptance, by society.
