Explainable Artificial Intelligence (XAI) is responsible AI

Can you trust your Artificial Intelligence (AI)?

by UBS 07 Jan 2019

As the UBS Strategic Development Lab continues to innovate in this area, creating and applying new kinds of AI and machine learning, how can we be confident that our programs are working as intended? One attempt to answer this important question is Explainable Artificial Intelligence (XAI). XAI is an emerging branch of AI in which systems are programmed to provide the reasoning behind every decision they make.

The problem

Here's an insight into the dilemma…

Broadly speaking, AI methods are either:

  • Simple (rule-based and explainable)
  • Complex (not designed to be interpretable)

Depending on the application, there are clear ethical, legal and business reasons to ensure we can explain how AI algorithms and models work. Unfortunately, the simple AI methods that are explainable to the average person lack the accuracy that would optimise AI decision making, while many of the methods that deliver optimal accuracy, such as Artificial Neural Networks (ANNs), are complex models that are not designed to be interpretable.

As the diagram shows, there is an inverse relationship between the accuracy and the interpretability of AI methods: the more interpretable a method is, the less accurate it tends to be, and vice versa.
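To make the trade-off concrete, here is a minimal sketch, assuming a tabular classification task and scikit-learn (neither is specified in the article): a shallow decision tree whose rules can be printed and read end to end, compared against a random forest that typically scores higher but cannot be read as one short rule set.

```python
# Illustrative only: the dataset and library are assumptions, not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# A shallow tree: every prediction can be traced through a handful of printed rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# A random forest: usually more accurate on this task, but its hundreds of trees
# cannot be read as a single, human-sized set of rules.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("Decision tree accuracy:", round(tree.score(X_test, y_test), 3))
print("Random forest accuracy:", round(forest.score(X_test, y_test), 3))
print(export_text(tree, feature_names=list(data.feature_names)))
```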

Some less accurate models are still popular because a human can step through their calculations (simulatability), their computation process is fully explainable (algorithmic transparency), and each part of the model has an intuitive explanation (decomposability).
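Decomposability is easiest to see in a linear model. The sketch below, again assuming scikit-learn and a standard dataset purely for illustration, takes apart a single logistic regression prediction into per-feature contributions (coefficient multiplied by the scaled feature value).

```python
# Illustrative only (assumed dataset and library): in a linear model each
# feature's contribution to the decision score is coefficient * feature value,
# so one prediction can be decomposed piece by piece.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=5000).fit(X, data.target)

row = X[0]                            # one individual prediction
contributions = model.coef_[0] * row  # per-feature contribution to the log-odds

# Show the five largest contributions, positive or negative.
for i in np.argsort(np.abs(contributions))[::-1][:5]:
    print(f"{data.feature_names[i]:>25}: {contributions[i]:+.3f}")
print(f"{'intercept':>25}: {model.intercept_[0]:+.3f}")
```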

As Deep Learning and Reinforcement Learning gain popularity, the demand to explain these complex neural networks has soared, driving the development of XAI tools. The ultimate goal: responsible, traceable and understandable AI.

The evolution of explainable AI

Recent survey results from PwC identify XAI as one of the top AI technology trends. As the accuracy of AI models has improved, customers are increasingly challenging data scientists and technologists to answer the question, "How do I trust AI decision making?". This problem is being approached with both model-specific and model-agnostic techniques:

Model-specific techniques

These deal with the inner workings of an algorithm or model in order to interpret its results. They can be applied to partly explainable models, and there is also research, such as work on white-box models, aimed at achieving complete explainability.
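As a hedged illustration of a model-specific technique, the sketch below (assuming a tree-based scikit-learn model and a standard dataset, neither named in the article) reads the impurity-based feature importances straight out of a fitted gradient boosting classifier; the technique only makes sense because we know how this particular family of models is built.

```python
# Illustrative only (assumed model and dataset): a model-specific technique
# reads interpretation data out of the fitted object itself, in this case the
# impurity-based feature importances of a tree ensemble.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

# feature_importances_ exists only because we know this is a tree ensemble;
# the same attribute would be absent or meaningless on, say, a neural network.
for i in np.argsort(model.feature_importances_)[::-1][:5]:
    print(f"{data.feature_names[i]:>25}: {model.feature_importances_[i]:.3f}")
```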

Model-agnostic techniques

These work from outside a model, analysing the relationships between its input features and its output results to better understand the underlying mechanism.

There are also some out-of-the-box tools; one example is an R package called LIME which, in basic terms, tries to estimate the local decision boundary around an individual prediction.
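To show the idea rather than the package, the sketch below hand-rolls a LIME-style local surrogate in Python (it does not use the LIME package itself, and every parameter choice is an illustrative assumption): perturb one instance, weight the perturbed samples by proximity, and fit a simple weighted linear model that approximates the complex model's decision boundary in that neighbourhood.

```python
# Hand-rolled illustration of the idea behind LIME (not the LIME package itself):
# approximate a complex model around one instance with a weighted linear model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, data.target)

instance = X[0]
rng = np.random.default_rng(0)

# 1. Perturb the instance with Gaussian noise to sample its neighbourhood.
neighbourhood = instance + rng.normal(scale=0.5, size=(500, X.shape[1]))

# 2. Ask the black-box model for its predicted probabilities on those samples.
target = black_box.predict_proba(neighbourhood)[:, 1]

# 3. Weight samples by proximity to the original instance (closer = heavier).
distances = np.linalg.norm(neighbourhood - instance, axis=1)
weights = np.exp(-(distances ** 2) / 2.0)

# 4. Fit a simple, interpretable surrogate on the weighted neighbourhood.
surrogate = Ridge(alpha=1.0).fit(neighbourhood, target, sample_weight=weights)

# The surrogate's largest coefficients suggest which features drive the
# black-box model's behaviour near this particular instance.
for i in np.argsort(np.abs(surrogate.coef_))[::-1][:5]:
    print(f"{data.feature_names[i]:>25}: {surrogate.coef_[i]:+.3f}")
```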

As machine learning continues to develop, we have to keep asking deeper questions about how we measure the success of these techniques, and whether we have struck the balance of explainability and accuracy that works for our particular use case. Luckily, that's the kind of job that only a human can (currently) do.

To find out how we are using these methods contact the UBS Strategic Development Lab team at SDL@ubs.com
