As you can see in the diagram, an inverse relationship exists between the accuracy and interpretability of AI methods: in general, the greater the interpretability, the lower the accuracy, and vice versa.
Some less accurate models remain popular because they offer simulatability (a human can step through their reasoning), algorithmic transparency (the computation process is fully explainable), and decomposability (each part of the model has an intuitive explanation).
As Deep Learning and Reinforcement Learning gain popularity, the demand to explain complex neural networks has soared, driving the development of XAI tools. The ultimate goal: responsible, traceable and understandable AI systems.
The Evolution of Explainable AI
Recent survey results from PwC identify XAI as one of the top AI technology trends. As the accuracy of AI models has improved, customers now challenge Data Scientists and Technologists to answer the question, "How do I trust AI decision making?" This problem is being approached with both model-specific and model-agnostic techniques:
Model-Specific Techniques
These deal with the inner workings of an algorithm or model to interpret its results. They can be applied to partly explainable models, but there is also research, such as White Box Models, aiming at complete explainability.
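As an illustration of model-specific interpretation, consider a linear scoring model: it is decomposable, so each feature's contribution to a prediction can be read off directly from its weight. The weights, feature names and applicant values below are invented for this sketch, not taken from any real model:

```python
# Model-specific interpretation sketch: a linear model is decomposable,
# so every feature's contribution to a prediction is simply its weight
# times its value. All numbers here are illustrative.

weights = {"income": 0.8, "debt": -1.2, "age": 0.1}
bias = 0.05

def predict(features):
    """Linear score: bias plus the sum of per-feature contributions."""
    return bias + sum(weights[name] * value for name, value in features.items())

def explain(features):
    """Per-feature contribution to the score -- the 'decomposable' part."""
    return {name: weights[name] * value for name, value in features.items()}

applicant = {"income": 1.5, "debt": 0.5, "age": 0.3}
print(predict(applicant))  # overall score
print(explain(applicant))  # contribution of each feature to that score
```

Because the score is just a sum, the explanation is exact: the per-feature contributions add up (with the bias) to the prediction itself, which is what full algorithmic transparency looks like in practice.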
Model Agnostic Techniques
These work from outside a model, analyzing the relationships between the model's input features and its output results to infer the underlying mechanism.
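One such technique, permutation importance, can be sketched in a few lines: shuffle one input feature at a time and measure how much the model's error grows, touching only the model's inputs and outputs. The toy model and data below are invented for illustration:

```python
import random

# Model-agnostic sketch: permutation importance. The "model" is treated as
# a black box -- we only call it, never inspect it. Toy model and data are
# illustrative.

def model(row):
    # Hidden mechanism: strong dependence on feature 0, weak on feature 1,
    # and feature 2 is ignored entirely.
    return 3.0 * row[0] + 0.5 * row[1]

random.seed(0)
data = [[random.random() for _ in range(3)] for _ in range(200)]
targets = [model(row) for row in data]

def mse(rows):
    """Mean squared error of the black-box model on the given rows."""
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(feature_idx):
    """Error increase when one feature's column is shuffled across rows."""
    shuffled_col = [row[feature_idx] for row in data]
    random.shuffle(shuffled_col)
    perturbed = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                 for row, v in zip(data, shuffled_col)]
    return mse(perturbed) - mse(data)

for j in range(3):
    print(f"feature {j}: importance {permutation_importance(j):.3f}")
```

Shuffling the feature the model leans on most causes the largest error increase, while shuffling the ignored feature changes nothing; the ranking emerges purely from input-output behaviour.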
Some out-of-the-box tooling already exists; one example is LIME (available as an R package), which, in basic terms, estimates the local decision boundary around an individual prediction.
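LIME's core idea can be sketched from scratch: sample perturbed inputs around one instance, query the black-box model, and fit a proximity-weighted linear surrogate whose slope serves as the local explanation. The following is a hand-rolled one-dimensional illustration of that idea, not the actual LIME package:

```python
import math
import random

# Black-box model we want to explain locally. Illustrative only.
def black_box(x):
    return math.tanh(3.0 * (x - 0.5))

def local_surrogate(x0, model, n_samples=500, width=0.3):
    """LIME-style sketch: fit a proximity-weighted linear surrogate
    around x0. Returns (intercept, slope); the slope is the local
    explanation of the black box near x0."""
    random.seed(1)  # deterministic sampling for the sketch
    xs = [x0 + random.gauss(0.0, width) for _ in range(n_samples)]
    ys = [model(x) for x in xs]
    # Gaussian proximity kernel: samples near x0 count more.
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    sw = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / sw
    ybar = sum(w * y for w, y in zip(ws, ys)) / sw
    slope = (sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs)))
    intercept = ybar - slope * xbar
    return intercept, slope

_, slope = local_surrogate(0.5, black_box)
print(f"local slope near x=0.5: {slope:.2f}")  # steep: model is sensitive here
_, slope_flat = local_surrogate(3.0, black_box)
print(f"local slope near x=3.0: {slope_flat:.2f}")  # near zero: saturated region
```

The same black box gets a steep linear explanation where it is sensitive and a flat one where it has saturated, which is exactly the "local boundary" intuition behind LIME.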
As machine learning continues to develop, we have to keep asking deeper questions about how we measure the success of these techniques, and whether we have struck the balance of explainability and accuracy that works for our particular use case. Luckily, that's the kind of judgement that only a human can (currently) make.
To find out how we are using these methods, contact the UBS Strategic Development Lab team at SDL@ubs.com.