Opening the black box: an introduction to Model Explainability

AI and Machine Learning are powerful techniques for generating insights into your workforce. But as many organisations are only now starting to adopt them, blindly trusting predictions coming out of black-box models might still sound quite daunting to most. This is where Model Explainability comes in.

What is model explainability and why do we need it?

Machine learning has been on a roll during the past decade. More and more businesses have recognised the potential of the technology and have started to implement predictive models to drive decision-making processes across their value chain.

Driven by the widespread availability of cheap computational power and an abundance of high-quality datasets, these predictive models have become increasingly sophisticated and complex. Recently, much of the attention has gone to artificial neural networks and deep learning, which are especially well suited to capturing the underlying interactions between features in large and complicated datasets. In many applications, these models turn out to outperform other, less complex models.

However – as is always the case in machine learning – increased performance comes with a trade-off. As models become more complex, their internal representation does as well. This makes it incredibly difficult to get a proper understanding of how complex machine learning models arrived at a certain prediction – even when you trained the model yourself. 

This trade-off, usually referred to as the performance-interpretability trade-off, is one of the key design choices that data scientists face every day.

So why do we need model interpretability? Isn’t it sufficient to know the prediction that comes out of our machine learning model? For many people and industries, it is not.

While the fundamental concepts of Machine Learning have been around for quite a while, most people are only now becoming acquainted with the technology and its wide variety of applications. Therefore, blindly trusting the prediction that comes out of a black-box model, without the slightest form of explainability, might still sound quite daunting to most.

For some industries, the importance of the prediction’s explanation might even surpass the importance of the prediction itself. Imagine a situation where your loan request gets denied because ‘the algorithm told us so’. Or a situation in which your physician informs you that you are seriously ill, but is incapable of explaining to you how he or she arrived at that conclusion. 


When it goes wrong: the Amazon recruitment case

Another striking example, this time from the field of Human Resources, is the infamous Amazon recruitment case. In 2014, a team of Amazon data scientists tried to develop a hiring tool capable of automatically scanning through candidates' resumes with the goal of selecting the most promising hires.

Quite quickly, it became clear that their engine did not rate candidates in a gender-neutral way and was favouring male candidates over female candidates. The main reason for this bias was the dataset used during model training, which reflected the hiring practices Amazon had followed during the past decade.

Since Amazon had mainly recruited male candidates for technical roles such as software engineer and solutions architect, the algorithm learned to penalise resumes that pointed to female candidates, for example by detecting words like "women" or "women's" and by recognising graduates from all-women's colleges. Evidently, the program was shut down once it became clear that the model exhibited discriminatory behaviour towards female candidates.

It is clear that in situations like this, where human interactions are involved, explanations are an absolute necessity.

Biases and preferences in algorithms

Perhaps more importantly, model explanations allow us to detect possible biases or preferences that may have crept into a machine learning algorithm. As data scientists, we are on the front line of safeguarding ethical practices in Artificial Intelligence and Machine Learning: we do not want our algorithms to exhibit discriminatory or preferential behaviour towards certain characteristics or features. Model explanations therefore provide data scientists with a much-needed tool to check for potential biases before putting newly trained algorithms into production.

Last but not least: model explainability can also be a legal obligation. According to the European General Data Protection Regulation (GDPR), every person has the right to obtain an explanation of a decision that was made solely on the basis of automated processing. In short, the GDPR states that businesses should strive for transparency and accountability when developing automated decision-making tools.

So how do we get this additional layer of explainability on top of our prediction? How can we open up the black box to generate truly trustworthy models for human-sensitive applications?

One way to do this is to leverage so-called explainability models. These models are built on top of your regular machine learning model and have the sole purpose of explaining how your predictive model arrived at a certain prediction. Driven by the increased interest in ethical and explainable AI (XAI), researchers have spent a lot of effort on developing efficient and trustworthy methods that do exactly that.
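To make this a little more concrete, below is a minimal sketch of one such approach, a global surrogate model: an interpretable decision tree is trained to mimic the predictions of a black-box classifier, and we then inspect the surrogate's rules instead of the black box itself. The synthetic dataset, the choice of a random forest as the black box, and the feature names are purely illustrative assumptions, not a prescription.

```python
# Minimal sketch of a global surrogate model (illustrative assumptions only):
# a "black-box" random forest is approximated by a small decision tree
# trained on the black box's own predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Hypothetical dataset standing in for, e.g., historical HR data
X, y = make_classification(n_samples=2000, n_features=8, random_state=42)

# 1. Train the black-box model that actually produces our predictions
black_box = RandomForestClassifier(n_estimators=200, random_state=42).fit(X, y)

# 2. Train a simple, interpretable surrogate on the black box's predictions
surrogate = DecisionTreeClassifier(max_depth=3, random_state=42)
surrogate.fit(X, black_box.predict(X))

# 3. Measure how faithfully the surrogate mimics the black box (fidelity)
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.1%}")

# 4. Inspect the surrogate's decision rules as a human-readable explanation
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))
```

The fidelity score indicates how well the surrogate approximates the black box; if it is low, the extracted rules should not be trusted as an explanation of the original model.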

In the next part of this series on Model Explainability, I will discuss one of the techniques that has gained widespread popularity over the last couple of years: Shapley Additive Explanations (SHAP).
