Bridging the gap between academia and industry in explainable AI

Why do we need explainable AI?

Machine learning and AI have shown tremendous promise in addressing some of the hardest problems faced in industry. With more data available now than ever before, we can identify complex relationships present in practical problems and utilise them to generate predictions, forecasts and insights that benefit businesses. However, many AI models are "black boxes": models whose internal workings are not well understood, often because of the non-linear relationships they exploit in the data. When AI is applied to industrial problems, a black box is no longer sufficient; it is vital that we understand why a model is generating the outputs that it is.
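To make this concrete, the sketch below (an illustration added here, not part of the original project) shows one simple way to probe a black box: fit an opaque gradient-boosted model, then apply permutation importance, a basic model-agnostic explainability technique, to see which input features the model's predictions actually rely on. The dataset, model choice and parameters are all assumptions made for the example.

```python
# A minimal sketch of explaining a "black box" with permutation importance.
# All data and parameter choices here are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for an industrial prediction problem:
# 8 features, of which only 3 genuinely drive the target.
X, y = make_regression(n_samples=500, n_features=8, n_informative=3,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but its internal decision process is opaque.
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does performance degrade when one
# feature is shuffled? Large drops flag features the model depends on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Techniques like this are model-agnostic: they interrogate the model only through its inputs and outputs, which is exactly what makes them attractive when the model itself cannot be inspected directly.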

What did the summer project involve?

To establish a greater understanding of AI outputs, explainable AI has become an active research field, developing methods and tools to understand the complex behaviour of AI models. In the summer of 2022, we explored the limitations and possible extensions of these explainable AI methods through a summer project coordinated between Oxford University and the Smith Institute. Over the course of the summer, Gabriela van Bergen Gonzalez-Bueno joined the Smith Institute and investigated the current state of academic research on explainable AI, extending it to address the concerns of industry. She worked under the guidance of Dr Kieran Kalair, a senior mathematical consultant at the Smith Institute with a particular interest in explainable AI who wanted to explore the field further.