Transparency is Key: Exploring the Promise and Potential of Explainable AI

If you’ve been following the news, or at least reading our posts in this SWForum.eu discussion, you’ve noticed that artificial intelligence (AI) is rapidly transforming our world in a variety of ways, from healthcare and transportation to finance and entertainment. The drivers of change are often deep learning algorithms with enormous potential. However, as AI becomes more advanced, it is also becoming more complex and difficult to understand. This creates challenges for ensuring that AI is used in a safe, ethical, and responsible manner, and as we’ve been discussing in this forum, ethics and security are fundamental to the future of AI. Towards this aim, much effort has gone into building AI that is not a “black box”: AI whose procedures we can understand and whose automated decisions we can trace. That understanding is fundamental when algorithms decide on loans in banking, drug prescriptions in healthcare, or employment.

Explainable AI (XAI) refers to AI systems that are designed to provide explanations for their actions and decisions. XAI aims to make AI more transparent and understandable, so that humans can better grasp how and why AI systems reach the decisions they reach. This is particularly important in situations where AI is used to make decisions with a significant impact on people’s lives, such as in finance, employment, law enforcement or healthcare (a topic that occupied an important space at this year’s AI in Medicine (AIME) conference, hosted by Slovenia in early June). Even when we understand the goals specified by the mathematics of the algorithms an AI employs, in deep learning methods it is often impossible to gain insight into the internal workings of the models. Although these concerns have been with us for a long time, 2023 is the year of the first World Conference on Explainable Artificial Intelligence, taking place in my birth city of Lisbon, Portugal. Again, this is common ground for discussions across a diversity of domains, including Computer Science, Psychology, Philosophy, and Social Science. The premise is that the outcomes of such discussions could shed light on how XAI could address some of the problems of AI highlighted in the Regulation of the European Parliament and of the Council (the AI Act) by, e.g., “laying down harmonised rules on AI and amending certain Union legislative acts”.
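
To make this concrete, here is a minimal sketch of one common post-hoc explanation technique, permutation importance: a model-agnostic way of asking which inputs a trained model’s decisions actually depend on. The synthetic data and scikit-learn setup are illustrative choices of mine, not a method prescribed by the conference or the regulation.

```python
# A minimal, hedged sketch of post-hoc explanation via permutation
# importance (model-agnostic: it works for any fitted classifier).
# The data and model choices below are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular decision data (e.g., loan applications).
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque model: hundreds of trees, no single human-readable rule.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy
# drops: a large drop means the model's decisions rely on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop when shuffled = {imp:.3f}")
```

Explanations like these do not open up the model itself, but they do let a human check whether a decision rests on sensible inputs.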

There are a number of benefits to using XAI. First and foremost, XAI can help to increase trust in AI systems, the lack of which continues to be the bottleneck in their adoption across industry. When humans are able to understand the reasoning behind an AI system’s decisions, they are more likely to trust that system and feel confident that it is making decisions in a fair and impartial manner. This is particularly important in situations where AI is used to make decisions that can have a significant impact on people’s lives, such as getting a loan, a job or a sentence. XAI can also help to identify and correct biases in AI systems (a topic that we will come back to in one of our forthcoming posts). By providing explanations for their actions and being transparent in their automation, AI systems can reveal when they are making biased decisions based on factors such as race, gender, or age. This can help to ensure that AI is used in a fair and equitable manner, and can help to prevent discrimination and other harmful outcomes.
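
As a hedged illustration of the bias point, consider a transparent model trained on deliberately biased synthetic data: for a linear model, the learned weights are themselves the explanation, and a substantial weight on a sensitive attribute is an immediate red flag. The feature names and data below are invented for this sketch and stand in for no real lending system.

```python
# A hedged sketch of explanations surfacing bias: we generate labels that
# unfairly depend on a sensitive attribute, then read the model's own
# explanation (its coefficients) to detect it. All names/data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(50, 15, n)   # legitimate signal
gender = rng.integers(0, 2, n)   # sensitive attribute (0/1)
# Biased ground truth: approval unfairly favours gender == 1.
approved = ((income + 10 * gender + rng.normal(0, 5, n)) > 55).astype(int)

# Standardise the inputs so the two coefficients are comparable in scale.
X = StandardScaler().fit_transform(np.column_stack([income, gender]))
model = LogisticRegression().fit(X, approved)

# For a linear model, the coefficients ARE the explanation: a sizeable
# weight on "gender" means the model is using it to decide.
for name, coef in zip(["income", "gender"], model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
```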

Another benefit of XAI is that it can help to improve the usability and effectiveness of AI systems. When humans understand how an AI system makes decisions, they are better able to use that system effectively and make the most of its capabilities. This helps to maximize the benefits of AI and ensure that it is used to its full potential. However, there are also challenges associated with XAI. One challenge is that creating XAI systems can be difficult and time-consuming: it may mean setting aside capabilities that cannot be made transparent, and thereby compromising the potential of a more powerful but “black box” AI. It requires significant investment in research and development, as well as in the training of AI models and the creation of explanation mechanisms. Additionally, XAI systems can be less accurate and less efficient than non-explainable AI systems, which can limit their usefulness in certain applications.
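
The trade-off in that last sentence can be seen in miniature by pitting a model small enough to read in full against a larger, opaque ensemble on the same task; the dataset, depth limit and model choices below are arbitrary illustrative assumptions.

```python
# A rough sketch of the interpretability/accuracy trade-off: a shallow,
# fully readable decision tree versus an opaque boosted ensemble.
# Dataset and hyperparameters are arbitrary illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: shallow enough that the entire model fits on one screen.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
# Opaque: a hundred additive trees with no single readable rule set.
ensemble = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("shallow tree accuracy:    ", tree.score(X_test, y_test))
print("boosted ensemble accuracy:", ensemble.score(X_test, y_test))
print(export_text(tree))  # the whole interpretable model, printed as rules
```

On an easy dataset the accuracy gap may be negligible, which is precisely the point of Rudin’s argument mentioned below: for many high-stakes tasks, the interpretable model gives up little or nothing.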

And to open up more questions and make knowledge more transparent and accessible, there are a number of initiatives out there, one of which is the open access first release of Patrick Hall and Navdeep Gill’s book “An Introduction to Machine Learning Interpretability”, providing an applied perspective on “Fairness, Accountability, Transparency, and Explainable AI”, published by O’Reilly and offered by H2O.ai. In the same year, 2019, the director of Duke’s Interpretable Machine Learning Lab, Cynthia Rudin, published in Nature Machine Intelligence a plea to “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead” (an article freely accessible through arXiv.org). The topic is also well present at the forefront of the car industry, through the seven principles for AI at the BMW Group, and of big tech, through IBM’s AI Explainability 360 and Google Cloud’s Explainable AI.

Explainable AI has the potential to play a key role in ensuring that AI is used in a safe, ethical, and responsible manner. By providing explanations for their actions, AI systems can increase trust, identify and correct biases, and improve usability and effectiveness. While there are challenges associated with XAI, the benefits are significant and can help to ensure that AI is used to its full potential, eventually complying with the established norms and ethics regulations foreseen for the near future. As AI continues to transform our world, XAI will likely become an increasingly important tool for ensuring that AI is used in a way that benefits everyone. What do you think about this honest commitment of AI to a less “black box” future for the digital transformation of our society’s workflows? Share your thoughts!