Seminar XPLN: Exploring Explainability in NLP

Master Seminar, Saarland University, Summer Semester 2024

Together with Simon Ostermann, we offered a seminar on explainable and interpretable AI.

Please find the course page on his personal website: https://simonost.github.io/home/teaching/xpln-24.html.

Seminar Content

The rise of deep learning in AI has dramatically increased the performance of models across many sub-fields such as natural language processing and computer vision. In the last five years, large pretrained language models (LLMs) and their variants (BERT, ChatGPT, etc.) have changed the NLP landscape drastically. These models have grown larger and larger, reaching increasingly impressive performance peaks and sometimes even surpassing humans.

A central issue with deep learning models with millions or billions of parameters is that they are essentially black boxes: from the model's parameters alone, it is not clear why a model exhibits a certain behavior or makes a certain classification decision. Understanding the inner workings of such large models is, however, extremely important, especially when AI takes on critical tasks, e.g. in the medical or financial domain. Trustworthiness and fairness are important dimensions that such large models should adhere to, but they are often not taken into account.

In this seminar we will shine a spotlight on the rapidly growing field of interpretable and explainable AI (XAI), which develops methods to peek into the black box that LLMs are. We will introduce general methods used in XAI and look into insights gained from applying these methods to LLMs. Depending on the students' preferences, we will cover some of the topics listed below.
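To give a flavor of what such methods look like, here is a minimal sketch of one of the simplest attribution techniques, input-times-gradient, applied to a toy linear sentiment classifier. Everything here (the vocabulary, the weights, the helper names) is made up for illustration; real XAI toolkits apply the same idea to full neural models via automatic differentiation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def input_times_gradient(weights, features):
    """Per-feature relevance scores via input-times-gradient.

    For a linear model y = sigmoid(w . x), the gradient of the logit
    with respect to feature x_i is simply w_i, so the attribution of
    feature i reduces to w_i * x_i.
    """
    return [w * x for w, x in zip(weights, features)]

# Toy bag-of-words input for the sentence "great movie, boring plot".
vocab = ["great", "movie", "boring", "plot"]
weights = [2.0, 0.1, -1.5, 0.0]   # assumed weights, not learned
features = [1.0, 1.0, 1.0, 1.0]   # each token occurs once

scores = input_times_gradient(weights, features)
# Rank tokens by the magnitude of their contribution to the prediction.
for token, score in sorted(zip(vocab, scores), key=lambda p: -abs(p[1])):
    print(f"{token:8s} {score:+.2f}")
```

The ranking shows "great" and "boring" dominating the prediction while "movie" and "plot" contribute little, which is exactly the kind of token-level explanation that attribution methods aim to provide for real LLMs.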