Saliency maps can explain a neural model's predictions by identifying important input features. They are difficult to interpret for laypeople, ...
Maximilian Dustin Nasert. Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods.
We formalize the underexplored task of translating saliency maps into natural language and compare methods that address two key challenges of this approach.
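As a rough illustration of what the simplest, model-free end of this task could look like, the sketch below turns a token-level saliency map into a one-sentence description. The template wording, the top-k cutoff, and the toy scores are assumptions for illustration only, not the method evaluated in the paper.

```python
# Minimal sketch (not the paper's method): a model-free, template-based
# verbalization of a token-level saliency map. Thresholds and wording
# are illustrative assumptions.
from typing import List, Tuple

def verbalize_saliency(tokens_with_scores: List[Tuple[str, float]],
                       top_k: int = 3) -> str:
    """Turn (token, saliency) pairs into a one-sentence explanation."""
    ranked = sorted(tokens_with_scores, key=lambda ts: abs(ts[1]), reverse=True)
    top = [tok for tok, _ in ranked[:top_k]]
    return (f"The prediction was driven mainly by the words "
            f"{', '.join(repr(t) for t in top)}.")

# Example: a toy saliency map for a topic-classification input.
saliency = [("stocks", 0.62), ("rallied", 0.48), ("today", 0.05),
            ("after", 0.02), ("earnings", 0.41)]
print(verbalize_saliency(saliency))
# -> The prediction was driven mainly by the words 'stocks', 'rallied', 'earnings'.
```

Instruction-based methods would instead prompt a language model with the scores and ask it to generate the explanation; this sketch only covers the template-based case.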
Oct 13, 2022. Authors: Nils Feldhus, Leonhard Hennig, Maximilian Dustin Nasert, Christopher Ebert, Robert Schwarzenberg, Sebastian Möller.
In this position paper, we establish desiderata for Mediators, text-based conversational agents which are capable of explaining the behavior of neural models ...
... Maximilian Dustin Nasert and Christopher Ebert and Robert Schwarzenberg and Sebastian M\"{o}ller", booktitle = "Proceedings of the First Workshop on Natural ...
Sep 8, 2024. We conduct a human evaluation of explanation representations across two natural language processing (NLP) tasks: news topic classification and ...