Interpretable Machine Learning (IML) / Explainable AI (XAI)

Machine learning models are often called black boxes because their predictions are opaque and difficult for humans to understand. In recent years, numerous post-hoc methods from the field of interpretable machine learning have been developed to gain insights into black-box models and their underlying data. Model interpretation also helps to validate and debug models, which further improves understanding. In our research group, we explore and implement approaches for interpretable machine learning. Our research focus lies on model-agnostic methods for tabular data.
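To illustrate the model-agnostic idea, the sketch below computes permutation feature importance, a classic post-hoc method studied in several of our publications. It is a minimal illustration using scikit-learn and synthetic data; the function and variable names are illustrative and do not correspond to the APIs of our software packages (e.g. iml, fmeffects).

```python
# Illustrative sketch (not our packages' API): permutation feature
# importance treats the model as a black box and only queries its
# predictions, so the same code works for any fitted estimator.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=500, n_features=4, n_informative=2,
                       random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

def permutation_importance(model, X, y, n_repeats=5):
    """Increase in MSE after shuffling each feature column."""
    baseline = mean_squared_error(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature-target link
            scores.append(mean_squared_error(y, model.predict(X_perm)))
        importances[j] = np.mean(scores) - baseline
    return importances

imp = permutation_importance(model, X, y)
print(imp.round(2))  # larger values = feature mattered more
```

Because only `model.predict` is used, the estimator could be swapped for any other regressor without changing the interpretation code, which is exactly what "model-agnostic" means.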

Focus Areas:

Projects and Software

Members

Name       Position
Dr. Giuseppe Casalicchio       PostDoc / Lead
Dr. Susanne Dandl       PostDoc
Fiona Katharina Ewald       PhD Student

Former Members

Name       Year
Dr. Christoph Molnar       2017 – 2021
Dr. Quay Au       2017 – 2020
Dr. Gunnar König       2019 – 2023
Dr. Julia Herbinger       2020 – 2024
Christian Scholbeck       2019 – 2024

Contact

Feel free to contact us if you are looking for collaborations:

giuseppe.casalicchio [at] stat.uni-muenchen.de

Publications

  1. Rundel D, Kobialka J, von Crailsheim C, Feurer M, Nagler T, Rügamer D (2024) Interpretable Machine Learning for TabPFN. In: 2nd World Conference on eXplainable Artificial Intelligence.
  2. Herbinger J, Dandl S, Ewald FK, Loibl S, Casalicchio G (2024) Leveraging Model-Based Trees as Interpretable Surrogate Models for Model Distillation. In: Nowaczyk S, Biecek P, Chung NC, Vallati M, Skruch P, Jaworek-Korjakowska J, Parkinson S, Nikitas A, Atzmüller M, Kliegr T, Schmid U, Bobek S, Lavrac N, Peeters M, van Dierendonck R, Robben S, Mercier-Laurent E, Kayakutlu G, Owoc ML, Mason K, Wahid A, Bruno P, Calimeri F, Cauteruccio F, Terracina G, Wolter D, Leidner JL, Kohlhase M, Dimitrova V (eds) Artificial Intelligence. ECAI 2023 International Workshops, pp. 232–249. Springer Nature Switzerland, Cham.
  3. Dandl S, Casalicchio G, Bischl B, Bothmann L (2023) Interpretable Regional Descriptors: Hyperbox-Based Local Explanations. In: Koutra D, Plant C, Gomez Rodriguez M, Baralis E, Bonchi F (eds) ECML PKDD 2023: Machine Learning and Knowledge Discovery in Databases: Research Track, pp. 479–495. Springer Nature Switzerland, Cham.
  4. Dandl S, Hofheinz A, Binder M, Bischl B, Casalicchio G (2023) counterfactuals: An R Package for Counterfactual Explanation Methods.
  5. Molnar C, König G, Bischl B, Casalicchio G (2023) Model-agnostic Feature Importance and Effects with Dependent Features–A Conditional Subgroup Approach. Data Mining and Knowledge Discovery.
  6. Scholbeck CA, Funk H, Casalicchio G (2023) Algorithm-Agnostic Feature Attributions for Clustering. In: Longo L (ed) Explainable Artificial Intelligence, pp. 217–240. Springer Nature Switzerland, Cham.
  7. Molnar C, Freiesleben T, König G, Herbinger J, Reisinger T, Casalicchio G, Wright MN, Bischl B (2023) Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process. In: Longo L (ed) Explainable Artificial Intelligence, pp. 456–479. Springer Nature Switzerland, Cham.
  8. Herbinger J, Bischl B, Casalicchio G (2023) Decomposing Global Feature Effects Based on Feature Interactions. arXiv preprint arXiv:2306.00541.
  9. Löwe H, Scholbeck CA, Heumann C, Bischl B, Casalicchio G (2023) fmeffects: An R Package for Forward Marginal Effects. arXiv preprint arXiv:2310.02008.
  10. Scholbeck CA, Moosbauer J, Casalicchio G, Gupta H, Bischl B, Heumann C (2023) Position Paper: Bridging the Gap Between Machine Learning and Sensitivity Analysis. arXiv preprint arXiv:2312.13234.
  11. Dandl S, Pfisterer F, Bischl B (2022) Multi-Objective Counterfactual Fairness. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 328–331. Association for Computing Machinery, New York, NY, USA.
  12. Au Q, Herbinger J, Stachl C, Bischl B, Casalicchio G (2022) Grouped Feature Importance and Combined Features Effect Plot. Data Mining and Knowledge Discovery 36, 1401–1450.
  13. Herbinger J, Bischl B, Casalicchio G (2022) REPID: Regional Effect Plots with implicit Interaction Detection. International Conference on Artificial Intelligence and Statistics (AISTATS) 25.
  14. Moosbauer J, Casalicchio G, Lindauer M, Bischl B (2022) Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution. arXiv:2111.14756 [cs.LG].
  15. Scholbeck CA, Casalicchio G, Molnar C, Bischl B, Heumann C (2022) Marginal Effects for Non-Linear Prediction Functions. To Appear in Data Mining and Knowledge Discovery.
  16. Molnar C, König G, Herbinger J, Freiesleben T, Dandl S, Scholbeck CA, Casalicchio G, Grosse-Wentrup M, Bischl B (2022) General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models. In: Holzinger A, Goebel R, Fong R, Moon T, Müller K-R, Samek W (eds) xxAI - Beyond Explainable AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers, pp. 39–68. Springer International Publishing, Cham.
  17. König G, Molnar C, Bischl B, Grosse-Wentrup M (2021) Relative Feature Importance. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 9318–9325.
  18. Moosbauer J, Herbinger J, Casalicchio G, Lindauer M, Bischl B (2021) Explaining Hyperparameter Optimization via Partial Dependence Plots. Advances in Neural Information Processing Systems (NeurIPS 2021) 34.
  19. Moosbauer J, Herbinger J, Casalicchio G, Lindauer M, Bischl B (2021) Towards Explaining Hyperparameter Optimization via Partial Dependence Plots. In: 8th ICML Workshop on Automated Machine Learning (AutoML).
  20. König G, Freiesleben T, Bischl B, Casalicchio G, Grosse-Wentrup M (2021) Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT). arXiv preprint arXiv:2106.08086.
  21. Dandl S, Molnar C, Binder M, Bischl B (2020) Multi-Objective Counterfactual Explanations. In: Bäck T, Preuss M, Deutz A, Wang H, Doerr C, Emmerich M, Trautmann H (eds) Parallel Problem Solving from Nature – PPSN XVI, pp. 448–469. Springer International Publishing, Cham.
  22. Scholbeck CA, Molnar C, Heumann C, Bischl B, Casalicchio G (2020) Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations. In: Cellier P, Driessens K (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2019, pp. 205–216. Springer International Publishing, Cham.
  23. Liew B, Rügamer D, De Nunzio A, Falla D (2020) Interpretable machine learning models for classifying low back pain status using functional physiological variables. European Spine Journal 29, 1845–1859.
  24. Molnar C, Casalicchio G, Bischl B (2020) Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability. In: Cellier P, Driessens K (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2019, pp. 193–204. Springer International Publishing, Cham.
  25. Molnar C, Casalicchio G, Bischl B (2020) Interpretable Machine Learning – A Brief History, State-of-the-Art and Challenges. In: Koprinska I, Kamp M, Appice A, Loglisci C, Antonie L, Zimmermann A, Guidotti R, Özgöbek Ö, Ribeiro RP, Gavaldà R, Gama J, Adilova L, Krishnamurthy Y, Ferreira PM, Malerba D, Medeiros I, Ceci M, Manco G, Masciari E, Ras ZW, Christen P, Ntoutsi E, Schubert E, Zimek A, Monreale A, Biecek P, Rinzivillo S, Kille B, Lommatzsch A, Gulla JA et al. (eds) ECML PKDD 2020 Workshops, pp. 417–431. Springer International Publishing, Cham.
  26. Molnar C, König G, Herbinger J, Freiesleben T, Dandl S, Scholbeck CA, Casalicchio G, Grosse-Wentrup M, Bischl B (2020) Pitfalls to Avoid when Interpreting Machine Learning Models. In: ICML Workshop XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.
  27. Casalicchio G, Molnar C, Bischl B (2019) Visualizing the Feature Importance for Black Box Models. In: Berlingerio M, Bonchi F, Gärtner T, Hurley N, Ifrim G (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2018, pp. 655–670. Springer International Publishing, Cham.
  28. Molnar C, Casalicchio G, Bischl B (2018) iml: An R package for Interpretable Machine Learning. The Journal of Open Source Software 3, 786.
  29. Casalicchio G, Bischl B, Boulesteix A-L, Schmid M (2015) The residual-based predictiveness curve: A visual tool to assess the performance of prediction models. Biometrics 72, 392–401.