Research
Our research mission is to contribute reliable AI methods for different modalities that can safely be applied in practice. Our research areas include:
Explainable and User-centric AI
“Why does the model think this patient has stage 3 cancer?” “The model says I am not creditworthy, but why, and what actions can I take to obtain the credit?” When AI models (more specifically, machine learning models) are used in decision-support systems, such questions arise naturally. The field of eXplainable AI (XAI) has developed methods to elicit such explanations of model behavior, targeting various stakeholders (e.g., decision makers, machine learning researchers).
Our research contributes to a new generation of intrinsically interpretable models, conversational XAI, evaluation standards for XAI, and XAI techniques for a broad range of machine learning models. We also aim to understand the relationship between AI systems and their users in real-world applications.
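To make the counterfactual question above concrete, the following is a deliberately simplified sketch of a greedy counterfactual search against a toy scikit-learn credit model. It is not one of our published methods, and all features, data, and the search procedure are illustrative only.

```python
# Minimal counterfactual-explanation sketch: greedily nudge one feature at a
# time until a (toy) credit model flips its decision to "creditworthy".
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy credit data: [income (kEUR), debt (kEUR)]; label 1 = creditworthy.
X = np.array([[60, 10], [20, 30], [80, 5], [15, 40], [50, 20], [30, 35]])
y = np.array([1, 0, 1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

def counterfactual(x, step=1.0, max_iter=200):
    """Return a minimally changed input that the model accepts."""
    x = x.astype(float).copy()
    for _ in range(max_iter):
        if model.predict([x])[0] == 1:
            return x
        # Try each single-feature nudge; keep the most promising candidate.
        directions = np.eye(len(x))
        candidates = [x + step * d for d in directions] + \
                     [x - step * d for d in directions]
        x = max(candidates, key=lambda c: model.predict_proba([c])[0, 1])
    return None

applicant = np.array([25, 32])          # rejected applicant
cf = counterfactual(applicant)
print("change", applicant, "->", cf)    # e.g., raise income, reduce debt
```

Practical counterfactual methods additionally constrain the suggested changes to be plausible and actionable for the user; this sketch only illustrates the underlying idea.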
Key Publications
- M. Nauta, J. Schlötterer, M. van Keulen, and C. Seifert, “PIP-Net: Patch-Based Intuitive Prototypes for Interpretable Image Classification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, doi: 10.1109/CVPR52729.2023.00269.
- A. Papenmeier, D. Kern, G. Englebienne, and C. Seifert, “It’s Complicated: The Relationship between User Trust, Model Accuracy and Explanations in AI,” ACM Trans. Comput.-Hum. Interact., vol. 29, no. 4, Mar. 2022, doi: 10.1145/3495013.
- M. Nauta et al., “From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI,” ACM Comput. Surv., Feb. 2023, doi: 10.1145/3583558.
- V. B. Nguyen, J. Schlötterer, and C. Seifert, “From Black Boxes to Conversations: Incorporating XAI in a Conversational Agent,” in Proc. World Conference on eXplainable Artificial Intelligence (xAI), Cham, 2023, pp. 71–96, arXiv: https://arxiv.org/abs/2209.02552.
- A. Papenmeier, D. Kern, D. Hienert, Y. Kammerer, and C. Seifert, “How Accurate Does It Feel? – Human Perception of Different Types of Classification Mistakes,” in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22), New York, NY, USA, 2022, doi: 10.1145/3491102.3501915.
- P. Q. Le, M. Nauta, V. B. Nguyen, S. Pathak, J. Schlötterer, and C. Seifert, “Benchmarking eXplainable AI - A Survey on Available Toolkits and Open Challenges,” in Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI-23, Aug. 2023, pp. 6665–6673, doi: 10.24963/ijcai.2023/747.
Natural Language Processing
“What do you mean?” Using information from texts in applications or creating conversational interfaces requires an understanding of this unstructured, non-standardized, and highly individual source of information.
Our research contributes to understanding the capabilities and limitations of current large language models, and to novel solutions for domain-specific NLP tasks (e.g., text simplification, de-identification, and structuring of medical reports). Visit this ICLR blog post if you want to know what mammoths have to do with natural language generation.
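As a small illustration of how factual knowledge in pre-trained language models can be probed (the topic of the EMNLP 2023 survey below), here is a minimal cloze-style probe. It assumes the Hugging Face transformers library; the model and prompt are arbitrary examples, not our evaluation setup.

```python
# Cloze-style factual knowledge probing with a masked language model.
# Requires: pip install transformers torch
from transformers import pipeline

probe = pipeline("fill-mask", model="bert-base-uncased")

# A cloze prompt queries one fact; the score/rank of the correct token
# indicates how well the model "knows" it.
for pred in probe("The capital of France is [MASK].", top_k=3):
    print(f"{pred['token_str']:>10}  p={pred['score']:.3f}")
```

Systematic probing repeats this over large collections of relational facts and paraphrased prompts, since the results are known to be sensitive to prompt wording.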
Key Publications
- P. Youssef, O. Koraş, M. Li, J. Schlötterer, and C. Seifert, “Give Me the Facts! A Survey on Factual Knowledge Probing in Pre-trained Language Models,” in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, Dec. 2023, pp. 15588–15605, doi: 10.18653/v1/2023.findings-emnlp.1043.
- S. Pathak, J. van Rossen, O. Vijlbrief, J. Geerdink, C. Seifert, and M. van Keulen, “Post-Structuring Radiology Reports of Breast Cancer Patients for Clinical Quality Assurance,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, May 2019, doi: 10.1109/TCBB.2019.2914678.
- J. Trienes, P. Youssef, J. Schlötterer, and C. Seifert, “Guidance in Radiology Report Summarization: An Empirical Evaluation and Error Analysis,” in Proceedings of the International Natural Language Generation Conference (INLG), 2023, doi: 10.18653/v1/2023.inlg-main.13.
- S. Zerhoudi et al., “The SimIIR 2.0 Framework: User Types, Markov Model-Based Interaction Simulation, and Advanced Query Generation,” in Proceedings of the 31st ACM International Conference on Information & Knowledge Management, New York, NY, USA, 2022, pp. 4661–4666, doi: 10.1145/3511808.3557711.
Machine Learning in Applications
“Theory and practice sometimes clash. And when that happens, theory loses. Every single time.” (Linus Torvalds, Message to Linux kernel mailing list. 2009-03-25)
Applying machine learning to practical problems is not straightforward. Many (implicit) assumptions made when developing against benchmark data sets need to be relaxed (e.g., regarding data quality, availability of annotations, and data set size), leading to different research questions. In sensitive domains, special attention has to be paid to legally compliant compute infrastructures.
We adapt and apply machine learning techniques to various domains and data sets to answer practically relevant questions. We are especially interested in the medical domain, with its varying data quality and high degree of multi-modality (text, structured data, 2D and 3D imagery, sensor data).
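As a toy illustration of working with multi-modal medical data, the sketch below fuses a tabular encoder and a 1-D physiological-signal encoder by late concatenation in PyTorch. The architecture, dimensions, and data are hypothetical; this is not the STQS model cited below.

```python
# Minimal late-fusion sketch for multi-modal clinical data:
# one encoder per modality, concatenated before a shared classifier head.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, n_tabular=16, n_signal_channels=4, n_classes=5):
        super().__init__()
        # Encoder for structured/tabular features (e.g., patient metadata).
        self.tabular = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        # 1-D conv over a time-series signal (e.g., EEG), then global pooling.
        self.signal = nn.Sequential(
            nn.Conv1d(n_signal_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, x_tab, x_sig):
        fused = torch.cat([self.tabular(x_tab), self.signal(x_sig)], dim=1)
        return self.head(fused)

model = LateFusion()
logits = model(torch.randn(8, 16), torch.randn(8, 4, 3000))  # batch of 8
print(logits.shape)  # torch.Size([8, 5])
```

In practice, the interesting research questions concern which fusion strategy to use, how to handle missing modalities, and how to keep the resulting model interpretable.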
Key Publications
- S. Pathak, C. Lu, S. B. Nagaraj, M. van Putten, and C. Seifert, “STQS: Interpretable multi-modal Spatial-Temporal-seQuential model for automatic Sleep scoring,” Artificial Intelligence in Medicine, vol. 114, p. 102038, 2021, doi: 10.1016/j.artmed.2021.102038.
- M. Nauta, D. Bucur, and C. Seifert, “Causal Discovery with Attention-Based Convolutional Neural Networks,” Machine Learning and Knowledge Extraction, vol. 1, no. 1, pp. 312–340, Jan. 2019, doi: 10.3390/make1010019.
- B. C. S. de Vries, J. H. Hegeman, W. Nijmeijer, J. Geerdink, C. Seifert, and K. G. M. Groothuis-Oudshoorn, “Comparing three machine learning approaches to design a risk assessment tool for future fractures: predicting a subsequent major osteoporotic fracture in fracture patients with osteopenia and osteoporosis,” Osteoporosis International, Jan. 2021, doi: 10.1007/s00198-020-05735-z.
- O. Paalvast et al., “Radiology report generation for proximal femur fractures using deep classification and language generation models,” Artificial Intelligence in Medicine, vol. 128, p. 102281, 2022, doi: 10.1016/j.artmed.2022.102281.