| + | |||
| + | ==== 2025-11-06 ==== | ||
| + | <WRAP column 15%> | ||
| + | {{ : | ||
| + | </ | ||
| + | |||
| + | <WRAP column 75%> | ||
| + | |||
| + | **Speaker**: | ||
| + | |||
| + | **Title**: LLM-based feature generation from text for interpretable machine learning. | ||
| + | |||
**Abstract**:
Traditional text representations such as embeddings and bag-of-words hinder rule learning and other interpretable machine learning methods because of their high dimensionality and poor comprehensibility. This article investigates using Large Language Models (LLMs) to extract a small number of interpretable text features. We propose two workflows: one fully automated by the LLM (feature proposal and value calculation), and one where the LLM only proposes the features, whose values are then computed deterministically.
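
A minimal sketch of the two workflows, for orientation only: ''call_llm'', the feature names, and the keyword extractors below are invented for this illustration and are not taken from the talk or the paper.

<code python>
# Illustrative sketch, not the authors' implementation. call_llm() is a
# hypothetical stand-in that returns canned answers so the example runs
# without any LLM API.

def call_llm(prompt: str) -> str:
    if "propose" in prompt.lower():
        return "mentions_price, positive_tone"   # LLM-proposed feature names
    return "mentions_price=1, positive_tone=0"   # LLM-assigned feature values

texts = ["The new phone is cheap, but the camera disappoints."]

# Workflow 1: fully automated -- the LLM proposes the features and also
# calculates their values for each text.
features = call_llm("Propose a few interpretable features for product reviews.")
llm_rows = [call_llm(f"Assign values for ({features}) to this text: {t}")
            for t in texts]

# Workflow 2: the LLM only proposes the features; their values are then
# computed deterministically, here by simple invented keyword extractors.
extractors = {
    "mentions_price": lambda t: int(any(w in t.lower() for w in ("price", "cheap"))),
    "positive_tone": lambda t: int(any(w in t.lower() for w in ("great", "love"))),
}
det_rows = [{name: fn(t) for name, fn in extractors.items()} for t in texts]

print(llm_rows)  # values produced by the (stubbed) LLM
print(det_rows)  # low-dimensional, interpretable feature table for rule learning
</code>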
| + | |||
| + | **Biogram**: | ||
| + | Tomáš Kliegr is a Professor at the Faculty of Informatics and Statistics at the Prague University of Economics and Business (VSE Praha), where he is part of the Data Science & Explainable AI (DSXAI) research team. His research interests include Explainable AI (XAI), Interpretable Machine Learning, and neurosymbolic methods. He has published on topics such as the effect of cognitive biases on model interpretation in journals including Artificial Intelligence and Machine Learning. He is active in the rule-based systems community. | ||
| + | |||
| + | Dr. Lukas Sykora is a Research Assistant at the Department of Information and Knowledge Engineering and a Lecturer at the Prague University of Economics and Business. | ||
| + | |||
| + | </ | ||
| + | <WRAP clear></ | ||
| ==== 2025-10-30 ==== | ==== 2025-10-30 ==== | ||