Transparent, Explainable and Affective AI in Medical Systems (TEAAM)

TEAAM 2019 is a workshop to be held at the 17th Conference on AI in Medicine (AIME)

Chairs: Grzegorz J. Nalepa, Gregor Stiglic, Sławomir Nowaczyk, Jose M. Juarez, Jerzy Stefanowski

Introduction: Medical systems highlight important requirements and challenges for AI solutions. In particular, demands for interpretability of models and knowledge representations are much higher than in other domains. Current health-related AI applications rarely provide integrated yet transparent and humanized solutions. However, from both the patient's and the doctor's perspective, there is a need for approaches that are comprehensive, credible and trusted. By explaining the reasoning behind recommendations, medical AI systems help users decide whether to accept or reject their predictions. Furthermore, healthcare is particularly challenging due to medical and ethical requirements, laws and regulations, and the caution physicians exercise when treating patients. Improving an individual's health is a complex process, requiring understanding and collaboration between the doctor and the patient. Building up this collaboration requires not only individualized personalization, but also proper adaptation to gradual changes in the patient's condition, including their emotional state. Recently, AI solutions have been playing an important mediating role in understanding how medical and personal factors interact with respect to diagnosis and treatment adherence. As the number of such applications is expected to grow rapidly in the coming years, their humanized aspect will play a critical role in their adoption. This workshop will bring together researchers from academia and industry to discuss current topics of interest in interpretability, explainability and affect in AI-based systems across different healthcare domains.

Motivation: Investment in and development of AI in the clinical field offers huge societal benefits in the current era of digital medicine, with a significant amount of data around healthcare processes captured in the form of Electronic Health Records, health insurance claims, medical imaging databases, disease registries, spontaneous reporting sites, clinical trials, etc. This positive impact comes under the spotlight with respect to medical responsibilities, potentially harmful uses, the emerging interest in the regulation of algorithms, and the need for explanations. Predictive modelling is becoming increasingly necessary for both data analysts and healthcare professionals, as it offers unique opportunities for deriving healthcare insights. At the same time, these opportunities come with significant dangers and risks that are unlike anything we have seen in the past. This controversial discussion raises a number of research challenges, such as:

  • Interpretability in machine learning/AI
  • Affective AI in medicine
  • Data safety - patient data are highly sensitive and require appropriate safety measures and regulation.
  • Data heterogeneity - medical data come in many forms, including structured and unstructured records, text, images, continuous signals from sensors, etc.
  • Sparsity, imperfection and data gaps – patient records may be sparse due to infrequent clinical visits; data are often not collected uniformly at each medical encounter and are affected by various sources of imperfection.
  • Irregularity - due to the heterogeneity of medical conditions, patient-related patterns may be very irregular even for the same disease.

Topics of interest:

  • explanation in medical systems
  • comprehensive and interpretable knowledge representations
  • interpretable machine learning in medical applications
  • explanation user interfaces and human computer interaction for explainable AI
  • ethical aspects, law and social responsibility
  • fairness, accountability and trust
  • emotion-based personalization
  • affective computing solutions in medicine
  • adaptation in medical systems
  • patient behaviour change detection
  • person-centered health care
  • context-aware medical systems
  • empowering patients and self-management
  • consequences of black-box AI systems
  • impact of humanized AI on system certification and compliance

Important Dates

  • Paper submission: 2019-04-15
  • Notification: 2019-05-13
  • Camera-ready: 2019-06-10

More details will follow soon.

Last modified: 2019/02/27 08:46 by gjn