====== The EXplainable & Responsible AI in Law (XAILA) Workshop ======
  
**The main XAILA webpage is [[http://xaila.geist.re]]**
  
XAILA is an interdisciplinary workshop on the intersection of AI and Law, focusing on the important issues of EXplainable and Responsible AI.
  
See more information in the [[start#past editions of XAILA|past editions of XAILA]] section below.
  
===== XAILA at JURIX 2021 =====
  
The 5th International Workshop on eXplainable and Responsible AI and Law (XAILA2021@JURIX)\\
at the 34th International Conference on Legal Knowledge and Information Systems\\
Mykolas Romeris University, Vilnius, Lithuania, December 8, 2021\\
[[https://jurix2021.mruni.eu]]
  
Organizing Committee:\\
Michał Araszkiewicz, Jagiellonian University in Kraków, Poland\\
Martin Atzmueller, Osnabrück University, Germany\\
Grzegorz J. Nalepa, Jagiellonian University in Kraków, Poland\\
Bart Verheij, University of Groningen, the Netherlands
  
==== Description ====
In the last several years we have observed a growing interest in advanced AI systems achieving impressive task performance. However, there has also been an increased awareness of their complexity and of the challenging consequences of their possibly limited understandability to humans. In response, a number of research directions have been initiated, including humanized or human-centered AI, as well as ethically aligned, ethically designed, or simply ethical AI. For many of these ideas, the principal concept seems to be the explanatory capability of the AI system (XAI), e.g. via interpretable and explainable machine learning and the inclusion of human background knowledge and adequate declarative knowledge. Such capabilities could provide foundations not only for transparency and understandability, but also for possible value alignment and human centricity, as the explanation is to be provided to humans.

Recently, the term responsible AI (RAI) has been coined as a step beyond XAI. The discussion of RAI has again been strongly influenced by the “ethical” perspective. However, as practitioners in our fields we are convinced that AI is advancing much too fast, and the ethical perspective is much too vague, to offer conclusive and constructive results. We are convinced that the concepts of responsibility and accountability should be considered primarily from the legal perspective, also because the operation of AI-based systems poses actual challenges to the rights and freedoms of individuals. In the field of law, these concepts should obtain a well-defined interpretation, and reasoning procedures based on them should be clarified. The introduction of AI systems into the public as well as the legal domain brings many challenges that have to be addressed. The catalogue of these problems includes, but is not limited to: (1) the type of liability adequate for the operation of AI (be it civil, administrative or criminal liability); (2) the (re)interpretation of classical legal concepts concerning the ascription of liability, such as causal link, fault or foreseeability; and (3) the distribution of liability among the involved actors (AI developers, vendors, operators, customers, etc.). As the notions relevant to the discussion of legal liability evolved from the observation and evaluation of human behavior, they are not easily transferable to the new and disputable domain of liability for the operation of artificially intelligent systems. The goal of the workshop is to cover and integrate these problems and questions, bridging XAI and RAI by combining methodological AI research with the respective ethical and legal perspectives, specifically with the support of established concepts and methods regarding responsibility and accountability.
 + 
==== Topics of interest ====
Our objective is to bring together people from AI interested in XAI and RAI topics and to create an ample space for discussion with people from the field of legal scholarship and/or legal practice, and most importantly with the vibrant AI & Law community. As many members of the AI and Law community join both perspectives, the JURIX conference is the perfect venue for the workshop. Together we would like to address questions such as:
  * the notions of transparency, interpretability and explainability in XAI
  * non-functional design choices for explainable and transparent AI systems
  * legal consequences of black-box AI systems
  * legal criteria and requirements for explainable, transparent, and responsible AI systems
  * criteria of legal responsibility discussed in the context of intelligent systems operation and the role of explainability in liability ascription
  * possible applications of XAI systems in the area of legal policy deliberation, legal practice, teaching and research
  * legal implications of the use of AI systems in different spheres of societal life
  * the notion of the right to explanation
  * relation of XAI and RAI to argumentation technologies
  * approaches and architectures for XAI and RAI in AI systems
  * XAI, RAI and declarative domain knowledge
  * risk-based approach to the analysis of AI systems and the influence of XAI on risk assessment
  * incorporation of ethical values into AI systems, its legal interpretation and consequences
  * XAI, privacy and data protection (conceptual and theoretical issues)
  * XAI, certification and compliance
 + 
==== Important dates ====

Submission: 19.11.2021\\
Notification: 28.11.2021\\
Camera-ready: 05.12.2021\\
Workshop: 08.12.2021
 + 
==== Submission and proceedings ====

We accept regular/long papers of up to 12 pages, as well as short and position papers of up to 6 pages. Please use the Springer LNCS format. A dedicated EasyChair installation is provided at
[[https://easychair.org/conferences/?conf=xailajurix2021]]
 + 
==== Program Committee (tbe & tbc) ====
Martin Atzmüller, Osnabrück University, Germany\\
Michał Araszkiewicz, Jagiellonian University, Poland\\
Kevin Ashley, University of Pittsburgh, USA\\
Floris Bex, Utrecht University, The Netherlands\\
Szymon Bobek, Jagiellonian University, Poland\\
Georg Borges, Universität des Saarlandes, Germany\\
Jörg Cassens, University of Hildesheim, Germany\\
David Camacho, Universidad Autonoma de Madrid, Spain\\
Pompeu Casanovas, Universitat Autonoma de Barcelona, Spain\\
Enrico Francesconi, IGSG-CNR, Italy\\
Paulo Novais, University of Minho, Braga, Portugal\\
Grzegorz J. Nalepa, Jagiellonian University, Poland\\
Tiago Oliveira, National Institute of Informatics, Japan\\
Martijn von Otterlo, Tilburg University, The Netherlands\\
Jose Palma, Universidad de Murcia, Spain\\
Adrian Paschke, Freie Universität Berlin, Germany\\
Juan Pavón, Universidad Complutense de Madrid, Spain\\
Monica Palmirani, Università di Bologna, Italy\\
Radim Polčák, Masaryk University, Czech Republic\\
Marie Postma, Tilburg University, The Netherlands\\
Víctor Rodríguez-Doncel, Universidad Politécnica de Madrid, Spain\\
Ken Satoh, National Institute of Informatics, Japan\\
Jaromír Šavelka, Carnegie Mellon University, USA\\
Erich Schweighofer, University of Vienna, Austria\\
Piotr Skrzypczyński, Poznań University of Technology, Poland\\
Michal Valco, Constantine the Philosopher University in Nitra, Slovakia\\
Bart Verheij, University of Groningen, The Netherlands\\
Tomasz Żurek, Maria Curie-Skłodowska University of Lublin, Poland
  
  
===== Past editions of XAILA =====
  
[[xaila2021icail|The fourth edition of XAILA, XAILA2021ICAIL]] was organized by Michał Araszkiewicz, Martin Atzmueller, Grzegorz J. Nalepa, and Bart Verheij at the 18th International Conference on Artificial Intelligence and Law (ICAIL 2021), held in São Paulo, Brazil (entirely online).
  
[[xaila2020|The third edition of XAILA, XAILA2020]] was organized by Grzegorz J. Nalepa, Michał Araszkiewicz, Bart Verheij, and Martin Atzmueller at JURIX 2020, the 33rd International Conference on Legal Knowledge and Information Systems, organised by the Foundation for Legal Knowledge Based Systems (JURIX) since 1988.
  
XAILA 2020 proceedings can be found at [[http://ceur-ws.org/Vol-2891/]]
  
[[start2019|The second edition of XAILA, XAILA2019]] was organized by Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz, and Paulo Novais at the [[https://jurix2019.oeg-upm.net/|JURIX 2019, 32nd International Conference on Legal Knowledge and Information Systems]], on December 11, 2019, at the ETSI Minas y Energía School (Universidad Politécnica de Madrid), Madrid, Spain.
[[start2019|See the dedicated page for XAILA2019]]
  
XAILA 2019 proceedings can be found at [[http://ceur-ws.org/Vol-2681]]
  
We also proposed XAILA to be held at the [[https://icail2019-cyberjustice.com|International Conference on Artificial Intelligence and Law (ICAIL)]], June 17-21, 2019, Montréal (Qc.), Canada. While the workshop met with large interest and attracted many registered participants, surprisingly few papers were actually submitted.
[[icail2019|See the dedicated page for XAILA2019@ICAIL]]
    
[[start2018|The first edition, XAILA2018]] was organized by Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz, and Paulo Novais at the [[http://jurix2018.ai.rug.nl/|31st International Conference on Legal Knowledge and Information Systems]], December 12–14, 2018, in Groningen, The Netherlands.
[[start2018|See the dedicated page for XAILA2018]]
  
XAILA 2018 proceedings can be found at [[http://ceur-ws.org/Vol-2381]]
  
  