  
===== XAILA 2020 at JURIX2020 =====
==== Workshop Program ====
The workshop will take place on 09.12.2020, online via MS Teams.
More details will follow.

==== Invited Speakers ====

{{:xaila:p_hacker.jpg?100 |}}
**Professor Dr. Philipp Hacker**, LL.M. (Yale), holds the Chair for Law and Ethics of the Digital Society at European University Viadrina in Frankfurt (Oder). He serves jointly at the Faculty of Law and at the European New School of Digital Studies (ENS). Before joining Viadrina, he was an AXA Postdoctoral Fellow at the Faculty of Law at Humboldt University of Berlin. Previous research stays include a Max Weber Fellowship at the European University Institute and an A.SK Fellowship at the WZB Berlin Social Science Center. His research focuses on law and technology as well as (behavioral) law and economics. In 2020, he received the Science Award of the German Foundation for Law and Computer Science. His most recent books include Regulating Blockchain. Techno-Social and Legal Challenges (Oxford University Press, 2019, co-edited with Ioannis Lianos, Georgios Dimitropoulos and Stefan Eich); Theories of Choice. The Social Science and the Law of Decision Making (Oxford University Press, forthcoming, co-edited with Stefan Grundmann); and Datenprivatrecht [Private Data Law] (Mohr Siebeck, 2020).

**Title of the talk:** AI and Discrimination: Legal Challenges and Technical Strategies

**Abstract**
The talk will focus on the interaction between AI models and liability in the domain of non-discrimination. As is well known, the output of AI models may exhibit bias toward legally protected groups. In the past, various fairness definitions have been developed to mitigate such discrimination. Against this background, the talk will first present a new model that allows AI developers to flexibly interpolate between different fairness definitions, depending on the context in which the model is applied. In a second step, however, the talk will ask to what extent AI developers may risk liability under affirmative action doctrines if they seek to implement algorithmic fairness measures in their models.
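As a purely illustrative aside, one common way to make "interpolating between fairness definitions" concrete is to blend the penalty terms of two standard group-fairness criteria in a training loss. The minimal sketch below is hypothetical (it is not the model presented in the talk, and all function names are illustrative); it assumes two familiar criteria, demographic parity and equal opportunity, blended with a single weight ''lam'':

<code python>
# Hypothetical sketch only -- NOT the model presented in the talk.
# Idea: "interpolate" between two fairness definitions by taking a
# convex combination of their penalty terms in a training loss.
import numpy as np

def demographic_parity_gap(scores, group):
    # Absolute gap between the mean scores of the two protected groups.
    return abs(scores[group == 0].mean() - scores[group == 1].mean())

def equal_opportunity_gap(scores, group, y):
    # The same gap, restricted to truly positive examples (y == 1).
    pos = y == 1
    return demographic_parity_gap(scores[pos], group[pos])

def fairness_penalty(scores, group, y, lam):
    # lam = 0 -> pure demographic parity; lam = 1 -> pure equal opportunity.
    return ((1 - lam) * demographic_parity_gap(scores, group)
            + lam * equal_opportunity_gap(scores, group, y))

# Toy usage with random data.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)          # model outputs in [0, 1]
group = rng.integers(0, 2, size=1000)    # binary protected attribute
y = rng.integers(0, 2, size=1000)        # ground-truth labels
print(fairness_penalty(scores, group, y, lam=0.5))
</code>

In practice, such a penalty would be added to the model's base loss with some weight, and ''lam'' tuned to the application context, which is precisely the kind of design choice whose legal implications the talk examines.
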
==== Call for Papers ====
{{ :xaila:xaila2020cfp1.pdf |}}
Marie Postma, Tilburg University, The Netherlands\\
Ken Satoh, National Institute of Informatics, Japan\\
Jaromír Šavelka, Carnegie Mellon University, USA\\
Erich Schweighofer, University of Vienna, Austria\\
Michal Valco, Constantine the Philosopher University in Nitra, Slovakia\\
==== Important dates ====

Submission: //09.11.2020// <del>04.11.2020</del> <del>26.10.2020</del>\\
Notification: 23.11.2020\\
Camera-ready: 30.11.2020\\
  
==== Submission details ====

We accept regular/long papers of up to 12 pages.
We also welcome short and position papers of up to 6 pages.
Please use the [[https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines|Springer LNCS format]].
  
A dedicated EasyChair installation is provided at [[https://easychair.org/conferences/?conf=xaila2020]]