[ecoop-info] CFP - Workshop on Machine Ethics and Explainability - The Role of Logic Programming (MEandE-LP 2021)

Miguel Areias miguel-areias at dcc.fc.up.pt
Wed Jun 9 19:17:10 CEST 2021


=========================================================================
                           CALL FOR PAPERS
               MEandE-LP 2021: 1st Workshop on Machine Ethics
               and Explainability-The Role of Logic Programming

                   https://sites.google.com/view/meande2021
=========================================================================

    A workshop of the 37th International Conference on Logic Programming
                        September 20-27, 2021
                      (the event will be virtual)

=========================================================================

AIMS AND SCOPE
**************

Machine ethics and explainability are two topics that have attracted
considerable attention and concern in recent years, a global concern
that has manifested in many initiatives at different levels. There is
an intrinsic relation between the two: it is not enough for an
autonomous agent to behave ethically; it should also be able to
explain its behavior. In other words, there is a need for both an
ethical component and an explanation component. Furthermore,
explainable behavior is clearly not acceptable if it is not ethical
(i.e., if it does not follow the ethical norms of society).

In many application domains, especially those where human lives are
involved and ethical decisions must be made, users need to understand
the system's recommendations well enough to explain the reasons for
their decisions to other people. One of the most important ultimate
goals of explainable AI systems is an efficient mapping between
explainability and causality. Explainability is a system's ability to
explain itself in natural language to the average user, by being able
to say, "I generated this output because x, y, z". In other words, a
system's ability to state the causes behind its decisions is central
to explainability.

However, where critical systems and ethical decisions are concerned,
is it enough to explain a system's decisions to the human user? Do we
need to go beyond the boundaries of the predictive model in order to
observe cause and effect within the system?

There is a large body of research on explainability that tries to
explain the output of black-box models, following different
approaches; some of these generate logical rules as explanations. It
is worth noting, however, that most methods for generating post-hoc
explanations are themselves based on statistical tools, which are
subject to uncertainty and error. Many post-hoc explainability
techniques approximate deep-learning black-box models with simpler
interpretable models that can be inspected to explain the black-box
models. These approximate models, however, are not provably faithful
to the original model, as there are always trade-offs between
explainability and fidelity.

On the other hand, a substantial body of researchers has used
inherently interpretable approaches to design and implement ethical
autonomous agents. Most of these approaches are based on logic
programming, ranging from deontic logics to non-monotonic logics and
other formalisms.

Logic programming has great potential in these two emerging areas of
research: logic rules are easily comprehensible by humans, and they
favor causality, which is crucial for ethical decision making.
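
As a minimal illustration (the predicates below are hypothetical and
not drawn from any particular system), a single logic rule can serve
both as a decision procedure and as its own human-readable
explanation, since the rule body states the causes of the conclusion:

    % Hypothetical sketch: an agent may share a piece of data only if
    % the agent is authorised, the data's owner has consented, and no
    % regulation forbids sharing it.
    may_share(Agent, Data) :-
        authorised(Agent),
        owner(Data, Owner),
        consented(Owner, Data),
        \+ forbidden_by_regulation(Data).

When a query such as may_share(a1, d7) succeeds, the satisfied body
literals are themselves the explanation: "because the agent is
authorised, the owner consented, and no regulation forbids it".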

Nevertheless, in spite of the significant interest that machine
ethics has received over the last decade, mainly from ethicists and
artificial intelligence experts, the question "Are artificial moral
agents possible?" remains open. There have been several attempts to
implement ethical decision making in intelligent autonomous agents
using different approaches, but so far no fully descriptive and widely
accepted model of moral judgment and decision making exists, and none
of the developed solutions seems fully convincing as a basis for
trusted moral behavior. The same goes for explainability: in spite of
the global concern about the explainability of autonomous agents'
behaviour, existing approaches do not seem satisfactory enough. Many
questions remain open in these two exciting, expanding fields.

This workshop aims to bring together researchers working on all
aspects of machine ethics and explainability, including theoretical
work, system implementations, and applications. The co-location of
this workshop with ICLP is also intended to encourage collaboration
with researchers from different fields of logic programming. The
workshop provides a forum to facilitate discussion of these topics
and a productive exchange of ideas.

Topics of interest include (but are not limited to):
************************************************

* New approaches to programming machine ethics;
* New approaches to explainability of blackbox models;
* Evaluation and comparison of existing approaches;
* Approaches to verification of ethical behavior;
* Logic programming applications in machine ethics;
* Integrating logic programming with methods for machine ethics;
* Integrating logic programming with methods for explainability.


SUBMISSIONS
***********

The workshop invites two types of submissions:

* Original papers describing original research.
* Non-original papers already published in formal proceedings or journals.

Original papers must be formatted using the Springer LNCS style:

* Regular papers must not exceed 14 pages (including references).
* Extended abstracts must not exceed 4 pages (excluding references).

Authors are requested to clearly state, in a footnote on the first
page, whether their submission is original or not. Authors are invited
to submit their manuscripts in PDF via the EasyChair system at:

https://easychair.org/conferences/?conf=meandelp2021

IMPORTANT DATES
****************

* Paper submission deadline:  August 2, 2021
* Author Notification:        August 18, 2021
* Camera-ready articles due:  August 25, 2021

Workshop:                     TBA (during September 20-27, 2021)

PROCEEDINGS
***********

Authors of accepted original contributions can opt to publish their
work in formal proceedings. Accepted non-original contributions will
be given visibility on the workshop web site, including a link to the
original publication, if already published.

Accepted original papers will be published (details will be added
soon).

LOCATION
********

Fully Virtual.

WORKSHOP CHAIRS
***************

* Abeer Dyoub, DISIM, University of L'Aquila.
* Fabio Aurelio D’Asaro, Logic Group, Department of Philosophy,
  University of Milan.
* Ari Saptawijaya, Faculty of Computer Science, University of
  Indonesia.

PROGRAM COMMITTEE
*****************

TBA

=========================================================================

