[ecoop-info] 1st CFP COMPARE2012 Workshop @IJCAR: Comparative Empirical Evaluation of Reasoning Systems

Vladimir Klebanov klebanov at kit.edu
Mon Feb 27 17:50:20 CET 2012

COMPARE2012 - Call for papers

1st International Workshop on
Comparative Empirical Evaluation of Reasoning Systems

@IJCAR 2012 - 30 June 2012, Manchester, UK



Important dates:

Submission deadline: April 16, 2012
Notification of acceptance: May 14, 2012
Final version due: May 28, 2012
Workshop: June 30, 2012


Benchmark libraries and competitions are two popular approaches to
the comparative empirical evaluation of reasoning systems. Following
a recent significant increase in comparative evaluation activity, we
feel that it is time to compare notes.
    What are the proper empirical approaches and criteria for effective
comparative evaluation of reasoning systems? What are the appropriate
hardware and software environments? How should the usability of
reasoning systems be assessed? How should benchmarks and problem
collections be designed, acquired, structured, published, and used?
    The workshop aims to advance comparative empirical evaluation by
bringing together current and future competition organizers and
participants, maintainers of benchmark collections, as well as
practitioners and the general scientific public interested in the topic.
Furthermore, the workshop intends to reach out to researchers
specializing in empirical studies in computer science outside of
automated reasoning.


The scope of the workshop includes (but is not limited to) the topics
listed below. All topics apply to the comparative evaluation of
reasoning systems. Reports on evaluating a single system (such as case
studies done with a particular system) are not within the scope of the
workshop.

* Comparative case studies
* Criteria for empirical evaluation
* Design, organisation, and conclusions from competitions
* Design, acquisition, execution, and dissemination of benchmarks
* Experience reports
* Hardware and software environments
* Inter-community collaboration
* Languages and language standards
* Practitioner perspectives
* Software and code quality evaluation
* Surveys and questionnaires
* Usability studies


Papers can be submitted either as regular papers (6-15 pages in LNCS
style) or as discussion papers (2-4 pages). Regular papers should
present previously unpublished work (completed or in progress),
including proposals for system evaluation, descriptions of benchmarks,
and experience reports. Discussion papers are intended to initiate
discussions, should address controversial issues, and may include
provocative statements.

All submitted papers will be refereed by the programme committee and
selected on the basis of the referee reports. The collection of
accepted papers will be distributed at the workshop and will also be
published in the CEUR Workshop Proceedings series.

Submissions are now accepted via EasyChair.


Programme committee:

* Bernhard Beckert (co-chair), Karlsruhe Institute of Technology, Germany
* Christoph Benzmueller, Free University Berlin, Germany
* Dirk Beyer, University of Passau, Germany
* Armin Biere (co-chair), Johannes Kepler University Linz, Austria
* Vinay Chaudhri, SRI International, USA
* Koen Claessen, Chalmers Technical University, Sweden
* Alberto Griggio, Fondazione Bruno Kessler, Italy
* Marieke Huisman, University of Twente, the Netherlands
* Radu Iosif, Verimag/CNRS/University of Grenoble, France
* Vladimir Klebanov (co-chair), Karlsruhe Institute of Technology, Germany
* Rosemary Monahan, National University of Ireland Maynooth, Ireland
* Michal Moskal, Microsoft Research, USA
* Jens Otten, University of Potsdam, Germany
* Franck Pommereau, University of Évry, France
* Sylvie Putot, CEA-LIST, France
* Olivier Roussel, CNRS, France
* Albert Rubio, Universitat Politècnica de Catalunya, Spain
* Aaron Stump, University of Iowa, USA
* Geoff Sutcliffe (co-chair), University of Miami, USA


Contact: compare2012 at verifythis.org

Vladimir Klebanov
Postdoctoral Researcher, Application-oriented Formal Verification
Karlsruhe Institute of Technology
