CfP: The AAAI Workshop on Artificial Intelligence Safety (SafeAI 2021)

The AAAI Workshop on Artificial Intelligence Safety (SafeAI 2021)

Feb 8/9, 2021 | Virtual @AAAI-21 | https://safeai.webs.upv.es/

Important Dates (Extended)

  • Paper submission: Nov 9 (AoE)
  • Acceptance notification: Nov 30 (AoE)
  • Camera-ready version: Dec 15

SafeAI aims to explore new ideas on AI safety engineering, ethically aligned design, regulation and standards for AI-based systems.

The accelerated developments in the field of Artificial Intelligence (AI) point to the need to treat safety as a design principle rather than an option. However, theoreticians and practitioners of AI and safety are confronted with different levels of safety, different ethical standards and values, and different degrees of liability, which force them to examine a multitude of trade-offs and alternative solutions. These choices can only be analyzed holistically if the technological and ethical perspectives are integrated into the engineering problem, while considering both the theoretical and practical challenges of AI safety. A new and comprehensive view of AI safety must cover a wide range of AI paradigms, including systems that are application-specific as well as those that are more general, and must consider potentially unanticipated risks. In this workshop, we want to explore ways to bridge short-term with long-term issues, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, in order to build, evaluate, deploy, operate and maintain AI-based systems that are demonstrably safe.

This workshop seeks to explore new ideas on AI safety with particular focus on addressing the following questions:

  • What is the status of existing approaches in ensuring AI and Machine Learning (ML) safety, and what are the gaps?
  • How can we engineer trustworthy AI software architectures?
  • How can we make AI-based systems more ethically aligned?
  • What safety engineering considerations are required to develop safe human-machine interaction?
  • What AI safety considerations and experiences are relevant from industry?
  • How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
  • How can we develop solid technical visions and new paradigms for AI safety?
  • How do metrics of capability and generality, and their trade-offs with performance, affect safety?

The main interest of the workshop is a new perspective on systems engineering in which multiple disciplines, such as AI and safety engineering, are viewed as a larger whole, while considering ethical and legal issues, in order to build trustworthy intelligent autonomy.

Contributions are sought in (but are not limited to) the following topics:

  • Safety in AI-based system architectures
  • Continuous V&V and predictability of AI safety properties
  • Runtime monitoring and (self-)adaptation of AI safety
  • Accountability, responsibility and liability of AI-based systems
  • Effect of uncertainty in AI safety
  • Avoiding negative side effects in AI-based systems
  • Role and effectiveness of oversight: corrigibility and interruptibility
  • Loss of values and the catastrophic forgetting problem
  • Confidence, self-esteem and the distributional shift problem
  • Safety of Artificial General Intelligence (AGI) systems and the role of generality
  • Reward hacking and training corruption
  • Self-explanation, self-criticism and the transparency problem
  • Human-machine interaction safety
  • Regulating AI-based systems: safety standards and certification
  • Human-in-the-loop and the scalable oversight problem
  • Evaluation platforms for AI safety
  • AI safety education and awareness
  • Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others

You are invited to submit:

  • Full technical papers (6-8 pages),
  • Proposals for technical talks (up to a one-page abstract, including a short bio of the main speaker), without an associated paper, or
  • Position papers (4-6 pages).

Manuscripts must be submitted as PDF files via the EasyChair online submission system: https://easychair.org/conferences/?conf=safeai2021

Please format your paper according to the AAAI Formatting Instructions (two-column format). The AAAI author kit can be downloaded from: https://www.aaai.org/Publications/Templates/AuthorKit21.zip
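
For LaTeX users, the skeleton below illustrates the expected setup. It is a minimal sketch only: it assumes the aaai21.sty and aaai21.bst files from the author kit are placed in your working directory, and the kit's own README prevails for the exact preamble requirements.

    \documentclass[letterpaper]{article}
    \usepackage{aaai21}   % AAAI-21 style file from the author kit (assumed filename)
    \usepackage{times}    % Times fonts, per the AAAI formatting instructions
    \usepackage{helvet}
    \usepackage{courier}

    \title{Toward Safer AI-Based Systems}     % placeholder title
    \author{First Author \and Second Author}  % placeholder authors

    \begin{document}
    \maketitle

    \begin{abstract}
    A one-paragraph abstract of the contribution.
    \end{abstract}

    \section{Introduction}
    Body text is typeset in the two-column AAAI format by the style file.

    \bibliographystyle{aaai21}  % bibliography style from the author kit (assumed filename)
    \bibliography{references}
    \end{document}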

Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper). The workshop follows a single-blind reviewing process. However, we will also accept anonymized submissions.

We are happy to receive papers that were not accepted at AAAI, and we welcome the AAAI review comments if the authors wish to send them as additional material.

The workshop proceedings will be published on CEUR-WS. CEUR-WS is "archival" in the sense that a paper cannot be removed once it is published. Authors keep the copyright of their papers under CC BY 4.0; in this respect, CEUR-WS is similar to arXiv. In any case, authors of accepted papers can opt out and decide not to include their paper in the proceedings. We will inform the authors about the procedure in due course.

We are also planning a special issue in a journal after the workshop.

For any questions, please send an email to: SafeAI Contact

Workshop Format

At SafeAI, we believe that delivering a truly memorable event requires a highly interactive format, with much-needed debate to keep people engaged and energized throughout the workshop. The workshop sessions are therefore structured into short paper presentations and a common panel slot to discuss both individual paper contributions and shared topic issues.

Three specific roles are part of this format: session chairs, presenters and session discussants.

  • Session Chairs introduce sessions and participants. The Chair moderates session and plenary discussions, keeps track of time, and gives the floor to speakers from the audience during discussions.
  • Presenters give a 10-minute paper presentation and then participate in the debate slot. Do not worry about the short time for your talk: a workshop talk can, at best, be an advert for the paper, which is the durable record of what was done. Moreover, this is an interdisciplinary workshop, and longer, more detailed talks may alienate some of the attendees.
  • Session Discussants prepare the discussion of individual papers and the plenary debate. The discussant gives a critical review of the session's papers. Discussants will typically be presenters of papers in other sessions or PC members.

Organizing Committee

  • Huascar Espinoza, CEA LIST, France
  • Jose Hernandez-Orallo, Universitat Politecnica de Valencia, Spain
  • Xin Cynthia Chen, University of Hong Kong, China
  • Sean O hEigeartaigh, University of Cambridge, UK
  • Xiaowei Huang, University of Liverpool, UK
  • Mauricio Castillo-Effen, Lockheed Martin, USA
  • Richard Mallah, Future of Life Institute, USA
  • John McDermid, University of York, UK

Program Committee

  • Stuart Russell, UC Berkeley, USA
  • Francesca Rossi, IBM and University of Padova, Italy
  • Raja Chatila, Sorbonne University, France
  • Roman V. Yampolskiy, University of Louisville, USA
  • Gereon Weiss, Fraunhofer ESK, Germany
  • Mark Nitzberg, Center for Human-Compatible AI, USA
  • Roman Nagy, Autonomous Intelligent Driving GmbH, Germany
  • Francois Terrier, CEA LIST, France
  • Helene Waeselynck, LAAS-CNRS, France
  • Siddartha Khastgir, University of Warwick, UK
  • Orlando Avila-Garcia, Atos, Spain
  • Nathalie Baracaldo, IBM Research, USA
  • Peter Eckersley, Partnership on AI, USA
  • Andreas Theodorou, Umea University, Sweden
  • Yang Liu, Webank, China
  • Philip Koopman, Carnegie Mellon University, USA
  • Chokri Mraidha, CEA LIST, France
  • Heather Roff, Johns Hopkins University, USA
  • Bernhard Kaiser, ANSYS, Germany
  • Brent Harrison, University of Kentucky, USA
  • Jose M. Faria, Safe Perspective, UK
  • Toshihiro Nakae, DENSO Corporation, Japan
  • John Favaro, Trust-IT, Italy
  • Rob Ashmore, Defence Science and Technology Laboratory, UK
  • Jonas Nilsson, NVIDIA, USA
  • Michael Paulitsch, Intel, Germany
  • Philippa Ryan Conmy, Adelard, UK
  • Stefan Kugele, Technische Hochschule Ingolstadt, Germany
  • Richard Cheng, California Institute of Technology, USA
  • Javier Ibanez-Guzman, Renault, France
  • Mehrdad Saadatmand, RISE SICS, Sweden
  • Alessio R. Lomuscio, Imperial College London, UK
  • Rick Salay, University of Waterloo, Canada
  • Jeremie Guiochet, LAAS-CNRS, France
  • Sandhya Saisubramanian, University of Massachusetts Amherst, USA
  • Mario Gleirscher, University of York, UK
  • Chris Allsopp, Frazer-Nash Consultancy, UK
  • Daniela Cancila, CEA LIST, France
  • Vahid Behzadan, University of New Haven, USA
  • Simos Gerasimou, University of York, UK
  • Brian Tse, Affiliate at University of Oxford, China
  • Peter Flach, University of Bristol, UK
  • Gopal Sarma, Broad Institute of MIT and Harvard, USA
  • Rob Alexander, University of York, UK
  • Simon Fuerst, BMW Group, Germany
  • Javier Garcia, Universidad Carlos III de Madrid, Spain
  • Ramana Kumar, DeepMind, UK