Bridging the Gap Between AI Planning and Reinforcement Learning (PRL)

ICAPS'23 Workshop
Prague, Czech Republic
July 9-10, 2023

Aim and Scope of the Workshop

While the AI Planning and Reinforcement Learning communities focus on similar sequential decision-making problems, they often remain unaware of each other's specific problems, techniques, methodologies, and evaluation practices.

This workshop aims to encourage discussion and collaboration between researchers in the fields of AI planning and reinforcement learning. We aim to bridge the gap between the two communities, facilitate the discussion of differences and similarities between existing techniques, and encourage collaboration across the fields. We solicit interest from AI researchers who work at the intersection of planning and reinforcement learning, in particular those who focus on intelligent decision-making. This is the fifth edition of the PRL workshop series, which started at ICAPS 2020.

Topics of Interest

We invite submissions at the intersection of AI Planning and Reinforcement Learning. Topics of interest include, but are not limited to, the following:

  • Reinforcement learning (model-based, Bayesian, deep, hierarchical, etc.)
  • Safe RL
  • Monte Carlo planning
  • Model representation and learning for planning
  • Planning using approximated/uncertain (learned) models
  • Learning search heuristics for planner guidance
  • Theoretical aspects of planning and reinforcement learning
  • Action policy analysis or certification
  • Reinforcement learning and planning competition(s)
  • Multi-agent planning and learning
  • Applications of both reinforcement learning and planning

Important Dates

Please refer to the PRL workshop website for the latest information.

Paper submission deadline: March 30th, AoE (EXTENDED)

Paper acceptance notification: April 27th, AoE

Submission Details

We solicit workshop paper submissions relevant to the above call. Submissions may be of the following types:

  • Long papers – up to 8 pages + unlimited references/appendices
  • Short papers – up to 4 pages + unlimited references/appendices
  • Extended abstracts – up to 2 pages + unlimited references/appendices

Please format submissions in AAAI style (see the instructions in the Author Kit). If you are submitting a paper that was rejected from another conference, please do your utmost to address the reviewers' comments. Please do not submit papers that have already been accepted for the main ICAPS conference.

Some accepted long papers will be invited for contributed talks. All accepted papers (both long and short) and extended abstracts will be given a slot in the poster presentation session. Extended abstracts are intended as brief summaries of already published papers, preliminary work, position papers, or challenges that might help bridge the gap.

As the main purpose of this workshop is to foster discussion, authors are invited to use the appendix of their submissions for that purpose.

Paper submissions should be made through OpenReview.

Organizing Committee

  • Cameron Allen, Brown University, RI, USA
  • Timo P. Gros, Saarland University, Germany
  • Michael Katz, IBM T.J. Watson Research Center, NY, USA
  • Harsha Kokel, University of Texas at Dallas, TX, USA
  • Hector Palacios, ServiceNow Research, Montreal, Canada
  • Sarath Sreedharan, Colorado State University, CO, USA

Please send your inquiries to prl.theworkshop@gmail.com

List of Accepted Papers

  • pyRDDLGym: From RDDL to Gym Environments (Ayal Taitler, Michael Gimelfarb, Jihwan Jeong, Sriram Gopalakrishnan, Martin Mladenov, Xiaotian Liu, Scott Sanner)
  • Inapplicable Actions Learning for Knowledge Transfer in Reinforcement Learning (Leo Ardon, Alberto Pozanco, Daniel Borrajo, Sumitra Ganesh)
  • Meta-operators for Enabling Parallel Planning Using Deep Reinforcement Learning (Ángel Aso Mollar, Eva Onaindia)
  • Model Learning to Solve Minecraft Tasks (Yarin Benyamin, Argaman Mordoch, Roni Stern, Shahaf S. Shperberg)
  • Towards a Unified Framework for Sequential Decision Making (Carlos Núñez-Molina, Pablo Mesejo, Juan Fernández-Olivares)
  • Policy Refinement with Human Feedback for Safe Reinforcement Learning (Ali Baheri)
  • Learning Hierarchical Policies by Iteratively Reducing the Width of Sketch Rules (Dominik Drexler, Jendrik Seipp, Hector Geffner)
  • Learning General Policies with Policy Gradient Methods (Simon Ståhlberg, Blai Bonet, Hector Geffner)
  • Mind the Uncertainty: Risk-Aware and Actively Exploring Model-Based Reinforcement Learning (Marin Vlastelica, Sebastian Blaes, Cristina Pinneri, Georg Martius)
  • Joint Learning of Policy with Unknown Temporal Constraints for Safe Reinforcement Learning (Ali Baheri)
  • Multi-Agent Reinforcement Learning with Epistemic Priors (Thayne T. Walker, Jaime S. Ide, Minkyu Choi, Michael John Guarino, Kevin Alcedo)
  • Preemptive Restraining Bolts (Giovanni Varricchione, Natasha Alechina, Mehdi Dastani, Giuseppe De Giacomo, Brian Logan, Giuseppe Perelli)
  • Hierarchical Planning for Rope Manipulation using Knot Theory and a Learned Inverse Model (Matan Sudry, Tom Jurgenson, Aviv Tamar, Erez Karpas)
  • Value Function Learning via Prolonged Backward Heuristic Search (Zlatan Ajanovic, Bakir Lacevic, Jens Kober)