*4th Workshop on Semantic Policy and Action Representations for Autonomous Robots (SPAR)*
November 8, 2019 - IEEE/RSJ International Conference on Intelligent Robots and Systems - Macau, China
----*Call for Papers*
We are calling for contributions to our IROS 2019 workshop. We invite attendees to submit an extended abstract describing their current work or developed systems in the areas of reasoning, perception, control, planning, and learning applied to robotic systems.
This workshop is intended for roboticists interested in improving the reliability and autonomy of robots. We hope to bring together outstanding researchers and graduate students to discuss current trends, problems, and opportunities in semantic action (policy) representations, encouraging communication and common practices, such as sharing datasets, among scientists in this field.
We encourage 1-2 page extended abstracts of relevant work that has been previously published or that will be presented at the main conference. Accepted abstracts will be posted on the workshop website in a compiled yearbook and will not appear in the official IEEE proceedings. We encourage contributions from researchers as well as companies (both hardware and software). Reviewing is single-blind and will be carried out by the workshop chairs.
Please submit your abstract to the workshop's official email address firstname.lastname@example.org before *September 27, 2019*.
Authors of accepted papers will have the opportunity to present their work and ideas in a poster session. Two selected submissions will give a 15-minute talk at the workshop. Please indicate in your email whether you prefer a poster or an oral presentation.
Paper submission deadline: September 27, 2019
Notification of acceptance: October 20, 2019
Camera ready submission: October 31, 2019
Workshop day: November 08, 2019
Contact email: email@example.com
It has been a long-standing question whether robots can reach a human level of intelligence: understanding the essence of observed actions and imitating them even under different circumstances. Contemporary research in robotics and machine learning has attempted to answer this question from two different perspectives: one in a bottom-up manner, for instance by relying solely on perceived continuous sensory data, and the other in a top-down fashion, starting from the symbolic level. Although both lines of work have shown encouraging results, understanding and imitation of actions have yet to be fully solved.
Action semantics stands as a potential glue for bridging the gap between a symbolic action representation and its corresponding continuous, signal-level description. Semantic representation provides a tool for capturing the essence of an action by revealing its inherent characteristics. Semantic features thus help robots understand, learn, and generate policies to imitate actions in various styles and with different objects; the more descriptive the semantics, the greater the robot's capability and autonomy. In this full-day workshop, we aim to answer two major questions.
1. What have we learned from action semantics? In recent years, there have been substantial contributions to semantic policy and action representation in the fields of robotics, computer vision, and machine learning. We would therefore like to invite experts from academia to comment on recent advances in semantic reasoning, addressing the problem of linking continuous sensory experiences with symbolic constructions in order to couple perception and execution of actions. This is of fundamental importance for easing the symbol grounding problem in robotics.
2. How much semantic policy and action representation has been transferred from controlled lab setups to industrial environments? We would like to invite researchers from industry and initiate a discussion between the academic and industrial communities. Such a discussion catalyzes interaction between the two communities by addressing the scalability and generalization problems that still remain unsolved. In this respect, we would like to discuss how to transfer our current knowledge and experience with semantic policies to new domains, for instance industrial assembly tasks, with very little human intervention.
This workshop focuses on new technologies that allow robots to learn generic semantic models for different tasks. We will bring together researchers from diverse fields, including robotics, computer vision, and machine learning, to review the most recent scientific achievements, identify the next breakthrough topics, and propose new directions for the field.
----The topics listed below are indicative but by no means exhaustive:
● AI-Based Methods
o Learning and adaptive systems
o Probability and statistical methods
o Action grammars/libraries
o Machine learning techniques for semantic representations
o Spatiotemporal event encoding
● Reasoning Methods in Robotics and Automation
o Signal to symbol transition (Symbol grounding/Object anchoring)
o Different levels of abstraction
o Semantics of manipulation actions
o Semantic policy representation
o Context modeling methods
o Concept formulation
● Human Behavior Recognition
o Learning from demonstration
o Object-action relations
o Bottom-up and top-down perception
● Task, Geometric, and Dynamic Level Plans and Policies
o PDDL high-level planning
o Task and motion planning methods
● Human-Robot Interaction
o Prediction of human intentions
o Linking linguistic and visual data
----*Invited Speakers (all confirmed)*
* Kei Okada, The University of Tokyo, Japan. http://www.jsk.t.u-tokyo.ac.jp/~k-okada/index-e.html
* Tanja Schultz, University of Bremen, Germany. https://www.uni-bremen.de/en/csl/team/staff/prof-dr-ing-tanja-schultz/
* Georg von Wichert, Siemens, Germany. https://www.linkedin.com/in/georg-von-wichert-7a74796/
* Stefanos Nikolaidis, University of Southern California, USA. https://stefanosnikolaidis.net/
* Xiaoping Chen, University of Science and Technology of China, China. http://ai.ustc.edu.cn/en/people/xpchen.php
* Joseph Lim, University of Southern California, USA. https://viterbi-web.usc.edu/~limjj/
* Darius Burschka, Technical University of Munich, Germany. http://robvis01.informatik.tu-muenchen.de/
* Chris Paxton, NVIDIA Robotics Lab, USA. https://research.nvidia.com/person/chris-paxton
* Jesse Thomason, University of Washington, USA. https://jessethomason.com/
* Karinne Ramirez-Amaro, Chalmers University of Technology, Sweden
* Eren Erdal Aksoy, Halmstad University, Sweden
* Yezhou Yang, Arizona State University, USA
* Shiqi Zhang, SUNY Binghamton, USA
* Michael Beetz, University of Bremen, Germany
* Yiannis Aloimonos, University of Maryland, USA
* Tamim Asfour, Karlsruhe Institute of Technology, Germany
* Florentin Wörgötter, University of Göttingen, Germany