==========================================
RELEASE OF LU4R
*http://sag.art.uniroma2.it/lu4r.html*
==========================================
We are happy to announce the release of LU4R, an adaptive spoken Language Understanding system For(4) Robots, the result of a collaboration between the Semantic Analytics Group (*SAG*) at the University of Rome, Tor Vergata, and the Laboratory of Cognitive Cooperating Robots (*Lab.Ro.Co.Co.*) at Sapienza University of Rome.
LU4R receives as input one or more transcriptions of a spoken command and produces a logical form made of one or more linguistic predicates reflecting the actions intended by the user. The predicates, as well as their arguments, are consistent with a linguistically motivated representation and coherent with the environment perceived by the robot. The interpretation process is sensitive to different configurations of the environment (possibly synthesized through a Semantic Map or other approaches), which represent all the available information about the entities populating the operating context.
LU4R consists of a cascade of morphological, syntactic and semantic processes, relying on external libraries (e.g., the Stanford NLP chain) and on components specific to statistical semantic role labeling. It is currently released for English ([1]), and a version for Italian is already available ([2]). The language understanding components have been trained on realistic robotic commands, partly derived from the RoboCup@Home Corpus [3]. The chain is fully implemented in Java and released according to a Client/Server architecture, in order to decouple the chain from the specific robotic platform that will use it. The robot engineer just needs to initialize the LU4R server; communication is then supported through standard HTTP requests. The current release also contains facilities for using third-party state-of-the-art speech-to-text services. These make the integration of a ROS-operated robot with a fully operational chain very simple: LU4R can even be invoked from a smartphone!
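As a rough illustration of the Client/Server idea, the sketch below builds a JSON request body carrying the n-best transcriptions of a spoken command, which a client would then POST to a running LU4R server. Note that the field name "hypotheses" and the example endpoint in the comments are assumptions made for exposition only, not LU4R's actual protocol; please consult the documentation on the website for the real request format.

```java
// Illustrative sketch only: the JSON field name and the endpoint shown in
// the comments are hypothetical, not LU4R's actual API.
public class Lu4rClientSketch {

    // Build a JSON body carrying the n-best transcriptions of a command.
    static String buildRequestBody(String... hypotheses) {
        StringBuilder sb = new StringBuilder("{\"hypotheses\":[");
        for (int i = 0; i < hypotheses.length; i++) {
            if (i > 0) sb.append(',');
            sb.append('"')
              .append(hypotheses[i].replace("\"", "\\\""))
              .append('"');
        }
        return sb.append("]}").toString();
    }

    public static void main(String[] args) {
        String body = buildRequestBody("take the book on the table");
        // The body would then be sent to the LU4R server, e.g.:
        //   POST http://localhost:9090/service/nlu   (address and path assumed)
        System.out.println(body);
    }
}
```

Because the transcriptions travel as a plain HTTP payload, the robot-side code stays independent of the language-understanding chain, which is the point of the decoupled architecture described above.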
LU4R has been presented at IJCAI 2016 [1] and at the International Conference of the Italian Association for Artificial Intelligence (AI*IA) 2016 [2]:
[1] "*A Discriminative Approach to Grounded Spoken Language Understanding in Interactive Robotics*", *Emanuele Bastianelli, Danilo Croce, Andrea Vanzo, Roberto Basili, Daniele Nardi*, Proceedings of IJCAI '16, NY (USA), 2016.
[2] "*Spoken Language Understanding for Service Robotics in Italian*", *Andrea Vanzo, Danilo Croce, Giuseppe Castellucci, Roberto Basili, Daniele Nardi*, Proceedings of AI*IA 2016: 477-489.
[3] "*RoboCup@Home Spoken Corpus: Using Robotic Competitions for Gathering Datasets*" (http://fei.edu.br/rcs/2014/SpecialTrackDev/robocupsymposium2014_submission_33.pdf), *Emanuele Bastianelli, Luca Iocchi, Daniele Nardi, Giuseppe Castellucci, Danilo Croce, Roberto Basili*, Proceedings of the RoboCup Symposium, 2014.
You can find more information about LU4R and download it at: *http://sag.art.uniroma2.it/lu4r.html*
For any question or support, please contact: c***e@info.uniroma2.it or v***o@dis.uniroma1.it
Enjoy!
The LU4R team