Dear Fellow Roboticists,
We are thrilled to announce the release of the UT Campus Object Dataset (CODa) for egocentric perception on urban-scale autonomous mobile robots. CODa is the largest multiclass, multimodal urban mobile robotics dataset to date, with 1.3 million 3D bounding box annotations for 53 object classes, 204 million annotated points for 24 terrain classes, and globally consistent pseudo-ground-truth poses.
CODa contains 8.5 hours of multimodal sensor data: hardware-synchronized high-resolution 3D point clouds and stereo RGB images, RGB-D videos, and 9-DOF IMU data. We provide 58 minutes of ground-truth annotations, including 3D bounding boxes with instance IDs, 3D semantic annotations of urban terrain, and globally consistent pseudo-ground-truth localization. We repeatedly traverse the same geographic locations across different weather conditions and times of day. The data collection routes cover a variety of environments, including large indoor atriums, busy food courts, outdoor sidewalks, and mixed-traffic roads.
The CODa release includes the labeled multimodal dataset, pretrained 3D object detection models, and a dataset development kit.
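To give a quick sense of working with the data, below is a minimal, hypothetical Python sketch of loading one LiDAR frame and its 3D box annotations. The file names, binary layout, and JSON fields shown are illustrative assumptions only, not the actual devkit API; please consult the development kit on the project website for the real interface.

    # Hypothetical example: load one LiDAR frame and its 3D boxes.
    # File names, binary layout, and JSON fields below are illustrative
    # assumptions -- see the CODa devkit for the actual API.
    import json
    import numpy as np

    def load_point_cloud(bin_path: str) -> np.ndarray:
        """Read a raw LiDAR scan stored as float32 (x, y, z, intensity) tuples."""
        return np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)

    def load_boxes(json_path: str) -> list[dict]:
        """Read 3D bounding box annotations from a per-frame JSON file."""
        with open(json_path) as f:
            return json.load(f)["annotations"]  # assumed top-level key

    points = load_point_cloud("sequence0/frame_000000.bin")  # assumed path
    boxes = load_boxes("sequence0/frame_000000.json")        # assumed path
    for box in boxes:
        print(box["class"], box["instance_id"], box["center"])  # assumed fields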
Project website: https://amrl.cs.utexas.edu/coda
Paper preprint: https://arxiv.org/abs/2309.13549
Regards,
Joydeep

----------------
Joydeep Biswas
Associate Professor
Department of Computer Science
The University of Texas at Austin