Closing the Gap Between Sensor Inputs and Driving Properties: A Scene Graph Generator for CARLA

Published:

Authors: Trey Woodlief, Felipe Toledo, Sebastian Elbaum, and Matthew B. Dwyer

Venue: International Conference on Software Engineering (ICSE)

Abstract:

The software engineering community has increasingly taken up the task of assuring safety in autonomous driving systems, applying software engineering principles to create techniques to develop, validate, and verify these systems. However, developing and analyzing these techniques requires extensive sensor datasets and execution infrastructure with the relevant features and known semantics for the task at hand. While the community has invested substantial effort in gathering and cultivating large-scale datasets and developing simulation infrastructure with varying features, semantic understanding of this data has remained out of reach, forcing researchers to rely on limited, manually crafted datasets or bespoke simulation environments to ensure the desired semantics are met. To address this, we developed CARLASGG, a plugin for the widely-used autonomous driving simulator CARLA that extracts relevant ground-truth spatial and semantic information from the simulator state at runtime in the form of scene graphs, enabling online and post-hoc automated reasoning about the semantics of a scenario and its associated sensor data. The tool has been successfully deployed in multiple prior software engineering approach evaluations, which we describe to demonstrate its utility. It also lets clients adjust the precision of the semantic information captured in the scene graph to suit their application's needs. We provide a detailed description of the tool's design, capabilities, and configurations, with additional documentation available alongside the tool's online source: https://github.com/less-lab-uva/carla_scene_graphs.
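For intuition, the sketch below shows the kind of ground-truth extraction CARLASGG automates: querying live simulator state through CARLA's standard Python API and encoding entities and spatial relations as a graph (here with networkx). The relation name, distance threshold, and graph schema are illustrative assumptions, not the plugin's actual interface; see the repository above for the real API.

```python
# Illustrative sketch (not CARLASGG's actual API): build a minimal scene
# graph from live CARLA simulator state using the standard CARLA Python
# API and networkx. Relation names and thresholds are assumptions.
import carla
import networkx as nx

NEAR_THRESHOLD_M = 10.0  # assumed distance cutoff for a "near" relation


def build_scene_graph(world: carla.World, ego: carla.Actor) -> nx.DiGraph:
    """Snapshot the world into a graph: one node per vehicle, with edges
    encoding a coarse spatial relation to the ego vehicle."""
    g = nx.DiGraph()
    ego_location = ego.get_transform().location
    g.add_node(ego.id, kind="ego", type_id=ego.type_id)

    for actor in world.get_actors().filter("vehicle.*"):
        if actor.id == ego.id:
            continue
        g.add_node(actor.id, kind="vehicle", type_id=actor.type_id)
        dist = ego_location.distance(actor.get_location())
        if dist <= NEAR_THRESHOLD_M:
            g.add_edge(actor.id, ego.id, relation="near", distance_m=dist)

    return g


if __name__ == "__main__":
    client = carla.Client("localhost", 2000)  # default CARLA host/port
    client.set_timeout(5.0)
    world = client.get_world()
    # Assumes the ego vehicle was spawned with the conventional
    # role_name attribute "hero".
    ego = next(a for a in world.get_actors().filter("vehicle.*")
               if a.attributes.get("role_name") == "hero")
    graph = build_scene_graph(world, ego)
    print(graph.nodes(data=True))
    print(graph.edges(data=True))
```

The actual plugin captures far richer semantics (lanes, signals, pedestrians, and more) and, as the abstract notes, lets clients tune the precision of the captured relations; the fixed threshold above stands in for that configurability.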

Download: [Pre-print] [Paper] [Artifact]