SD4AS: Safe Driving for Autonomous Systems


Published:

Authors: Felipe Toledo

Venue: Ph.D. Dissertation, University of Virginia

Abstract:

Autonomous Driving Systems (ADSs) are becoming increasingly widespread, with companies deploying them for tasks such as taxi services, delivery, and personal transportation. As these systems become integral to daily life, ensuring their safe operation is crucial. However, recent high-profile safety incidents, including fatal collisions and traffic violations, have exposed the limitations of current validation approaches. These failures stem from three fundamental problems that span the entire ADS lifecycle. First, there is a semantic gap between the high-dimensional sensor data that ADSs consume, such as camera images and LiDAR point clouds, and the high-level safe driving properties against which they must be validated, preventing the formal specification and automated evaluation of safety requirements. Second, safety property violations are often discovered late in ADS development, requiring costly remediation efforts such as data collection and model retraining. Third, during deployment, ADSs lack mechanisms for continuous monitoring and real-time correction to prevent violations of safe driving properties and avoid potential accidents.

To address these problems, this dissertation introduces SD4AS (Safe Driving for Autonomous Systems), a framework designed to improve safe driving property conformance across the entire ADS lifecycle. SD4AS bridges the semantic gap by leveraging scene graphs, structured representations that capture entities and their spatial relationships, as an abstraction of raw sensor data. Combined with a specification language, this abstraction enables the definition and automated evaluation of complex safety properties, capturing 76% of the rules in the Virginia driving code. To improve conformance during development, SD4AS first quantifies the extent to which datasets contain the scenarios necessary to validate properties, enabling developers to identify gaps in training data. Beyond data adequacy, for ADSs that rely on machine learning components, SD4AS introduces property-aware optimization that treats safety rule violations as training errors, guiding the learning process to produce components that exhibit safe behaviors by construction, as demonstrated by fine-tuning two ADSs to reduce driving infractions. To maintain conformance during deployment, SD4AS synthesizes runtime monitors directly from property specifications, enabling continuous evaluation of safety rules during ADS operation. To enable real-world monitoring, SD4AS integrates scene graph generators that extract entities and their spatial relationships in the driving domain directly from camera images. Lastly, SD4AS introduces a correction mechanism that proactively adjusts control outputs when violations are imminent, maintaining property conformance through real-time interventions and successfully reducing infractions across three different ADS architectures.

Through the development of these approaches and their empirical evaluation across multiple ADSs, this dissertation advances the state of the art in ADS safety assurance. The contributions span the entire validation pipeline, from bridging sensor inputs and safety specifications, through improving conformance during development, to enabling continuous monitoring and correction during deployment, bringing us closer to safe autonomous vehicles.
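To give a sense of the scene-graph abstraction and property evaluation described in the abstract, the sketch below is purely illustrative: the entity and relation names, the `ahead_of_in_lane` relation, and the following-distance rule are assumptions made for exposition, not SD4AS's actual specification language or implementation.

```python
# Illustrative sketch: a tiny scene graph and one safety property checked over it.
# The schema and the example rule are assumptions, not SD4AS's actual design.
from dataclasses import dataclass, field

@dataclass
class Entity:
    kind: str                       # e.g., "ego", "vehicle", "pedestrian"
    attrs: dict = field(default_factory=dict)

@dataclass
class SceneGraph:
    entities: list                  # nodes: entities detected in one frame
    relations: list                 # edges: (subject_idx, relation, object_idx, value)

    def related(self, relation):
        """Yield (subject, object, value) triples carrying the given relation."""
        for s, r, o, v in self.relations:
            if r == relation:
                yield self.entities[s], self.entities[o], v

def safe_following_distance(graph, min_gap_m=10.0):
    """Example property: the ego keeps at least `min_gap_m` to any vehicle
    directly ahead in its lane, unless it is already braking."""
    for lead, ego, gap in graph.related("ahead_of_in_lane"):
        if lead.kind == "vehicle" and ego.kind == "ego":
            if gap < min_gap_m and not ego.attrs.get("braking", False):
                return False        # the property is violated in this scene
    return True

# One camera frame abstracted into a scene graph (all values are made up).
scene = SceneGraph(
    entities=[Entity("ego", {"speed_mps": 12.0, "braking": False}),
              Entity("vehicle", {"speed_mps": 8.0})],
    relations=[(1, "ahead_of_in_lane", 0, 7.5)],    # lead car 7.5 m ahead of ego
)
print(safe_following_distance(scene))               # False: gap under 10 m, not braking
```

Because such a check operates on the abstraction rather than on raw pixels or point clouds, the same property definition can, in principle, be evaluated offline over datasets and online by a runtime monitor.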
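Similarly, for the property-aware optimization mentioned in the abstract, the minimal sketch below shows one way a safety-rule violation could be folded into a training loss. It assumes PyTorch and a hinge-style surrogate on following distance; it does not reproduce the objective actually used in the dissertation.

```python
# Illustrative sketch: add a differentiable penalty for violating a driving rule
# to an ordinary imitation loss, so violations contribute to the gradient.
# The kinematic surrogate, thresholds, and weights are assumptions for exposition.
import torch

model = torch.nn.Linear(4, 1)              # stand-in for an ADS speed-control head
features = torch.randn(2, 4)                # two toy driving situations
current_gap = torch.tensor([12.0, 9.0])     # current gap to the lead vehicle (m)
target_speed = torch.tensor([8.0, 6.0])     # expert speeds for imitation (m/s)

pred_speed = model(features).squeeze(-1)             # predicted ego speed (m/s)
pred_next_gap = current_gap - pred_speed * 1.0       # gap after one second, simplified

task_loss = torch.nn.functional.mse_loss(pred_speed, target_speed)
violation = torch.relu(10.0 - pred_next_gap).mean()  # hinge on a 10 m following rule
loss = task_loss + 0.1 * violation                   # rule violations become training error

loss.backward()   # gradients now also push predictions away from unsafe gaps
```

The idea is the one the abstract describes: instead of discovering violations late and retraining, the property itself shapes the optimization so the learned component tends to avoid infractions.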

Download: [Pre-print] [Paper]