Deeper Notions of Correctness in Image-Based DNNs: Lifting Properties from Pixel to Entities

Authors: Felipe Toledo, David Shriver, Sebastian Elbaum, and Matthew B. Dwyer

Venue: 2023 ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE)

Abstract:

Deep Neural Networks (DNNs) that process images are being widely used for many safety-critical tasks, from autonomous vehicles to medical diagnosis. Currently, DNN correctness properties are defined at the pixel level over the entire input. Such properties are useful for exposing system failures related to sensor noise or adversarial attacks, but they cannot capture features that are relevant to domain-specific entities or reflect richer types of behaviors. To overcome this limitation, we envision the specification of properties based on the entities that may be present in the image input, capturing their semantics and how they change. Creating such properties today is difficult, as it requires determining where the entities appear in images, defining how each entity can change, and writing a specification that is compatible with each particular V&V client. We introduce an initial framework structured around those challenges to assist in automatically generating Domain-specific Entity-based properties, leveraging object detection models to identify entities in images and creating properties based on entity features. Our feasibility study provides initial evidence that the new properties can uncover interesting system failures, such as how changes in skin color can modify the output of a gender classification network. We conclude by analyzing the framework's potential to implement the vision and by outlining directions for future work.
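To illustrate the intuition behind entity-based properties, the sketch below is a minimal, hypothetical example rather than the paper's implementation. It assumes a user-supplied `detector` that returns entity bounding boxes and a `classifier` that maps an image to a label, and it checks that perturbing only a detected entity's appearance (a simple color shift standing in for a domain-specific change such as skin tone) leaves the classification unchanged.

```python
import numpy as np

# Hypothetical illustration of an entity-based correctness property
# (not the paper's artifact): locate entities with an object detector,
# perturb a feature of each entity, and require the prediction to hold.

def shift_region_color(image, box, delta):
    """Apply a color offset inside an entity's bounding box.

    image: HxWx3 float array in [0, 1]; box: (x1, y1, x2, y2) ints;
    delta: length-3 RGB offset standing in for an entity-level change.
    """
    x1, y1, x2, y2 = box
    perturbed = image.copy()
    perturbed[y1:y2, x1:x2] = np.clip(perturbed[y1:y2, x1:x2] + delta, 0.0, 1.0)
    return perturbed

def entity_invariance_holds(image, classifier, detector, delta=(0.1, 0.05, 0.0)):
    """Check that changing each detected entity's color leaves the label unchanged.

    `classifier` and `detector` are assumed callables supplied by the user;
    they are not part of the paper's framework.
    """
    original_label = classifier(image)
    for box in detector(image):
        perturbed = shift_region_color(image, box, np.array(delta))
        if classifier(perturbed) != original_label:
            return False  # counterexample: an entity-level change flipped the output
    return True
```

A check like this, phrased over entity features rather than raw pixels, is the kind of specification the envisioned framework would generate automatically and hand to a V&V client.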

Download: [Pre-print] [Paper] [Video]