Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Bio and research interests of David Shriver.

Posts

Artifacts

DNNF: Deep Neural Network Falsification

DNNF is a tool that applies falsification methods, such as adversarial attacks, to check DNN correctness problems. Adversarial attacks provide a powerful repertoire of scalable algorithms for property falsification. DNNF leverages these techniques by employing reductions that automatically transform correctness problems into equivalent sets of adversarial robustness problems, to which the attacks can then be applied.

Links: [Github] [Documentation] [Video] [Docker]
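The reduction idea can be illustrated with a toy example (a conceptual sketch, not DNNF's actual API; every name below is hypothetical): falsifying a correctness property "for all x in a domain, output 0 stays at least as large as output 1" means searching for a counterexample, which is exactly what an attack does against a wrapper network whose "violating class" encodes the property's negation.

```python
import random

def network(x):
    """Toy 'DNN': maps a scalar input to two output scores."""
    return [1.0 - x * x, x]  # score_0, score_1

# Correctness property: for all x in [0, 2], score_0 >= score_1.
LO, HI = 0.0, 2.0

def property_holds(x):
    y = network(x)
    return y[0] >= y[1]

def reduced_classifier(x):
    """Wrapper 'network': class 0 iff the property holds at x.
    Falsifying the property == finding an input classified as 1."""
    return 0 if property_holds(x) else 1

def random_attack(trials=10000, seed=0):
    """Stand-in for an adversarial attack: random search for an
    input the wrapper classifies as 1 (a property violation)."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(LO, HI)
        if reduced_classifier(x) == 1:
            return x  # counterexample found
    return None

cex = random_attack()
print(cex)  # a counterexample x where 1 - x*x < x
```

In DNNF the wrapper construction is automated and the random search is replaced by real adversarial attack algorithms; the sketch only shows the shape of the reduction.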

DNNV: Deep Neural Network Verification

DNNV is a framework for verifying deep neural networks (DNNs). DNN verification takes a neural network and a property over that network, and checks whether the property holds. DNNV standardizes the network and property input formats so that multiple verification tools can run on a single network and property. This facilitates both verifier comparison and artifact reuse.

Links: [Github] [Documentation] [Video] [Docker]
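The "one problem, many verifiers" idea can be sketched in miniature (hypothetical names throughout; this is not DNNV's actual API): once the network and property live in a single standardized problem object, any verifier that accepts that object can be run on it and their answers compared.

```python
def network(x):
    """Toy network: one input, one output."""
    return 2.0 * x + 1.0

# Standardized problem: does network(x) stay below 10 for x in [0, 4]?
PROBLEM = {
    "network": network,
    "domain": (0.0, 4.0),
    "prop": lambda y: y < 10.0,
}

def grid_verifier(problem, steps=1000):
    """Toy falsifier: grid-samples the domain looking for a violation.
    Finding none proves nothing, so it answers 'unknown'."""
    lo, hi = problem["domain"]
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        if not problem["prop"](problem["network"](x)):
            return "violated"
    return "unknown"

def interval_verifier(problem):
    """Toy verifier: for this monotone network, checking the domain
    endpoints suffices to decide the property."""
    lo, hi = problem["domain"]
    ok = all(problem["prop"](problem["network"](x)) for x in (lo, hi))
    return "holds" if ok else "violated"

# The same problem object runs on both verifiers unchanged:
print(grid_verifier(PROBLEM), interval_verifier(PROBLEM))  # unknown holds
```

Real verifiers differ in input formats and supported operations; DNNV's contribution is doing this normalization for actual DNN verifiers (ONNX networks plus a common property language), not the toy dispatch shown here.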

Publications

At the End of Synthesis: Narrowing Program Candidates

David Shriver, Sebastian G. Elbaum, Kathryn T. Stolee

Program synthesis is succeeding in supporting the generation of programs within increasingly complex domains. The use of weaker specifications, such as those consisting of input/output examples or test cases, has helped to fuel the success of program synthesis by lowering adoption barriers. Yet, employing weaker specifications has the side effect…
Read more

Download: [Paper]

Assessing the Quality and Stability of Recommender Systems

David Shriver

Recommender systems help users to find products they may like when lacking personal experience or facing an overwhelmingly large set of items. However, assessing the quality and stability of recommender systems can present challenges for developers. First, traditional accuracy metrics, such as precision and recall, for validating the quality of…
Read more

Download: [Paper]

Toward the development of richer properties for recommender systems

David Shriver

The performance of recommender systems is commonly characterized by metrics such as precision and recall. However, these metrics can only provide a coarse characterization of the system, as they offer limited intuition and insights on potential system anomalies, and may fail to provide a developer with an understanding of the…
Read more

Download: [Paper] [Poster]

Evaluating Recommender System Stability with Influence-Guided Fuzzing

David Shriver, Sebastian Elbaum, Matthew B. Dwyer, David S. Rosenblum

Recommender systems help users to find products or services they may like when lacking personal experience or facing an overwhelming set of choices. Since unstable recommendations can lead to distrust, loss of profits, and a poor user experience, it is important to test recommender system stability. In this work, we…
Read more

Download: [Paper] [Poster]

Refactoring Neural Networks for Verification

David Shriver, Dong Xu, Sebastian Elbaum, Matthew B. Dwyer

Deep neural networks (DNNs) are growing in capability and applicability. Their effectiveness has led to their use in safety critical and autonomous systems, yet there is a dearth of cost-effective methods available for reasoning about the behavior of a DNN. In this paper, we seek to expand the applicability and…
Read more

Download: [Paper]

Systematic Generation of Diverse Benchmarks for DNN Verification

Dong Xu, David Shriver, Matthew B. Dwyer, Sebastian Elbaum

The field of verification has advanced due to the interplay of theoretical development and empirical evaluation. Benchmarks play an important role in this by supporting the assessment of the state-of-the-art and comparison of alternative verification approaches. Recent years have witnessed significant developments in the verification of deep neural networks, but…
Read more

Download: [Paper]

Reducing DNN Properties to Enable Falsification with Adversarial Attacks

David Shriver, Sebastian Elbaum, Matthew B. Dwyer

Deep Neural Networks (DNNs) are increasingly being deployed in safety-critical domains, from autonomous vehicles to medical devices, where the consequences of errors demand techniques that can provide stronger guarantees about behavior than just high test accuracy. This paper explores broadening the application of existing adversarial attack techniques for the falsification…
Read more

Download: [Paper] [Artifact] [Tool] [Video]

DNNV: A Framework for Deep Neural Network Verification

David Shriver, Sebastian Elbaum, Matthew B. Dwyer

Despite the large number of sophisticated deep neural network (DNN) verification algorithms, DNN verifier developers, users, and researchers still face several challenges. First, verifier developers must contend with the rapidly changing DNN field to support new DNN operations and property types. Second, verifier users have the burden of selecting a…
Read more

Download: [Paper] [Tool] [Video]

Distribution Models for Falsification and Verification of DNNs

Felipe Toledo, David Shriver, Sebastian Elbaum, Matthew B. Dwyer

DNN validation and verification approaches that are input distribution agnostic waste effort on irrelevant inputs and report false property violations. Drawing on the large body of work on model-based validation and verification of traditional systems, we introduce the first approach that leverages environmental models to focus DNN falsification and verification…
Read more

Download: [Paper] [Artifact]