Tue 25 Jun 2019 14:00 - 14:20 at 224AB - Reasoning and Optimizing ML Models Chair(s): Martin Maas

Despite the tremendous advances made in the last decade in developing useful machine-learning applications, their wider adoption has been hindered by the lack of strong assurance guarantees about their behavior. In this paper, we consider how formal verification techniques developed for traditional software systems can be repurposed for the verification of reinforcement-learning-enabled systems, a particularly important class of machine learning systems. Rather than enforcing safety by examining and altering the structure of a complex neural network implementation, our technique uses blackbox methods to synthesize deterministic programs: simpler, more interpretable approximations of the network that can nonetheless guarantee desired safety properties are preserved, even when the network is deployed in unanticipated or previously unobserved environments. Our methodology frames the problem of neural network verification in terms of a counterexample- and syntax-guided inductive synthesis procedure over these programs. The synthesis procedure searches for both a deterministic program and an inductive invariant over an infinite state transition system that represents a specification of the application's control logic. Additional specifications defining environment-based constraints can also be provided to further refine the search space. Synthesized programs deployed in conjunction with a neural network implementation dynamically enforce safety conditions by monitoring and preventing potentially unsafe actions proposed by neural policies. Experimental results over a wide range of cyber-physical applications demonstrate that software-inspired formal verification techniques can be used to realize trustworthy reinforcement learning systems with low overhead.
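The runtime-shielding idea described in the abstract can be illustrated with a minimal sketch: a neural policy proposes an action, the shield checks whether the modeled successor state still satisfies the inductive invariant, and if not it overrides the proposal with the synthesized deterministic program's action. All names here (`BOUND`, `synthesized_program`, `step`, the 1-D state, the interval invariant) are hypothetical stand-ins for illustration, not the paper's actual artifacts.

```python
# Minimal shielding sketch (hypothetical toy model, not the paper's system).
# Assumptions: a 1-D state, a linear fallback controller standing in for the
# synthesized deterministic program, and an interval invariant |x| <= BOUND
# standing in for the inductive invariant found by the synthesis procedure.

BOUND = 10.0  # hypothetical safe region: the invariant is |x| <= BOUND

def synthesized_program(state):
    """Deterministic fallback controller (stand-in for the synthesized program)."""
    return -0.5 * state  # drive the state back toward the origin

def step(state, action):
    """Toy transition function of the environment model."""
    return state + action

def invariant_holds(state):
    """Check the inductive invariant on a model state."""
    return abs(state) <= BOUND

def shielded_action(state, proposed_action):
    """Accept the neural policy's proposed action if the modeled successor
    state stays inside the invariant; otherwise override it with the
    synthesized program's action."""
    if invariant_holds(step(state, proposed_action)):
        return proposed_action
    return synthesized_program(state)
```

Because the override path only fires when the invariant would be violated, the neural policy runs unmodified in the common case, which is consistent with the low-overhead claim in the abstract.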

Conference Day
Tue 25 Jun

Displayed time zone: Tijuana, Baja California

14:00 - 15:30
Reasoning and Optimizing ML Models (PLDI Research Papers) at 224AB
Chair(s): Martin Maas (Google)
14:00
20m
Talk
An Inductive Synthesis Framework for Verifiable Reinforcement Learning
PLDI Research Papers
He Zhu (Rutgers University, USA), Zikang Xiong (Purdue University), Stephen Magill, Suresh Jagannathan (Purdue University)
Media Attached
14:20
20m
Talk
Programming Support for Autonomizing Software
PLDI Research Papers
Wen-Chuan Lee (Purdue University), Peng Liu (Purdue University), Yingqi Liu (Purdue University, USA), Shiqing Ma (Purdue University, USA), Xiangyu Zhang (Purdue University)
14:40
20m
Talk
Wootz: A Compiler-Based Framework for Fast CNN Pruning via Composability
PLDI Research Papers
Hui Guan (North Carolina State University), Xipeng Shen (North Carolina State University), Seung-Hwan Lim (Oak Ridge National Laboratory, USA)
Media Attached File Attached
15:00
20m
Talk
Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness
PLDI Research Papers
Greg Anderson (University of Texas at Austin, USA), Shankara Pailoor (University of Texas at Austin, USA), Isil Dillig (UT Austin), Swarat Chaudhuri (Rice University)
Media Attached