Souradeep Dutta: Sherlock - A Tool For Verification Of Neural Network Feedback Systems
University of Colorado, Boulder | Website
We present an approach for the synthesis and verification of neural network controllers for closed-loop dynamical systems modelled as ordinary differential equations. Feedforward neural networks are ubiquitous when it comes to approximating functions, especially in the machine learning literature. The proposed verification technique tries to construct an over-approximation of the system trajectories using a combination of tools such as Sherlock and Flow*. In addition to computing reach sets, we incorporate counterexamples, or bad traces, into the synthesis phase of the controller as well. We alternate between verification and counterexample generation until the system outputs a fully verified controller, or training fails to produce a neural network that complies with the desired specifications. We demonstrate the effectiveness of our approach over a suite of benchmarks ranging from $2$ to $17$ variables.
[slides]
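To make the alternation between verification and counterexample generation concrete, here is a minimal, self-contained Python sketch of that loop on a toy one-dimensional system. The `simulate` and `verify` functions are invented stand-ins: real Sherlock and Flow* compute guaranteed over-approximations of the reach sets rather than sampling, and the "controller" here is a single gain rather than a neural network.

```python
# Toy sketch of the verify / generate-counterexample / retrain loop described
# above. Real Sherlock + Flow* compute guaranteed over-approximations of the
# reach sets; a sampled grid of simulations stands in for that step here,
# purely to illustrate the control flow.
import numpy as np

def simulate(gain, x0, steps=50, dt=0.1):
    """Closed loop x' = x + u with u = -gain * x; returns final |x|."""
    x = x0
    for _ in range(steps):
        x = x + dt * (x - gain * x)
    return abs(x)

def verify(gain, tol=0.1):
    """Stand-in for reach-set verification: sample the initial set [-1, 1]."""
    for x0 in np.linspace(-1.0, 1.0, 41):
        if simulate(gain, x0) > tol:
            return False, x0                  # bad trace: counterexample found
    return True, None

gain = 0.5
for _ in range(20):
    ok, cex = verify(gain)
    if ok:
        print(f"verified controller: gain = {gain:.1f}")
        break
    gain += 0.5   # crude stand-in for retraining on the counterexample `cex`
else:
    print("synthesis failed to produce a verified controller")
```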
Taylor Johnson: Challenges for Perception Verification in Autonomy
Vanderbilt University | Website
The ongoing renaissance in artificial intelligence (AI) has led to the advent of machine learning (ML) methods deployed within components for sensing, actuation, and control in safety-critical cyber-physical systems (CPS). This is enabling autonomy in systems such as autonomous vehicles, swarm robots, and other CPS with learning-enabled components (LECs). However, as demonstrated in part through recent accidents in semi-autonomous/autonomous CPS and by adversarial ML attacks, ensuring that such components, particularly those used for sensing and perception, operate reliably in all scenarios is extraordinarily challenging. We will briefly overview methods for assuring specifications in autonomous CPS using our nnv (neural network verification) software tool (https://github.com/verivital/nnv). Next, we will focus on challenges in the specification and verification of perception systems, including recent results on using nnv to prove robustness of neural networks used in perception tasks, such as image classification, applied to the VGG16/VGG19 networks that achieve high levels of accuracy on ImageNet.
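As an illustration of the local-robustness property mentioned above (the label must not change for any perturbation within an L-infinity ball around an image), here is a hedged Python sketch. NNV establishes such properties exhaustively with set-based reachability over the actual networks; the sampling check below is only a falsification heuristic on an invented linear "classifier", included to make the specification concrete.

```python
# Illustrative sketch of the local-robustness specification: the label should
# not change for any perturbation within an L-infinity ball of radius eps.
# Sampling can only find violations, never prove robustness (set-based
# reachability, as in NNV, is needed for a proof).
import numpy as np

def toy_classifier(x, W, b):
    return int(np.argmax(W @ x + b))           # stand-in for VGG16/VGG19

def robust_by_sampling(x, eps, W, b, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    label = toy_classifier(x, W, b)
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if toy_classifier(np.clip(x + delta, 0.0, 1.0), W, b) != label:
            return False                       # counterexample found: not robust
    return True                                # no violation found (NOT a proof)

rng = np.random.default_rng(1)
W, b = rng.normal(size=(10, 784)), rng.normal(size=10)
x = rng.uniform(0.0, 1.0, size=784)            # a fake 28x28 "image"
print("robust (sampling only):", robust_by_sampling(x, eps=0.01, W=W, b=b))
```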
Galen Edward Mullins: Machine Learning for Testing and Evaluating Autonomous Systems
Johns Hopkins Applied Physics Laboratory
Autonomous systems present a unique challenge to the testing community that cannot be fully addressed by existing methods for verification and validation. Traditional testing methods for unmanned systems rely heavily on field testing according to a handful of vignettes designed by subject matter experts. However, as illustrated by recent self-driving car accidents, autonomous systems can often fail in unexpected ways. Given the complexity of these systems, coupled with the complexity and uncertainty of the operating environment, real-world testing alone will never be sufficient to appropriately calibrate trust in an autonomous system. Ongoing research conducted at JHU/APL seeks to enhance this process through a combination of machine learning (ML), mixed-reality simulation, and surrogate modeling, whereby we aim to discover adversarial test cases in simulation to inform real-world experimentation. We have applied these ML-enabled techniques to real-world cyber-physical systems across a variety of domains, including air, ground, sea-surface, and underwater vehicles. This talk will discuss JHU/APL's framework for applying this family of techniques to the test and evaluation of autonomous systems, with particular focus on the challenges associated with translating results across application domains.
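The sketch below is a hypothetical, heavily simplified illustration of searching simulation scenarios for adversarial (low-safety-margin) test cases. The scenario parameters and `simulate_margin` function are invented; a real pipeline would wrap a high-fidelity or mixed-reality simulator and use a learned surrogate model to guide the search rather than simple local perturbation.

```python
# Toy adversarial test-case discovery: alternate between random exploration of
# the scenario space and local perturbation of the worst scenario found so far.
import numpy as np

def simulate_margin(scenario):
    """Fake simulator: returns a safety margin (<= 0 means the system fails)."""
    obstacle_gap, sensor_noise = scenario
    return obstacle_gap - 3.0 * sensor_noise - 0.5

def find_adversarial_cases(n_rounds=30, n_local=10, seed=0):
    rng = np.random.default_rng(seed)
    worst = None                                   # (margin, scenario)
    for _ in range(n_rounds):
        # exploit: perturb the worst scenario so far; explore: fresh random draws
        base = worst[1] if worst else rng.uniform([0.0, 0.0], [2.0, 1.0])
        local = base + rng.normal(scale=0.1, size=(n_local, 2))
        random_draws = rng.uniform([0.0, 0.0], [2.0, 1.0], size=(5, 2))
        for scenario in np.vstack([local, random_draws]):
            margin = simulate_margin(scenario)
            if worst is None or margin < worst[0]:
                worst = (margin, scenario)
    return worst

margin, scenario = find_adversarial_cases()
print(f"worst margin {margin:.2f} at scenario {np.round(scenario, 2)}")
```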
Constance Heitmeyer: Assuring the Safety and Correctness of Autonomous Systems: Challenges and Promising New Methods
U.S. Naval Research Laboratory
Increasingly autonomous systems are being deployed in safety-critical applications. How to obtain assurance that these systems behave safely and correctly is a critical problem. This talk organizes potential challenges in obtaining assurance of autonomy into three categories: 1) challenges that have been largely solved for both autonomous systems and other critical systems, 2) challenges that remain unsolved for autonomous systems and for other critical systems, and 3) challenges that are unsolved and unique to autonomous systems. It also describes methods we have introduced to assure the safety and correctness of a space robot that will perform many critical operations autonomously and summarizes promising new research in assuring autonomy that relies on machine learning.
Kristin Rozier: Safe Learning: A Challenge Talk
Iowa State University | Website
This workshop is on safe learning; but what is safe learning? I can think of at least five definitions, all of which would benefit the future of cyber-physical system design and deployment. This talk will propose a formal definition for safe learning, highlight the state of the art for each facet of this definition, and issue challenge questions. When it comes to safety, we still have a lot to learn.
[slides]
Georgios Fainekos: Gray-box Adversarial Testing for Control Systems with Machine Learning Components
Arizona State University | Website
Neural Networks (NN) have been proposed in the past as an effective means for both modeling and control of systems with very complex dynamics. However, despite the extensive research, NN-based controllers have not been adopted by industry for safety-critical systems. The primary reason is that systems with learning-based controllers are notoriously hard to test and verify. Even harder is the analysis of such systems against system-level specifications. In this work, we provide a gradient-based method for searching the input space of a closed-loop control system in order to find adversarial samples against system-level requirements. Our experimental results show that, combined with randomized search, our method outperforms nonlinear global optimization methods.
[slides]
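As a hedged illustration of the general idea (not the paper's algorithm), the following sketch searches an input space for a violation of a toy "system-level" robustness function using finite-difference gradient descent with randomized restarts. The `robustness` function is invented and stands in for evaluating a specification on a closed-loop simulation trace.

```python
# Gradient-based falsification sketch: descend the robustness value of a
# requirement (positive = satisfied, negative = violated) over the input space,
# restarting from random inputs if a descent run stalls.
import numpy as np

def robustness(u):
    """Toy spec robustness; pretend this runs a simulation and scores the trace."""
    return 1.0 - np.sum(np.sin(3.0 * u) ** 2)

def grad_descent_falsify(u0, lr=0.05, iters=200, h=1e-4):
    u = u0.copy()
    for _ in range(iters):
        g = np.array([(robustness(u + h * e) - robustness(u - h * e)) / (2 * h)
                      for e in np.eye(len(u))])   # finite-difference gradient
        u -= lr * g                               # descend the robustness value
        if robustness(u) < 0:
            return u                              # adversarial input found
    return None

rng = np.random.default_rng(0)
for restart in range(10):                         # randomized restarts
    u_adv = grad_descent_falsify(rng.uniform(-1, 1, size=3))
    if u_adv is not None:
        print("violation at input", u_adv, "robustness", robustness(u_adv))
        break
```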
Jim Kapinski: Application of light-weight verification methods to delivery robots
Amazon
Lightweight verification methods, such as property monitoring, enable automated evaluation of system-level behaviors of autonomous cyber-physical systems. This talk describes how lightweight methods can be used to improve the autonomy and reliability of delivery robots. A description of how these methods can be used in a detailed virtual environment to perform validation is provided. Also, we describe methods to post-process data from field tests to automate performance evaluation and to perform real-time data analysis, which can provide diagnostic information at runtime. The talk concludes with a discussion of specifications for autonomous systems, along with ongoing research challenges.
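A minimal sketch of offline property monitoring over a logged trace, under an invented bounded-response requirement ("whenever an obstacle is within 2 m, speed drops below 0.5 m/s within 1 s"). The signal names, thresholds, and log are hypothetical; production monitors would typically use a temporal-logic specification language and handle incomplete windows at the end of the trace.

```python
# Post-process a field-test log against a simple bounded-response property.
def monitor(trace, dist_thresh=2.0, speed_thresh=0.5, deadline=1.0):
    """trace: time-sorted list of (time_s, obstacle_dist_m, speed_mps) samples."""
    violations = []
    for i, (t, dist, _) in enumerate(trace):
        if dist < dist_thresh:
            slowed = any(s < speed_thresh
                         for (t2, _, s) in trace[i:] if t2 <= t + deadline)
            if not slowed:
                violations.append(t)              # deadline missed at time t
    return violations

# A made-up field-test log:
log = [(0.0, 5.0, 1.2), (0.5, 1.8, 1.1), (1.0, 1.5, 0.9), (1.4, 1.4, 0.4),
       (2.0, 3.0, 1.0)]
print("violations at times:", monitor(log))
```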
Yasser Shoukry: Formal Verification of End-to-End Deep Reinforcement Learning
University of California, Irvine | Website
From simple logical constructs to complex deep neural network models, Artificial Intelligence (AI) agents are increasingly controlling physical/mechanical systems. Self-driving cars, drones, and smart cities are just a few examples of such systems. However, despite the explosion in the use of AI within a multitude of cyber-physical systems (CPS) domains, the safety and reliability of these AI-enabled CPS remain understudied. Mathematically based techniques for the specification, development, and verification of software and hardware systems, also known as formal methods, hold the promise of providing rigorous analysis of the reliability and safety of AI-enabled CPS. In this talk, I will discuss our work on applying formal verification techniques to provide formal certification of the safety of autonomous vehicles controlled by end-to-end machine learning models, and on the synthesis of certifiable end-to-end neural network architectures.
[slides]
Nathan Fulton: Safe Learning with Visual Inputs
MIT-IBM Watson AI Lab | Website
Formally constrained reinforcement learning (FCRL) algorithms ensure that control systems only take safe actions by enforcing formally specified constraints on the action space. FCRL enables the use of RL in safety-critical settings, and also substantially increases the sample efficiency of RL by focusing exploration on only the safe subsets of state-action space. Existing approaches toward formally constrained RL assume an oracle mapping from raw inputs (e.g., images) into a symbolic state space. Both safety constraints and reward signals are then expressed in the symbolic state space. This talk introduces an end-to-end deep RL algorithm that uses a learned template-matching model to map visual inputs into a symbolic state space. An empirical evaluation of this approach demonstrates that it preserves safety constraints even while optimizing for control objectives that are not captured by the symbolic mapping.
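The following is a hedged, toy sketch of that structure: a raw observation is mapped to a symbolic state, and a formally specified constraint on the action space filters the learner's proposed action before it is executed. The `symbolic_state` mapper, the constraint, and the fallback action are invented placeholders, not the talk's actual models.

```python
# Constrained-action wrapper: observation -> symbolic state -> safe action set.
import random

def symbolic_state(obs):
    """Stand-in for a learned template-matching model: observation -> (pos, vel)."""
    return obs["pos"], obs["vel"]              # pretend these came from pixels

def safe_actions(pos, vel, actions, dt=0.1):
    """Formal constraint on the action space: next position must stay non-negative."""
    return [a for a in actions if pos + dt * (vel + dt * a) >= 0.0]

def policy(pos, vel, actions):
    return random.choice(actions)              # stand-in for the learned RL policy

actions = [-1.0, 0.0, 1.0]                     # candidate accelerations
obs = {"pos": 0.02, "vel": -0.3}               # fake "visual" observation
pos, vel = symbolic_state(obs)
allowed = safe_actions(pos, vel, actions) or [max(actions)]   # fall back if empty
print("executing action:", policy(pos, vel, allowed))         # only a = 1.0 is safe here
```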
Hussein Sibai: Accelerating formal verification of autonomous multi-agent systems using symmetry transformations
University of Illinois at Urbana-Champaign | Website
The state space explosion problem poses a hard barrier in the safety verification of autonomous systems in multi-agent environments. Is complete state information about every vehicle really needed for verification? Our common sense suggests otherwise, as the behavior of an autonomous vehicle is often invariant under changes of position and orientation, permutations of agents, and other transformations. Symmetry in dynamical systems formalizes this notion of similarity of states. In this work, we show how symmetry transformations can accelerate safety verification and tackle state space explosion. Symmetry transformations of equivariant systems map solutions to other solutions. We build upon this result to produce reachsets from previously computed reachsets. We first show how this can be used to accelerate the verification of a single system: we augment the standard simulation-based verification algorithm with a new procedure that attempts to verify the safety of the system starting from a new initial set of states by transforming previously computed reachsets. We show significant improvements in verification time on several benchmarks. Finally, we demonstrate online monitoring using reachability analysis for an autonomous car that is being built at UIUC. We envision applying the above theory to verify the safety of the car against many vehicles, pedestrians, and fixed obstacles.
[slides]
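A toy sketch of the reachset-transformation idea for the special case of translation-invariant dynamics: a reach set computed once from a canonical initial set is translated to serve as the reach set from a shifted initial set, instead of being recomputed. The box representation, dynamics, and caching scheme below are invented for illustration and are far simpler than the actual algorithm.

```python
# Reuse a cached reach set by applying a symmetry (here, translation) instead of
# recomputing it for every new initial set.
import numpy as np

def compute_reachset(center, radius, steps=10, dt=0.1):
    """Expensive stand-in: axis-aligned boxes covering a translation-invariant flow."""
    lo, hi = center - radius, center + radius
    boxes = []
    for k in range(steps):
        drift = np.array([dt * k, 0.0])        # constant drift in x
        boxes.append((lo + drift, hi + drift))
    return boxes

cache = {}
def reachset_via_symmetry(center, radius):
    key = round(radius, 6)
    if key not in cache:                       # compute once from the origin
        cache[key] = compute_reachset(np.zeros(2), radius)
    # translate the cached reach set to the new initial center
    return [(lo + center, hi + center) for (lo, hi) in cache[key]]

r1 = reachset_via_symmetry(np.array([0.0, 0.0]), 0.1)
r2 = reachset_via_symmetry(np.array([5.0, 2.0]), 0.1)   # reuses the cached result
print(r2[0])
```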
Lujo Bauer: On evasion attacks against machine learning in practical settings
Carnegie Mellon University | Website
Much research over the past decade has shown that machine learning algorithms are susceptible to adversarial examples---carefully crafted, minimal perturbations to inputs that lead to misclassification at test time. The majority of such research, however, has been carried out with toy datasets such as MNIST and without consideration for the practical constraints that need to be overcome when attacking a real-world system. In this talk I'll examine two real-world uses of machine learning algorithms---for face recognition and for malware classification. I'll describe the constraints that attacks on these systems would have to overcome in practice, and I'll show that overcoming these constraints is, unfortunately, well within attackers' capabilities. I'll also reflect on the inadequacy of many of the state-of-the-art defenses against adversarial examples.
[slides]
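To make the notion of an adversarial example concrete, here is a generic fast-gradient-sign-style sketch against an invented linear "classifier". It ignores the physical and format constraints (e.g., wearable perturbations, valid executables) that are the focus of the talk, and the model, dimensions, and perturbation budget are arbitrary.

```python
# Targeted sign-gradient perturbation on a toy linear model: push the input
# toward the runner-up class within a small L-infinity budget.
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 8)), rng.normal(size=3)   # toy 3-class linear model
x = rng.uniform(0.0, 1.0, size=8)
label = int(np.argmax(W @ x + b))
runner_up = int(np.argsort(W @ x + b)[-2])

# For a linear model, the gradient of (runner-up score - true score) w.r.t. the
# input is exactly W[runner_up] - W[label]; step along its sign and clip to [0, 1].
x_adv = np.clip(x + 0.4 * np.sign(W[runner_up] - W[label]), 0.0, 1.0)
print("original:", label, "adversarial:", int(np.argmax(W @ x_adv + b)))
```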
Bjorn Andersson: Mixed-Trust Computing - Guarding Untrusted Components in Learning Systems
Carnegie Mellon University | Website
Verifying complex Cyber-Physical Systems (CPS) is increasingly important given the push to deploy safety-critical autonomous features. Unfortunately, traditional verification methods do not scale to the complexity of these systems and do not provide systematic methods to protect verified properties when not all the components can be verified. To address these challenges, this work proposes a real-time mixed-trust computing framework that combines verification and protection. The framework introduces a new task model, where an application task can have both an untrusted and a trusted part. The untrusted part allows complex computations supported by a full OS with a real-time scheduler running in a VM hosted by a trusted hypervisor. The trusted part is executed by another scheduler within the hypervisor and is thus protected from the untrusted part. If the untrusted part fails to finish by a specific time, the trusted part is activated to preserve safety (e.g., prevent a crash), including its timing guarantees. This framework is the first to allow the use of untrusted components for CPS critical functions while preserving logical and timing guarantees, even in the presence of malicious attackers. We present the framework design and implementation along with the schedulability analysis and the coordination protocol between the trusted and untrusted parts. We also present our Raspberry Pi 3 implementation, along with experiments showing the behavior of the system under failures of untrusted components, and a drone application to demonstrate its practicality.
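As a hypothetical, single-process illustration of the temporal-failure handling described above (not the framework's hypervisor-based implementation), the sketch below runs an untrusted controller under a deadline and switches to a simple trusted fallback when the deadline is missed.

```python
# Deadline-guarded control step: use the untrusted result if it arrives in time,
# otherwise fall back to the trusted, verified safe action.
import concurrent.futures
import time

def untrusted_controller(state):
    # Complex, unverified computation (e.g., an ML-based planner); may overrun.
    time.sleep(0.2)
    return {"thrust": 0.8}

def trusted_fallback(state):
    return {"thrust": 0.0}                     # simple, verified safe action

def control_step(state, deadline_s=0.05):
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        fut = pool.submit(untrusted_controller, state)
        try:
            return fut.result(timeout=deadline_s)     # untrusted part met its deadline
        except concurrent.futures.TimeoutError:
            return trusted_fallback(state)            # temporal-failure handling

print(control_step({"altitude": 10.0}))        # prints the trusted fallback action
```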