Runtime and Complex Systems Verification
Sagar Chaki: Verifying Cyber-Physical Systems by Combining Software Model Checking with Hybrid Systems Reachability
Carnegie Mellon University | Website
Cyber-physical systems (CPS) span the communication, computation, and control domains. Creating a single, complete, and detailed model of a CPS is not only difficult but, in terms of verification, probably not useful; current verification algorithms are likely intractable for such all-encompassing models. Specific CPS domains, however, have specialized formal reasoning methods that can successfully analyze certain aspects of the integrated system. To prove overall system correctness, care must be taken to ensure the interfaces of the proofs are consistent and leave no gaps, which can be difficult since they may use different model types and describe different aspects of the CPS. This work proposes a bridge between two important verification methods: software model checking and hybrid systems reachability. A contract automaton (CA) expresses both (1) the restrictions on the interactions between the application and the controller, and (2) the desired system invariants. A sound assume-guarantee style compositional proof rule decomposes the verification into two parts -- one verifies the application against the CA using software model checking, and the other verifies the controller against the CA using hybrid systems reachability analysis. In this way, the proposed method avoids state-space explosion due to the composition of discrete (application) and continuous (controller) behavior, and can leverage verification tools specialized for each domain. The power of the approach is demonstrated by verifying collision avoidance using models of a distributed group of communicating quadcopters, where the provided models are software code and continuous 2-D quadcopter dynamics. This is joint work with Dr. Stanley Bak from the Air Force Research Lab.
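To make the contract-automaton idea concrete, here is a minimal, hypothetical sketch (not the authors' tool): a CA whose states and events are invented for illustration, restricting how an application may interact with a controller. Each side of the compositional proof would be checked against such an automaton independently.

```python
# Illustrative contract automaton (CA) sketch. The states, events, and
# protocol below are hypothetical, not taken from the quadcopter case study.

class ContractAutomaton:
    def __init__(self, transitions, initial):
        self.transitions = transitions  # maps (state, event) -> next state
        self.state = initial

    def step(self, event):
        """Advance on an interaction event; return False if the CA rejects it."""
        key = (self.state, event)
        if key not in self.transitions:
            return False
        self.state = self.transitions[key]
        return True

# Toy contract: the application may only send a waypoint when the controller
# is idle, and must wait for an acknowledgment before sending the next one.
ca = ContractAutomaton(
    transitions={
        ("idle", "send_waypoint"): "busy",
        ("busy", "ack"): "idle",
    },
    initial="idle",
)

trace = ["send_waypoint", "ack", "send_waypoint", "ack"]
print(all(ca.step(e) for e in trace))  # a conforming trace is accepted
```

In the assume-guarantee decomposition, the software model checker would verify that every application trace is accepted by the CA, while reachability analysis would verify the controller under the assumption that its inputs respect the CA.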
[ slides ]
Taylor Johnson: Real-Time Reachability for Safety Verification of Autonomous Cyber-Physical Systems
Vanderbilt University | Website
The Simplex Architecture ensures the safe use of an unverifiable, complex controller, such as those arising in autonomous systems, by executing it in conjunction with a formally verified safety controller and a formally verified supervisory controller. Simplex enables the safe use of high-performance, untrusted, and complex control algorithms without requiring the complex controllers to be formally verified or certified. The supervisory controller should take over from the unverified complex controller if it misbehaves and transfer control to the safety controller. The supervisory controller should (1) guarantee the system never enters an unsafe state (safety), but should also (2) use the complex controller as much as possible (minimize conservatism). The problem of precisely and correctly defining the supervisory controller has previously been considered either using a control-theoretic optimization approach (LMIs) or through an offline hybrid systems reachability computation. In this work, we show that a combined online/offline approach, which uses aspects of the two earlier methods in conjunction with a real-time reachability computation, also maintains safety, but with significantly less conservatism, allowing the complex controller to be used more frequently. We demonstrate the advantages of this unified approach on a saturated inverted pendulum, where the verifiable region of attraction is over twice as large compared to the earlier approach. We present results of embedded hardware studies using both ARM processors on the BeagleBone Black and Atmel AVR (Arduino) microcontrollers. This is the first-ever demonstration of a hybrid systems reachability computation in real time on actual embedded platforms, and it required addressing significant technical challenges.
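The supervisory decision can be sketched as follows. This is a hedged toy, not the authors' implementation: the dynamics (a 1-D integrator x' = u), the safe set, the input bound, and the horizon are all made-up illustrative values. The key structure is the same, though: each decision period, an online interval over-approximation of the reachable states decides whether the complex controller may keep control.

```python
# Toy Simplex-style supervisor. Dynamics, bounds, and horizon are invented
# for illustration: x' = u with |u| <= u_bound, safe set [-1, 1].

def reach_interval(x_lo, x_hi, u_lo, u_hi, dt=0.01, steps=20):
    """Interval over-approximation of x' = u over a short horizon."""
    for _ in range(steps):
        x_lo += dt * u_lo
        x_hi += dt * u_hi
    return x_lo, x_hi

def supervisor(x, safe_lo=-1.0, safe_hi=1.0, u_bound=0.5):
    """Decide which controller runs next period: the complex controller may
    keep control only if every state reachable under any admissible input
    stays inside the safe set."""
    lo, hi = reach_interval(x, x, -u_bound, u_bound)
    if safe_lo <= lo and hi <= safe_hi:
        return "complex"   # provably safe for the next period
    return "safety"        # fall back to the verified safety controller

print(supervisor(0.0), supervisor(0.95))
```

Far from the boundary the reachable interval stays safe and the complex controller keeps running; near the boundary the over-approximation leaves the safe set and the supervisor switches, which is exactly the conservatism the online computation reduces relative to a purely offline region.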
We will conclude with ongoing research on formally modeling and verifying CPS, including swarm robotics controlled with distributed algorithms, automotive CPS, aerospace CPS including groups of UAVs, and the development of fundamental new modeling abstractions for designing CPS using extensions of Signal Temporal Logic (STL), done in conjunction with Toyota.
Bio: Taylor T. Johnson is an Assistant Professor of Electrical Engineering and Computer Science (EECS) at Vanderbilt University (since August 2016), where he directs the Verification and Validation for Intelligent and Trustworthy Autonomy Laboratory (VeriVITAL) and is a Senior Research Scientist in the Institute for Software Integrated Systems. Taylor was previously an Assistant Professor of Computer Science and Engineering (CSE) at the University of Texas at Arlington (September 2013 to August 2016). Taylor is a 2016 recipient of the AFOSR Young Investigator Research Program (YIP) award. Taylor earned a PhD in Electrical and Computer Engineering (ECE) from the University of Illinois at Urbana-Champaign in 2013, an MSc in ECE from Illinois in 2010, and a BSEE from Rice University in 2008. Taylor's research focus is developing formal verification techniques and software tools for cyber-physical systems (CPS), with the goals of improving safety, reliability, and security. Taylor has published over two dozen papers on these verification and validation methods and their applications across domains such as power and energy systems, aerospace, transportation systems, and robotics, two of which were recognized with best paper awards from the IEEE and IFIP, respectively. Taylor gratefully acknowledges the support of his group's research by AFRL, AFOSR, ARO, NSF (CISE CCF/SHF, CNS/CPS; ENG ECCS/EPCN), NVIDIA, ONR, Toyota, and USDOT.
Stefan Mitsch: ModelPlex: Verified Runtime Monitors and Verified Test Oracles for Safety of Cyber-Physical Systems
Carnegie Mellon University | Website
Formal methods play a crucial role in making cyber-physical systems (CPS) safe. They make strong guarantees about the system behavior if accurate models of the system can be obtained. In CPS, models are essential, but any model we could possibly build necessarily deviates from the real world; at the same time, any system we build according to the model blueprint will interact with an environment of potentially unverified third-party systems, or may even itself be built from unverified subcomponents. As a result, there is a crucial gap between the strong formal guarantees we obtain about the modeled behavior and the actual system behavior at runtime. This gap is rooted in the assumptions that a model makes about physics and the environment, which means that not even synthesizing systems from verified models in a correct-by-construction approach will guarantee safety.
This talk addresses the gap between models and reality by linking formal verification with system execution through verified monitoring. It describes ModelPlex, a method to summarize the safety-relevant features of verified models, by proof, as a comprehensive set of monitoring conditions, so that the conditions provably guarantee system safety at runtime. The talk highlights two use cases of ModelPlex monitoring conditions: as system safety envelopes, and as test oracles.
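The shape of such a runtime check can be sketched as follows. This is a hand-written toy, not output of the ModelPlex tool: the model (a vehicle at distance d from an obstacle, speed v, maximum braking b, maximum acceleration A, control period eps) and its conditions are illustrative assumptions.

```python
# Toy ModelPlex-style monitor for a hypothetical braking model. A real
# ModelPlex monitor is synthesized by proof from a verified dL model; the
# arithmetic conditions below are derived by hand for illustration only.

def monitor(pre, post, A=2.0, b=4.0, eps=0.1):
    """Check one control cycle: pre holds the state before the decision,
    post the controller's chosen acceleration. Returns True iff the cycle
    is consistent with the model's safety-relevant conditions."""
    a = post["a"]
    # Controller monitor: the chosen acceleration respects the model bounds.
    within_bounds = -b <= a <= A
    # Safety condition: even after accelerating for one period, the vehicle
    # can still brake to a stop before the obstacle.
    v_next = pre["v"] + a * eps
    travelled = pre["v"] * eps + 0.5 * a * eps * eps
    can_still_stop = v_next * v_next / (2 * b) <= pre["d"] - travelled
    return within_bounds and can_still_stop

print(monitor({"v": 5.0, "d": 10.0}, {"a": 2.0}))
```

Used as a safety envelope, a False result would trigger a fallback action; used as a test oracle, it flags simulation or field traces that leave the verified model's assumptions.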
[ slides ]
Aircraft and AI Verification and Certification
David Dill: Towards Formal Verification of Deep Neural Networks
Stanford University | Website
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique can also be used to measure a network's robustness to adversarial inputs. Our approach is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
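The difficulty the ReLU introduces can be illustrated with a toy that is much simpler than the actual algorithm: on each activation pattern (each choice of which ReLUs are active) the network is purely linear, so properties can be checked case by case. The weights and the 1-D input below are invented for illustration; the real technique extends the simplex method to resolve these cases lazily rather than enumerating all of them.

```python
# Toy ReLU case split (NOT the Reluplex algorithm itself): exact output
# range of y = w2 . relu(w1*x + b1) for a 1-D input by enumerating each
# neuron's active/inactive phase. Real verifiers face up to 2^n cases.
import itertools

def case_split_bounds(x_lo, x_hi, w1, b1, w2):
    best_lo, best_hi = float("inf"), float("-inf")
    for pattern in itertools.product([0, 1], repeat=len(w1)):
        lo, hi = x_lo, x_hi
        # Restrict x so each neuron is in its assumed phase (requires w != 0).
        for w, b, active in zip(w1, b1, pattern):
            thresh = -b / w
            if (w > 0) == bool(active):
                lo = max(lo, thresh)   # phase constraint lower-bounds x
            else:
                hi = min(hi, thresh)   # phase constraint upper-bounds x
        if lo > hi:
            continue                   # infeasible activation pattern
        for x in (lo, hi):             # linear on [lo, hi]: check endpoints
            y = sum(wo * active * (w * x + b)
                    for wo, w, b, active in zip(w2, w1, b1, pattern))
            best_lo, best_hi = min(best_lo, y), max(best_hi, y)
    return best_lo, best_hi

# A property like "y <= 5 for all x in [-1, 1]" holds iff the upper bound <= 5.
lo, hi = case_split_bounds(-1.0, 1.0, (2.0, -3.0), (-1.0, 1.0), (1.0, 1.0))
print(hi <= 5.0)
```

Exhaustive enumeration is exponential in the number of ReLUs, which is why the scalability result above (handling networks an order of magnitude larger) is significant.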
[Based on joint work with Guy Katz, Clark Barrett, Kyle Julian and Mykel Kochenderfer. Link]
[ slides ]
Kristin Rozier: Safe CPS Implementation: How Do We Measure That?
Iowa State University | Website
With the explosion of formal methods capable of successfully proving safety properties about even the most advanced cyber-physical systems (CPS), this is a good time to assess: where are the holes? Safe CPS implementation requires some degree of automation, yet as we replace tasks previously performed by humans, our notion of certification is challenged; in some cases it is no longer defined at all. Safe CPS operation requires runtime verification (RV), yet the burgeoning collection of RV technologies remains comparatively untested due to a dearth of benchmarks, competitions, or other means of objectively evaluating their performance against design-time techniques. This is not for a lack of effort; it is due to a glaring gap in our understanding of what the new solutions would need to look like, of what exactly we now need to measure. This talk will point out and name the elephants in the room, filling the spaces between our current advanced CPS verification techniques.
Communication and Memory Security
Stephen Magill: Formal Verification of a Vehicle-to-Vehicle (V2V) Messaging System
Galois | Website
Vehicle-to-Vehicle (V2V) communication is a connected-vehicle standard that will likely be mandated in the U.S. within the coming decade. V2V, in which automobiles broadcast to one another, promises improved safety by providing collision warnings, but it also poses a security risk. At the heart of V2V is the communication messaging system, specified in SAE J2735 using Abstract Syntax Notation One (ASN.1). ASN.1 is a data-description language used in everything from email to internet protocols to cellular telephony systems. However, ASN.1 is large and complex, and its implementations have been the source of nearly 100 MITRE CVE vulnerability reports. We present the formal verification of an ASN.1 encoder/decoder (i.e., serializer/parser) pair. We describe generating an encoder/decoder pair, implemented in C, using the High-Assurance ASN.1 Workbench. We define self-consistency for encoder/decoder pairs, which approximates functional correctness without requiring a separate specification of correctness. We then verify self-consistency and memory safety using symbolic simulation via the Software Analysis Workbench.
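The self-consistency property can be sketched with a toy encoding (this is not the J2735/ASN.1 encoder, just a length-prefixed stand-in): round-tripping any message through the encoder and decoder must recover it exactly, with no separate correctness specification needed.

```python
# Self-consistency sketch for an encoder/decoder pair, using a toy
# length-prefixed format (illustrative only, not ASN.1 encoding rules).
import struct

def encode(msg: bytes) -> bytes:
    """Toy encoding: 4-byte big-endian length, then the payload."""
    return struct.pack(">I", len(msg)) + msg

def decode(buf: bytes) -> bytes:
    (n,) = struct.unpack(">I", buf[:4])
    if len(buf) != 4 + n:
        raise ValueError("malformed buffer")
    return buf[4:]

def self_consistent(msg: bytes) -> bool:
    """decode is a left inverse of encode: round-tripping recovers msg."""
    return decode(encode(msg)) == msg

print(all(self_consistent(m) for m in [b"", b"BSM", bytes(range(256))]))
```

The verified development proves the analogous property for the generated C code over all inputs symbolically, rather than testing sample messages as this sketch does.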
Saman Zonouz: Trustworthy Critical Infrastructures via Physics-Aware Just-Ahead-Of-Time Verification
Rutgers University | Website
Critical cyber-physical infrastructures, such as the power grid, integrate networks of computational and physical processes to provide people across the globe with essential functionalities and services. Protecting these critical infrastructures is a vital necessity because their failure would have a debilitating impact on economic security and public health and safety. Our research and development projects aim to provide real-world solutions that facilitate the secure and reliable operation of next-generation critical infrastructures; they require interdisciplinary research efforts across adaptive systems and network security, cyber-physical systems, and trustworthy real-time detection and response mechanisms. In this talk, I will focus on real past and potential future threats against critical infrastructures and embedded devices, and discuss the challenges in the design, implementation, and analysis of security solutions to protect cyber-physical platforms. I will introduce novel classes of working systems that we have developed to overcome these challenges in practice, and conclude with several concrete directions for future research. Additionally, I will briefly go over our other projects on x86 malware/memory analysis and embedded systems security solutions that support access control applications in cyber-physical settings.
Bio: Saman Zonouz has been an Assistant Professor in the Electrical and Computer Engineering Department at Rutgers University since September 2014, where he directs the 4N6 Cyber Security and Forensics Laboratory. His research has been recognized with the NSF CAREER Award in 2015, a Google Security Award in 2015, a Top-3 Demo at IEEE SmartGridComm 2015, a Faculty Fellowship Award from AFOSR in 2013, the Best Student Paper Award at IEEE SmartGridComm 2013, the University EARLY CAREER Research Award in 2012, and the Provost Research Award in 2011. The 4N6 lab's research supporters include the National Science Foundation (NSF), Department of Homeland Security (DHS), Office of Naval Research (ONR), Department of Energy (DOE), Advanced Research Projects Agency-Energy (ARPA-E), Department of Education, Siemens Research Labs, Wind River, GrammaTech, Google, ETAP, and Fortinet Corporation. In addition to research publications, Saman's research efforts have resulted in several tech-to-market transition initiatives, such as the Kaedago Inc. startup company (founded as a result of the ARPA-E project) and a Siemens-funded project to adopt his controller program analysis algorithms (originally supported by the NSF CPS program) for programmable logic controllers (PLCs). Saman's current research focuses on systems security and privacy; trustworthy cyber-physical critical infrastructures and embedded platforms; binary/malware analysis and reverse engineering; and adaptive intrusion-tolerance architectures. Saman has served as a chair, program committee member, guest editor, and reviewer for top international conferences and journals, and serves on the Editorial Board of IEEE Transactions on Smart Grid. He obtained his Ph.D. in Computer Science, specifically on intrusion-tolerance architectures for cyber-physical infrastructures, from the University of Illinois at Urbana-Champaign in 2011.
Lennart Beringer: Verification using the VST
Princeton University | Website
The Verified Software Toolchain (VST) is an expressive verification environment for C embedded in the Coq proof assistant, with a machine-checked connection to the CompCert compiler and automation support for modularly verifying functional correctness properties using forward symbolic execution for separation logic. The talk will sketch how the embedding in a proof assistant enables linking proofs of code correctness with proofs of model-level properties, using the example of cryptographic security, and briefly discuss a recent verification of a nonblocking implementation of a communication protocol from a vehicle control system that uses low-level concurrency primitives such as atomic exchange.
[ slides ]
Simulation- and Testing-Based Methods
Jim Kapinski: Emerging Cyber-Physical System V&V Challenges
Toyota | Website
Cyber-physical systems (CPSs) are used in many safety-critical applications, such as automobiles, aircraft, and medical devices. Test and verification activities are a critical and costly aspect of modern CPS development processes. In an ideal world, test and verification activities would prove that a CPS design never violates its safety properties, but this is not a practical expectation. In practice, the main goals of test and verification activities are to increase confidence in system correctness and to provide insights that can be used in the test and calibration phases of development. To achieve these goals, one promising direction is to use simulation-based test and verification methods, which can be applied to many practical systems but are hindered by various limitations, such as scalability. This talk provides an overview of simulation-based approaches to V&V and some emerging challenges for test and verification of industrial CPS designs, including requirements engineering issues and difficulties in verifying systems that have components based on machine learning.
Patricia Derler: From Requirements to Testing, Validation and Verification
National Instruments | Website
The increased complexity, distribution, heterogeneity, and tighter time-to-market requirements of cyber-physical systems (CPS), as well as the need to rapidly change and highly customize such systems, lead to a growing demand for efficient testing, validation, and verification.
Industrial solutions that target a wide variety of CPS typically enable the development of tests and test sequences by providing suitable abstractions and APIs for low-level functionality. While such tools often come with carefully assembled and comprehensive guidelines and best practices, they still require significant input and in-depth system knowledge from engineers. We envision a framework that takes as input high-level requirements for a CPS, ideally specified in an unambiguous mathematical model, and uses these requirements to automate the process of generating tests that verify and validate the behavior of the CPS.
This talk provides insights into industrial tools for V&V and outlines the challenges in the automation of this process. We will also discuss recent efforts around verifying the timing behavior of a CPS, which is often crucial for optimal or even safe operation. We describe the design and initial implementation of a distributed testbed to verify the timing of a distributed CPS through a systematic framework.
[ slides ]
Sridhar Duggirala: Scalable Dynamic Analysis for Large Linear Systems
University of Connecticut | Website
An interesting development in dynamic analysis is a verification technique in which an n-dimensional linear system can be verified using a mere n+1 sample simulations. This technique greatly improves the scalability of verification. In this talk, I will present the basic algorithm for efficient dynamic analysis and its enhancements, and present verification results for systems with dimensions ranging from 10 to 1,000.
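The idea rests on superposition: for a linear system x' = Ax, the trajectory of any initial state is an affine combination of the trajectories of a center point and n axis-offset points, so n+1 simulations determine the whole reachable set. The 2-D matrix, step size, and test point below are made-up illustrative values.

```python
# Superposition sketch behind n+1-simulation verification of linear systems.
# Toy 2-D system x' = A x; A, dt, steps, and the test point are illustrative.

def simulate(x0, A=((0.0, 1.0), (-2.0, -0.5)), dt=0.001, steps=2000):
    """Forward-Euler simulation of x' = A x from initial state x0."""
    x = list(x0)
    for _ in range(steps):
        dx = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        x = [x[i] + dt * dx[i] for i in range(2)]
    return x

center = simulate((0.0, 0.0))                          # center trajectory
basis = [simulate((1.0, 0.0)), simulate((0.0, 1.0))]   # n = 2 offset points

# Reconstruct the endpoint for x0 = (0.3, -0.7) without simulating it:
alpha = (0.3, -0.7)
predicted = [center[i]
             + sum(a * (basis[k][i] - center[i]) for k, a in enumerate(alpha))
             for i in range(2)]
direct = simulate((0.3, -0.7))
print(all(abs(p - d) < 1e-9 for p, d in zip(predicted, direct)))
```

Because the reconstruction is exact for linear dynamics (up to floating-point error), the same n+1 simulations bound the behavior of every initial state in a set, which is what makes the 1000-dimensional results feasible.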
[ slides ]
Ivan Ruchkin: IPL: A Language for Model Integration Properties in Cyber-Physical Systems
Carnegie Mellon University | Website
A core challenge in designing and verifying cyber-physical systems (CPS) is overcoming the inherent heterogeneity of models and analyses from multiple disciplines. To bridge semantic gaps between models, recent approaches map these models to simpler and more flexible abstractions (such as architectural views) and integrate models at a higher level of abstraction. This integration, however, comes at a cost of reduced expressiveness because complex behavioral semantics of the models are ignored. For such approaches it may be impossible to automatically verify important behavioral properties across multiple models, leaving systems vulnerable to subtle bugs.
This talk discusses work in progress on regaining the lost expressiveness of integration by introducing an Integration Property Language (IPL) for multi-model specifications. IPL enables the specification and verification of properties that depend on the detailed behavioral semantics of models, while retaining the ability to perform abstract system-wide reasoning. IPL formulas can be checked automatically, without loss of semantic modularity, by combining satisfiability solvers and model checkers.
[ slides ]