CPS V&V I&F Workshop 2018: Talks

Table of Contents
  1. Jean-Baptiste Jeannin: Verifying Aircraft Collision Avoidance Neural Networks
  2. Taylor T. Johnson: Safety Assurance in Cyber-Physical Systems built with Learning-Enabled Components
  3. Kristin Rozier: Formal Methods for Common Sense: A Challenge
  4. Brandon Bohrer: Verified Runtime Monitoring: From Foundations to Practice
  5. Taisa Sima Kushner: Robust Data-Driven Control for Artificial Pancreas Systems Using Neural Networks
  6. Grant Passmore: An Introduction to Imandra - Cloud-native Automated Reasoning for the Masses
  7. Sriharsha Etigowni: I Look at Physics and Predict Control Flow! Just-Ahead-Of-Time Controller Recovery
  8. Aaron Kane: Towards Formalizing the System Safety Argument
  9. Jim Kapinski: Training and Verification of Learning-Enabled CPS
  10. Christof Budnik: Fertilized Testing of Train Braking Systems using Formal Models as Test Oracles
  11. Nathan Fulton: Verifiably Safe Reinforcement Learning
  12. Chris McKinnon

Jean-Baptiste Jeannin: Verifying Aircraft Collision Avoidance Neural Networks

University of Michigan | Website

The next generation of aircraft collision avoidance systems leverages Markov decision processes to optimize alerting logic. The resulting system determines the advisories given to pilots from a lookup table, but this table can grow very large. To enable the system to operate on limited hardware, the table was compressed into a deep neural network representation. However, ensuring that the neural network always issues safe advisories is paramount to certifying it for use in a safety-critical system. In this talk we will show how to define linearized regions in which each advisory can be safely given, allowing Reluplex, a neural network verification tool, to check whether unsafe advisories are ever issued.
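
As a rough illustration of the verification question, here is a minimal sketch in Python, with a randomly initialized toy network standing in for the (much larger) ACAS Xu advisory networks: interval bound propagation gives a cheap, conservative check of whether a hypothetical unsafe advisory could ever score highest inside a box-shaped input region, with inconclusive regions escalated to a complete solver such as Reluplex.

```python
# Minimal sketch (not the actual ACAS Xu networks or the Reluplex encoding):
# conservatively check, via interval bound propagation, whether a toy ReLU
# network could ever rank a hypothetical "unsafe" advisory highest anywhere
# inside a linearized input region given as a box.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)   # hidden layer
W2, b2 = rng.normal(size=(5, 8)), rng.normal(size=5)   # 5 advisory scores

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> Wx + b exactly."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def output_bounds(lo, hi):
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)      # ReLU is monotone
    return interval_affine(lo, hi, W2, b2)

# Box over (distance, relative angle, closing speed) -- illustrative only.
lo_in, hi_in = np.array([0.0, -0.1, 0.2]), np.array([1.0, 0.1, 0.5])
out_lo, out_hi = output_bounds(lo_in, hi_in)

UNSAFE = 3  # index of the advisory deemed unsafe in this region
# If the unsafe advisory's upper bound stays below some other advisory's
# lower bound, it can never be selected here; otherwise the check is
# inconclusive and a complete solver such as Reluplex must decide.
if out_hi[UNSAFE] < max(out_lo[i] for i in range(5) if i != UNSAFE):
    print("verified: unsafe advisory never issued in this region")
else:
    print("inconclusive: escalate region to a complete verifier")
```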


Taylor T. Johnson: Safety Assurance in Cyber-Physical Systems built with Learning-Enabled Components

Vanderbilt University | Website

The ongoing renaissance in artificial intelligence (AI) has led to the advent of machine learning methods deployed within components for sensing, actuation, and control in safety-critical cyber-physical systems (CPS), enabling autonomy in systems such as autonomous vehicles and swarm robots. However, as demonstrated in part by recent accidents involving semi-autonomous/autonomous CPS and by adversarial machine learning, ensuring that such components operate reliably in all scenarios is extraordinarily challenging. This talk will overview several approaches for verifying learning-enabled components (LECs), such as neural networks, both in isolation and in closed-loop CPS incorporating LECs. We will conclude with architectural solutions that enhance trust and safety assurance in autonomous CPS, building on supervisory control with the Simplex architecture, and will discuss future research directions for enhancing trust in machine learning components within CPS that we are exploring within the recently started DARPA Assured Autonomy and NSA/DoD Science of Security Lablet projects.
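
The Simplex supervisory pattern mentioned above can be sketched in a few lines of Python. Everything below (the dynamics, the envelope, both controllers) is a hypothetical stand-in rather than material from the talk; it only illustrates the switching logic.

```python
# Minimal sketch of the Simplex supervisory pattern (hypothetical dynamics
# and bounds): a high-performance learning-enabled controller acts only
# while a monitored safety envelope holds; otherwise a verified baseline
# controller takes over.
def lec_controller(x):          # learning-enabled component (stand-in)
    return 2.5 * x              # aggressive, unverified policy

def baseline_controller(x):     # simple verified fallback
    return -0.5 * x

def inside_envelope(x, u, x_max=10.0, u_max=5.0):
    # Decision logic: would applying u keep the state recoverable?
    return abs(u) <= u_max and abs(x + 0.1 * u) <= x_max

def simplex_step(x):
    u = lec_controller(x)
    if inside_envelope(x, u):
        return u                   # trust the LEC while it is provably safe
    return baseline_controller(x)  # fall back to the certified controller

x = 9.0
print(simplex_step(x))          # envelope violated -> baseline output -4.5
```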

[slides]


Kristin Rozier: Formal Methods for Common Sense: A Challenge

Iowa State University | Website

Safe learning and safe acting pose future foundational challenges in cyber-physical system (CPS) V&V (verification & validation).

Sanity checks are required to ensure that learning components integrate with safety guarantees resulting in provably correct actuation in CPS. They provide essential bounds that constrain acting to within safe limits. Operational practicality also demands that sanity checks be dynamic: they must be able to change with different mission modes, re-planning, or unexpected environmental conditions. Human interaction poses a wide range of additional challenges, including how to explain the purpose behind hard stops to humans onboard, and how to create and monitor additional sanity checks per human request during a mission.
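
A minimal sketch of what a dynamic sanity check might look like, with all check names and limits hypothetical: the monitor's predicates can be swapped at runtime as mission modes change or as an operator requests new checks mid-mission.

```python
# Minimal sketch of a dynamic sanity check (all limits hypothetical): a
# runtime monitor whose bounds can be replaced when the mission mode or the
# environment changes, or when an operator requests a new check.
class SanityMonitor:
    def __init__(self):
        self.checks = {}                       # name -> predicate over state

    def set_check(self, name, predicate):      # add or replace at runtime
        self.checks[name] = predicate

    def violations(self, state):
        return [n for n, p in self.checks.items() if not p(state)]

m = SanityMonitor()
m.set_check("altitude", lambda s: 0.0 <= s["alt"] <= 120.0)
m.set_check("battery",  lambda s: s["batt"] >= 0.2)

print(m.violations({"alt": 150.0, "batt": 0.5}))             # ['altitude']
m.set_check("altitude", lambda s: 0.0 <= s["alt"] <= 200.0)  # re-planned mode
print(m.violations({"alt": 150.0, "batt": 0.5}))             # []
```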

We explore the challenges posed by bounding safe learning and safe acting and delve into the research questions that arise when we strive toward using formal methods to guarantee common sense.


Brandon Bohrer: Verified Runtime Monitoring: From Foundations to Practice

Carnegie Mellon University | Website

In this talk, we discuss recent results and ongoing work bringing correct-by-construction runtime monitoring to bear on a realistic ground robot.

In the first part of the talk, we discuss the recently developed VeriPhy toolchain in depth. VeriPhy not only automatically inspects a safety proof of a differential dynamic logic (dL) model to generate monitoring conditions that are guaranteed to be correct, but also compiles those conditions down to executable code while maintaining formal safety guarantees, supporting end-to-end correctness on practical systems. The hurdles involved include soundly implementing real arithmetic, resolving non-determinism, and reconciling simplified physical models with the complexity of the real world.
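
To make the sound-real-arithmetic hurdle concrete, here is a minimal sketch (illustrative, not VeriPhy's actual compiled code): a dL-style monitor condition v^2 <= 2B(d - x) is evaluated with outward-rounded interval arithmetic, so floating-point error can only cause a false alarm, never a missed violation.

```python
# Minimal sketch of sound monitor evaluation: compute with outward-rounded
# intervals so that rounding error can never turn an unsafe state into a
# "safe" verdict.
import math

def widen(lo, hi):
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def i_mul(a, b):
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return widen(min(ps), max(ps))

def i_sub(a, b):
    return widen(a[0] - b[1], a[1] - b[0])

def monitor_ok(v, x, d, B):
    """True only if v^2 <= 2B(d - x) certainly holds despite rounding."""
    lhs = i_mul((v, v), (v, v))
    rhs = i_mul((2*B, 2*B), i_sub((d, d), (x, x)))
    return lhs[1] <= rhs[0]          # compare worst cases, else fail safe

print(monitor_ok(v=1.0, x=2.0, d=10.0, B=1.0))  # True: braking suffices
print(monitor_ok(v=5.0, x=9.0, d=10.0, B=1.0))  # False: fall back to safety
```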

In the second part of the talk, we will discuss the ongoing effort to apply the VeriPhy toolchain to ensure end-to-end safety of a realistic ground robot.

[slides]



Taisa Sima Kushner: Robust Data-Driven Control for Artificial Pancreas Systems Using Neural Networks

University of Colorado Boulder | Website

Artificial pancreas systems, which adjust insulin delivery to regulate blood glucose levels, present the most promising treatment strategy for individuals with type-1 diabetes. However, the human glucose-insulin system is highly nonlinear, and optimal control is limited by an inability to accurately predict future blood glucose levels for individuals. Multiple machine-learning based approaches to blood glucose prediction have been proposed; however, these "black-box" methods lack the guarantees necessary for use in safety-critical artificial pancreas systems. In this talk, we present a novel data-driven approach to control for artificial pancreas systems: we learn patient-specific models of human glucose-insulin physiology and use a mixed-integer optimization approach to control blood glucose levels in real time using the inferred models. In order to provide guarantees on the resulting models, we use quantile regression to fit multiple neural networks that predict upper and lower quantiles of future blood glucose levels, in addition to the mean. Using these models, we formulate a model-predictive control scheme that adjusts both basal and bolus insulin delivery to ensure that the risks of harmful hypoglycemia and hyperglycemia are bounded according to the quantile models, while the mean prediction remains close to the desired target level. We demonstrate how the scheme handles disturbances from large unannounced meals and sensor errors, as well as infeasibilities that result from situations where the uncertainties in future glucose predictions are too high. The approach is experimentally evaluated on data obtained from a set of 17 patients over the course of 40 nights per patient, as well as on 6 virtual patients obtained through the UVA-Padova simulator for type-1 diabetes.
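
The quantile-regression ingredient can be illustrated with the pinball loss on synthetic data (a toy stand-in for the patient-specific neural network models): minimizing the loss at level tau drives a predictor toward the tau-quantile of the conditional distribution, which is how the upper and lower glucose bounds are obtained.

```python
# Minimal sketch of quantile regression via the pinball loss (toy data,
# not the patient models from the talk).
import numpy as np

def pinball_loss(y, y_hat, tau):
    """Quantile (pinball) loss: the asymmetric penalty steers y_hat toward
    the tau-quantile of y's distribution."""
    e = y - y_hat
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

rng = np.random.default_rng(1)
y = rng.normal(loc=120.0, scale=15.0, size=5000)       # mg/dL, synthetic

# For a constant predictor, minimizing pinball loss recovers the empirical
# tau-quantile; with a neural network the same loss is applied per sample.
grid = np.linspace(80.0, 160.0, 801)
for tau in (0.05, 0.5, 0.95):
    best = grid[np.argmin([pinball_loss(y, c, tau) for c in grid])]
    print(f"tau={tau}: fitted {best:6.1f}  empirical {np.quantile(y, tau):6.1f}")
```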

[slides]


Grant Passmore: An Introduction to Imandra - Cloud-native Automated Reasoning for the Masses

Aesthetic Integration | Website

Imandra is a cloud-native automated reasoning system powering next-generation tools for the design, verification and validation of critical systems. Imandra's logic is computational, built on a formal semantics for a pure, higher-order subset of OCaml (and its sibling ReasonML), and integrated with powerful reasoning automation. Imandra's Reasoning as a Service APIs make it easy to integrate Imandra into bespoke products and verification toolchains.

Imandra has many advanced features, including first-class computable counterexamples, symbolic model checking, support for polymorphic higher-order recursive functions, automated induction, a powerful simplifier and symbolic execution engine with lemma-based conditional rewriting and forward-chaining, first-class state-space decompositions, decision procedures for algebraic data types, nonlinear arithmetic, and more.

In this talk, I'll give an overview of Imandra and (time permitting) demo some exciting applications, including the formal verification of financial algorithms (dark pool matching logics), RobotOS nodes, autonomous vehicle controllers, and even web apps. To install Imandra, visit http://docs.imandra.ai/. All examples I'll present are available to run and experiment with in the browser at http://try.imandra.ai/.


Sriharsha Etigowni: I Look at Physics and Predict Control Flow! Just-Ahead-Of-Time Controller Recovery

Rutgers University

Recent major attacks against unmanned aerial vehicles (UAVs) and their controller software necessitate domain-specific cyber-physical security protection. Existing offline formal methods for verifying (untrusted) controller code usually face state-space explosion. Runtime monitors for cyber-physical UAVs, on the other hand, often lead to too-late notifications about unsafe states, making timely recovery of safe operation impossible.

In this talk, we present a just-ahead-of-time control flow predictor and proactive recovery mechanism for UAVs. The execution state of the flight controller is monitored, and future control flows are predicted ahead of time based on the UAV's physical dynamics. We emulate the UAV's physical dynamical model to predict future sensor measurements (controller inputs) and the feasible upcoming controller execution paths. Rather than performing a traditional complete exploration, we explore only up to a time horizon and prune future unreachable states, driving the model-checking exploration away from them. This selective model checking saves computational time and lets verification stay ahead of execution by concentrating only on relevant upcoming control flows. Just-ahead-of-time verification explores the future control flows in parallel, ahead of the UAV's actual operation by some time margin. This margin accommodates the operator's feedback latency before the actual execution reaches the identified unsafe state, so the operator's countermeasures can be deployed proactively when an unsafe state lies ahead.
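
A minimal sketch of the just-ahead-of-time idea, with a toy control-flow graph and toy dynamics invented for illustration (not the flight controller from the talk): exploration proceeds only up to a time horizon, and branches whose guards cannot hold under the predicted physical state are pruned.

```python
# Bounded-horizon exploration of controller paths, pruning branches whose
# guard is infeasible under the emulated physical state.
from collections import deque

# Control-flow graph: node -> [(guard(state), next_node)]
CFG = {
    "sense":  [(lambda s: True, "decide")],
    "decide": [(lambda s: s["alt"] < 5.0,  "climb"),
               (lambda s: s["alt"] >= 5.0, "cruise")],
    "climb":  [(lambda s: True, "sense")],
    "cruise": [(lambda s: True, "sense")],
}

def step(state, node):                      # emulated physical dynamics
    dalt = 1.0 if node == "climb" else 0.0
    return {"alt": state["alt"] + dalt}

def reachable(state0, horizon):
    seen, frontier = set(), deque([(state0["alt"], "sense", 0)])
    while frontier:
        alt, node, t = frontier.popleft()
        if (alt, node) in seen or t >= horizon:
            continue                        # prune: already explored / too far
        seen.add((alt, node))
        s = {"alt": alt}
        for guard, nxt in CFG[node]:
            if guard(s):                    # prune infeasible branches
                s2 = step(s, nxt)
                frontier.append((s2["alt"], nxt, t + 1))
    return seen

# Paths through "climb" are never explored when altitude is already high.
print(sorted(n for _, n in reachable({"alt": 8.0}, horizon=6)))
```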

[slides]


Aaron Kane: Towards Formalizing the System Safety Argument

Edge Case Research

The increasing complexity of safety-critical cyber-physical systems and their operating environments, especially autonomous vehicles, demands increasingly sophisticated safety analysis and verification for their safe use. The scale of both the systems themselves and the environments they may operate in leads to a complexity that existing standards and processes do not manage well. While performing safety analysis and assessments for many different safety-critical cyber-physical systems, including autonomous ground vehicles, we have begun to construct a process that integrates system and safety concept modeling into the safety argument, providing the opportunity for more formalism and tool support around the overall system safety argument, from concept to verification.

This talk discusses our current system safety analysis process and underlying ideas, highlighting challenges we have identified while trying to integrate additional structure, modeling, and formalisms into the safety analysis and argument process. We also look at how traditional and formal V&V techniques integrate with these types of safety case arguments.


Jim Kapinski: Training and Verification of Learning-Enabled CPS

Toyota

Cyber-physical systems (CPSs) are used in many mission critical applications, such as automobiles, aircraft, and medical devices; therefore, it is vital to ensure that these systems behave correctly. Designs for modern CPSs often include learning-enabled (LE) components, such as neural networks, the behaviors of which emerge as a result of processing training data; however, it is difficult to design and verify LE aspects of CPS designs to achieve a desired level of assurance. To address these challenges, new training and analysis techniques are needed to increase confidence in LE CPS designs. This talk presents new approaches to address these challenges, including a falsification-based approach, which uses best-effort methods to automatically identify system behaviors that fail to satisfy requirements. In addition, the talk describes an approach that provides a formal proof of correctness for the system by learning candidate safety certificates from simulation traces, which are then confirmed using automated reasoning tools. Lastly, the talk will touch on a new approach to perform training of the LE components in a manner that is guaranteed to satisfy safety properties. The new techniques are presented in the context of autonomous driving application examples.
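
To make the falsification-based approach concrete, here is a minimal sketch under invented dynamics and an invented requirement (not Toyota's tools or models): a numerical optimizer searches over initial conditions for a trace that minimizes a robustness score, and a negative score is a concrete counterexample violating the requirement.

```python
# Minimal falsification sketch: best-effort search for a requirement
# violation in a toy closed-loop system.
import numpy as np
from scipy.optimize import minimize

def simulate(x0, steps=50, dt=0.1):
    """Toy closed loop: an aggressive (stand-in) learned controller tracks
    speed 1.5 but overshoots when the initial error is large."""
    x, trace = float(x0), []
    for _ in range(steps):
        u = 15.0 * (1.5 - x)           # high-gain control -> overshoot
        x = x + dt * u
        trace.append(x)
    return np.array(trace)

def robustness(x0):
    # Requirement: always(speed < 2.0); robustness = worst-case margin,
    # negative iff the requirement is violated somewhere on the trace.
    return float(np.min(2.0 - simulate(x0[0])))

res = minimize(robustness, x0=np.array([1.2]), method="Nelder-Mead",
               bounds=[(0.0, 3.0)])     # best-effort search, not a proof
print(f"worst initial speed {res.x[0]:.2f} -> robustness {res.fun:.3f}")
if res.fun < 0:
    print("falsified: found a trace exceeding the speed limit")
```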

[slides]


Christof Budnik: Fertilized Testing of Train Braking Systems using Formal Models as Test Oracles

Siemens

Train control technology enhances the safety and efficiency of railroad operation by safeguarding the motion of trains to prevent them from leaving designated areas of operation and colliding with other trains. The use of formal methods, such as theorem proving and model checking, can support the analysis of such systems by proving their safety properties. Another crucial part of quality assurance for industrial products is software testing. However, in most cases these approaches are applied separately from each other.

This presentation reports on how testing can be fertilized by using formal models as test oracles, and identifies challenges to overcome in order to mutually benefit from their integration. The formal model can be used to derive a reference implementation that serves as strong evidence of correctness for a hand-crafted implementation under test. Hand-crafted code may be preferred because it includes additional considerations not expressed in the formal model, such as optimizations, requirements tracing, or logging artifacts, or it may already exist from previous projects and have been proven in the field for several years. A guided exploration of the formal model then enables the generation of interesting test vectors for the hand-crafted system, along with coverage criteria over the generated test vectors that define stopping criteria for testing.
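
A minimal sketch of the oracle pattern, with a toy braking rule and a deliberately seeded bug (none of the formulas are from the actual train braking system): the reference implementation derived from the formal model adjudicates generated test vectors against the hand-crafted implementation, and divergences localize defects.

```python
# Differential testing against a model-derived oracle (toy braking curve,
# all formulas hypothetical).
import itertools

def reference_brake(speed, distance):
    """Oracle derived from the formal model: brake iff the stopping
    distance at deceleration 1.2 m/s^2 reaches the movement authority."""
    return speed * speed / (2.0 * 1.2) >= distance

def handcrafted_brake(speed, distance):
    # Production-style code path with extra concerns (logging elided);
    # seeded bug: the safety margin is applied on the wrong side for
    # slow trains.
    margin = 5.0 if speed > 10.0 else -5.0
    return speed * speed / 2.4 >= distance - margin

# Guided-exploration stand-in: sweep a grid of generated test vectors and
# report divergences from the oracle.
failures = [(v, d) for v, d in itertools.product(range(0, 41, 5),
                                                 range(10, 201, 10))
            if reference_brake(v, d) != handcrafted_brake(v, d)]
print(f"{len(failures)} divergent vectors, e.g. {failures[:3]}")
```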


Nathan Fulton: Verifiably Safe Reinforcement Learning

MIT-IBM Watson AI Lab | Website

Designers of self-driving cars and other autonomous cyber-physical systems must provide compelling evidence that their use of learning algorithms is safe and robust. Formal verification provides a high degree of confidence in safe system operation, but only if reality matches the verified model. Although a good model will be accurate most of the time, even the best models are incomplete. This is especially true for autonomous vehicles and other cyber-physical systems, because high-fidelity physical models of these systems are expensive to develop and often intractable to verify. Conversely, reinforcement learning-based controllers are lauded for their flexibility in unmodeled environments, but do not provide guarantees of safe operation. This talk presents an approach to provably safe learning that provides the best of both worlds: the exploration and optimization capabilities of learning along with the safety guarantees of formal verification.
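
A minimal sketch of the combination, with a hypothetical monitor and dynamics rather than the talk's verified dL models: the learner proposes actions freely, but a safety monitor replaces any action that would break the model-based guarantee with a verified fallback, so exploration never leaves the safe envelope.

```python
# Shielded exploration: screen every proposed action against a safety
# condition; fall back to a provably safe action otherwise.
import random

def safe(pos, vel, brake=1.0, wall=10.0):
    # Verified-style condition: braking now can still stop before the wall.
    return pos + vel * vel / (2.0 * brake) < wall

def fallback_action():
    return -1.0                         # always brake: provably safe action

def shielded_step(pos, vel, proposed_accel, dt=0.1):
    nxt_vel = max(0.0, vel + dt * proposed_accel)
    nxt_pos = pos + dt * nxt_vel
    if safe(nxt_pos, nxt_vel):          # guarantee preserved: allow explore
        return proposed_accel
    return fallback_action()            # sandbox the learner otherwise

pos, vel = 0.0, 0.0
for _ in range(100):
    a = random.uniform(-1.0, 1.0)       # stand-in for an RL policy's action
    a = shielded_step(pos, vel, a)
    vel = max(0.0, vel + 0.1 * a)
    pos += 0.1 * vel
assert pos < 10.0                       # safety held throughout learning
print(f"final position {pos:.2f} (wall at 10.0), safety maintained")
```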


Chris McKinnon

University of Toronto

Stochastic MPC provides a powerful tool for control that guarantees that safety constraints are satisfied with high probability in the face of model uncertainty. A key ingredient in the validity of these guarantees is an accurate estimate of model uncertainty. While there are many probabilistic methods for modelling robot dynamics that provide such an estimate, applying these methods when a robot can be deployed in an arbitrary number of novel operating conditions remains a challenge. In this talk, we present an overview of the approaches we have tried to address this problem, along with their benefits and challenges when applied to a 900 kg ground robot using a stereo camera for localisation. In particular, we present approaches for adapting and verifying our models online to ensure that the safety requirements of the controller will be satisfied with high probability.
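
The constraint-tightening mechanism behind such guarantees can be sketched as follows (toy numbers, not the robot's model): given a Gaussian one-step prediction, a chance constraint P(x <= x_max) >= 1 - eps becomes a deterministic constraint against a quantile-tightened bound, so larger model uncertainty forces more conservative plans.

```python
# Minimal sketch of chance-constraint tightening in stochastic MPC.
from statistics import NormalDist

def tightened_bound(x_max, sigma, eps=0.05):
    """Enforce P(x <= x_max) >= 1 - eps for x ~ N(mu, sigma^2) by planning
    mu against the stricter deterministic bound x_max - z_{1-eps} * sigma."""
    z = NormalDist().inv_cdf(1.0 - eps)
    return x_max - z * sigma

x_max = 2.0                      # e.g. lateral path-deviation limit (m)
for sigma in (0.05, 0.20, 0.60): # model uncertainty in each new terrain
    print(f"sigma={sigma:.2f} -> plan against x <= "
          f"{tightened_bound(x_max, sigma):.3f}")
```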

[slides]