## Talk Abstracts

Staged Concurrent Program Analysis (invited talk), Nishant Sinha, NEC Labs

Bio. Nishant Sinha is a researcher in the System Analysis and Verification group at NEC Labs, Princeton NJ, USA. He obtained his Ph.D. in Computer Engineering from Carnegie Mellon University in 2007. His interests include program analysis, compositional methods, rewriting and decision procedures.

Is Java ready for Real-time? (invited talk), Jan Vitek, Purdue University

This talk will review advances in Java technology for hard real-time and safety-critical systems. I will give an overview of the status of the Real-time Specification for Java and the Safety Critical Java specification, and discuss the latest advances in implementation techniques. I will conclude with some challenges for the verification community.

Bio. Jan Vitek is a Professor in Computer Science at Purdue. His work is on different aspects of programming language technologies with applications to real-time computing. Prof. Vitek led the Ovm project which resulted in the first open source real-time Java virtual machine to be flight-tested in 2005. He has since investigated virtual machine technologies for safety-critical embedded systems. He has been general chair of ISMM as well as program chair of ECOOP, VEE, Coordination, and TOOLS. He is a member of the JSR-302 Safety Critical Java expert group and of the IFIP 2.4 working group on compilers and software technologies.

Challenges Implementing a HOL System in Haskell or: How I Learned to Stop Worrying and Love the Monad, Evan Austin, The University of Kansas

Traditionally, LCF-style theorem provers have been implemented in the ML language or one of its derivatives. HaskHOL is an in-development Haskell library for automated theorem proving that aims to break with this tradition. Based on HOL Light, the goal of HaskHOL is to bring HOL reasoning to Haskell natively while exploring the advantages and disadvantages of using a lazy and pure language for implementation.
This short talk will address the major differences between OCaml and Haskell and how they impact the implementation of the system's kernel. Additional attention will be paid to how these differences ripple and magnify through the code base as more features and libraries are added to the system, and to how advanced Haskell techniques have to be applied to keep performance competitive with the original implementation.
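As a concrete, heavily simplified illustration of the LCF kernel style at issue, the sketch below shows how an abstract theorem type and pure error handling might look in Haskell. The types and rules here are our own toy example, not HaskHOL's actual API; in a real system the `Thm` constructor would be hidden behind a module boundary so that every `Thm` value is a theorem by construction.

```haskell
-- A toy LCF-style kernel (illustrative; not HaskHOL's actual API).
data Term = Var String | Imp Term Term
  deriving (Eq, Show)

-- In a real kernel this constructor is not exported,
-- so clients cannot forge theorems.
newtype Thm = Thm Term
  deriving Show

concl :: Thm -> Term
concl (Thm t) = t

-- ASSUME: from any term p, form the (hypothetical) theorem p.
-- A real kernel also tracks hypothesis sets; omitted for brevity.
assume :: Term -> Thm
assume = Thm

-- MP (modus ponens): from |- p ==> q and |- p, derive |- q.
-- Failure is a pure Either value, where HOL Light's OCaml kernel
-- would raise an exception.
mp :: Thm -> Thm -> Either String Thm
mp (Thm (Imp p q)) (Thm p')
  | p == p' = Right (Thm q)
mp _ _ = Left "mp: antecedent does not match"

main :: IO ()
main =
  case mp (assume (Imp (Var "p") (Var "q"))) (assume (Var "p")) of
    Right th  -> print (concl th)   -- prints: Var "q"
    Left err  -> putStrLn err
```

Purity forces the error-handling design choice into the types: where the OCaml kernel can rely on exceptions, the Haskell version must thread failure (here via `Either`, in practice often a monad) through every derived rule.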

Least and greatest fixed points, David Baelde, University of Minnesota

The notions of least and greatest fixed points, corresponding to inductive and coinductive definitions, are very widely used for specifying and reasoning with proof assistants based on type or proof theory. They are usually kept in the background in the verification community, which makes it difficult for the two communities to benefit from each other. This can be understood since the proof theoretical treatment of fixed points is a delicate question, and a poorly designed logic only obfuscates specifications and proofs. We shall see that this is not unavoidable, through the presentation of an intuitionistic logic equipped with least and greatest fixed points, and the study of its support for reasoning about finite automata.
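To fix intuitions, inductive and coinductive definitions correspond to least and greatest fixed points of a functor, each characterized by a universal property (fold and unfold). The following Haskell sketch is our own illustration of that correspondence, not the proof-theoretic treatment presented in the talk:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Least vs. greatest fixed points of a functor f, sketched in Haskell.
newtype Mu f = In  (f (Mu f))  -- least fixed point: finite, inductive data
newtype Nu f = Out (f (Nu f))  -- greatest fixed point: possibly infinite data

-- Base functor whose fixed points are the natural numbers.
data NatF x = Zero | Succ x deriving Functor

-- Fold (catamorphism): the universal property of the least fixed point.
fold :: Functor f => (f a -> a) -> Mu f -> a
fold alg (In t) = alg (fmap (fold alg) t)

-- Unfold (anamorphism): the universal property of the greatest fixed point.
unfold :: Functor f => (a -> f a) -> a -> Nu f
unfold coalg x = Out (fmap (unfold coalg) (coalg x))

-- An inductive value and its interpretation.
two :: Mu NatF
two = In (Succ (In (Succ (In Zero))))

toInt :: Mu NatF -> Int
toInt = fold (\n -> case n of Zero -> 0; Succ k -> k + 1)

-- A coinductive value: the "infinite numeral", observable step by step.
omega :: Nu NatF
omega = unfold Succ ()

isSucc :: Nu NatF -> Bool
isSucc (Out (Succ _)) = True
isSucc (Out Zero)     = False

main :: IO ()
main = print (toInt two, isSucc omega)   -- prints: (2,True)
```

In Haskell the two fixed points coincide as types because of laziness; the logical setting of the talk keeps them distinct, which is exactly what makes the proof theory delicate.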

Translucid Contracts: Expressive Specification and Modular Verification for Aspect-Oriented Interfaces, Mehdi Bagherzadeh, Iowa State University

As aspect-oriented programming techniques become more widely used, their use in critical systems, including safety-critical systems such as aircraft and mission-critical systems such as telephone networks, will become more widespread. However, careful reasoning about aspect-oriented code seems difficult with standard specification techniques. The difficulty stems from control effects, such as advice that does not proceed to the original join point, which most common specification techniques do not make convenient to specify. In this work we give a simple and understandable specification technique, which we call translucid contracts, that not only allows programmers to write modular specifications for advice and advised code, but also allows them to reason about the code's control effects. We show that translucid contracts support modular verification of typical interaction patterns used in aspect-oriented code. We also show that translucid contracts allow interesting control effects to be understood and enforced. Our proposed specification and verification approach is proved sound.

A dynamic analysis to explore scopes of programmer reasoning, Devin Coughlin, University of Colorado at Boulder

Static analysis has been used to find a wide variety of classes of bugs in programs. However, in order to be useful, a bugfinding tool must be able to convince the user that it has found an actual bug, i.e. that the bug is not a false positive. In this talk, we present preliminary results on a dynamic analysis that instruments running programs to determine how programmers reason about their programs. This analysis calculates the "distance" between operations that guard against runtime errors and the operations which could have directly caused those errors. We hope to use this analysis to characterize the different scopes of reasoning that programmers employ to understand their program and thus help bug finding tools better explain their results.

The K Framework and a Formal Semantics of C, Chucky Ellison, University of Illinois Urbana-Champaign

Using the K Framework, we give an executable formal semantics for the C programming language. The semantics has been thoroughly tested against the GCC torture test suite and successfully passes over 95% of its 720 test programs. The semantics yields an interpreter, a debugger, and a state-space search tool and model checker "for free". The semantics is shown capable of automatically finding program errors, both statically and at runtime.

Composition in the State-Based Rosetta Domain, Nicolas Frisby, The University of Kansas

We present for critique preliminary work on codifying the semantics of state-based machines and their composition in the Rosetta specification language. We consider composition operators such as conjunction, disjunction, asynchronous concurrency, synchronous concurrency, and mixed concurrency, as well as more problem-specific refinements thereof. The predominant Rosetta semantic object for state-based machines is the state-based domain, whose primary definitions are an abstract state space and an abstract state-transition relation per machine. We intend to specify the semantics of the various state-based composition operators using the state-based domain's vocabulary in order to (1) support later work establishing correspondences with other formalisms, e.g. process algebras, RTL design, etc.; and (2) support the usability of Rosetta by factoring out patterns in the semantics as expressive combinators to be included in the language or its standard library.

Increasing path-coverage of predictive analysis, Robert Frohardt, University of Colorado at Boulder

Predictive analysis is a powerful technique for searching for bugs in concurrent programs. By combining the relative simplicity of analyzing dynamic traces with tools from static analysis, predictive analysis searches for possibly erroneous schedules which 'almost happened' during a real execution. In recent years, much research has demonstrated and improved the effectiveness of this technique. However, since predictive analysis is based on dynamic traces, it considers only a single path through the concurrent control flow graph (CFG). This means that while it has been demonstrated to be an effective bug-finding technique, its utility as a verification technique is currently limited. We review the current state of the art of predictive analysis, highlight its strengths and limitations, and describe our preliminary research on increasing its power. We describe possible directions for improving the path coverage of predictive analysis by drawing on recent results from path-sensitive sequential analysis, such as satisfiability modulo path programs (SMPP).

Compositionality Entails Sequentializability, Pranav Garg, University of Illinois Urbana-Champaign

We show that any concurrent program with a compositional proof can be effectively translated to a sequential program. More precisely, we show that concurrent programs with a rely-guarantee proof with a set of auxiliary variables (a la Jones) can be effectively translated to a sequential program whose variables are a bounded number of copies of the variables in the concurrent program.
The proposed sequential program precisely captures the semantics of the concurrent program and has no under-approximation restriction (like context-bounding). More strongly, given a rely-guarantee proof of the concurrent program, there is a back-and-forth parsimonious reduction between the proofs of the sequential and concurrent programs. We use our result to automatically verify concurrent programs like Bakery and Peterson by proving their sequential counterparts with the automatic predicate abstraction tool SLAM. Further, using the parsimonious proof reduction, we implement rely-guarantee verifiers on top of the sequential program verifier Boogie.
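For readers unfamiliar with rely-guarantee reasoning "a la Jones", the standard judgment and parallel composition rule (in a common textbook notation, not necessarily the talk's) look as follows; the rely `R` and guarantee `G` are binary relations on states:

```latex
% C, started in a state satisfying P, with every environment step
% contained in R, makes only steps in G and terminates in Q:
\[
  R,\, G \vdash \{P\}\; C \;\{Q\}
\]
% The parallel rule, which makes proofs compositional: each thread's
% guarantee is added to the other's rely.
\[
  \frac{R \cup G_2,\; G_1 \vdash \{P_1\}\, C_1 \,\{Q_1\}
        \qquad
        R \cup G_1,\; G_2 \vdash \{P_2\}\, C_2 \,\{Q_2\}}
       {R,\; G_1 \cup G_2 \vdash \{P_1 \wedge P_2\}\; C_1 \parallel C_2 \;\{Q_1 \wedge Q_2\}}
\]
```

It is the existence of such a compositional (thread-modular) proof that the result above exploits to bound the number of variable copies in the sequential translation.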

Automating Cut-off for Multi-Parameterized Systems, Youssef Hanna, Iowa State University

Verifying that a parameterized system satisfies certain desired properties amounts to verifying an infinite family of system instances. This problem is undecidable in general, and as such a number of sound but incomplete techniques have been proposed to address it. Existing techniques typically focus on parameterized systems with a single parameter (i.e., systems where the number of processes of exactly one type depends on the parameter); however, many systems in practice are multi-parameterized, where multiple parameters specify the numbers of different types of processes in the system. In this talk, I will present an automatic verification technique for multi-parameterized systems and show that it can be applied to systems irrespective of their communication topology. I will also present a prototype realization of our technique in our tool Golok, and demonstrate its practical applicability using a number of multi-parameterized systems.

Specification-based testing for refinement, Teme Kahsai, The University of Iowa

In this talk, we present a theory for the evaluation of test cases with respect to formal specifications. In particular, we use the specification language CSP-CASL to define and evaluate black-box tests for reactive systems. CSP-CASL allows one to formalize computation systems in a combined algebraic / process algebraic notation. Using loose semantics and three-valued test oracles, our approach is well-suited to deal with the refinement of specifications. In a structured development process of computational systems, abstract specifications are gradually refined into more concrete ones. With our approach, it is possible to develop test cases already from very abstract and basic specifications, and to reuse them later on in more refined systems.
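As a rough illustration of the three-valued oracles mentioned above (our own sketch, not CSP-CASL's actual machinery), a loose specification can leave some behavior unconstrained; the oracle then reports an inconclusive verdict there rather than a failure, which is what lets abstract-level test cases survive refinement:

```haskell
-- A three-valued test oracle against a loose specification
-- (illustrative sketch only; not the CSP-CASL semantics).
data Verdict = Pass | Fail | Inconclusive
  deriving (Eq, Show)

-- A loose specification maps an input to Just the required output,
-- or Nothing where the behavior is (still) unconstrained. Refinement
-- turns some Nothings into Justs; tests written against the abstract
-- specification remain reusable.
type LooseSpec a b = a -> Maybe b

oracle :: Eq b => LooseSpec a b -> a -> b -> Verdict
oracle spec input observed =
  case spec input of
    Nothing -> Inconclusive          -- spec says nothing: cannot fail
    Just expected
      | expected == observed -> Pass
      | otherwise            -> Fail

main :: IO ()
main = do
  -- Abstract spec: doubling is only specified for positive inputs.
  let spec n = if n > (0 :: Int) then Just (2 * n) else Nothing
  print (oracle spec 3 6)      -- Pass
  print (oracle spec 3 7)      -- Fail
  print (oracle spec (-1) 0)   -- Inconclusive
```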

Simulation-based Verification of Hardware with Strategies, Michael Katelman, The University of Illinois at Urbana-Champaign

Functional verification of digital hardware is a critical engineering problem. Arduous and time consuming, the verification process can in many cases require twice as many engineers as the design process. In this talk I describe a meta-language for writing hardware testbenches that we are developing. It aims to directly address problems in contemporary hardware verification methodology, which is almost entirely simulation-based and where engineers engaged in the verification process spend their time writing complex testbenches. Instead of treating the testbench as an environment operating at the same level as the device under test, our language makes simulation contexts first-class objects that can be manipulated programmatically at a higher level. In addition, the simulation contexts are symbolic, so that concrete behaviors can be resolved automatically with an SMT solver; the whole language is then embedded into a general-purpose declarative language. I will describe the language, present examples of its novelty and efficacy, and demonstrate its feasible implementation via a tool we are developing for Verilog designs, called vlogsl.

Developing a competitive SAT solver with correctness verified, Duckki Oe, The University of Iowa

Satisfiability (SAT) solvers are widely used in automated theorem proving applications as their performance has improved significantly. However, because most solvers are written in C and their code is quite sophisticated, one cannot rule out the possibility of bugs. There have been some efforts to have solvers produce proofs that can be checked independently. In practice, some users are worried about the additional memory needed to store proofs, which may crash a solver on a difficult instance that could otherwise be solved without producing a proof. They may therefore prefer a slower (but still practical) solver that is statically verified. In this talk, a SAT solver called "versat" will be presented. It is written in a functional programming language called "Guru". The Guru language is dependently typed, and desired properties can be specified and proved through Guru's type system.

Decidable logics combining heap structures and data, Xiaokang Qiu, University of Illinois at Urbana-Champaign

We define a new logic, Strand, that allows reasoning about heap-manipulating programs using deductive verification and SMT solvers. Strand ("STRuctures ANd Data") logic formulas express constraints involving heap structures and the data they contain; they are defined over a class of pointer structures R given by MSO-defined relations over trees, and are of the form "exists x1,...,xk forall y1,...,ym phi(x1,...,xk, y1,...,ym)", where phi is a monadic second-order logic (MSO) formula with additional quantification that combines structural constraints as well as data constraints, but where the data constraints are only allowed to refer to x1,...,xk, y1,...,ym.
The salient aspects of the logic are: (a) the logic is powerful, allowing existential and universal quantification over the nodes, and complex combinations of data-constraints and structural constraints; (b) given a linear block of statements s manipulating the heap and data, and a pre-condition P and a post-condition Q for it written using a Boolean combination of "exists x1,..xk, phi" and "forall y1,..yk phi" formulas, checking the validity of the associated Hoare-triple "{P} s {Q}" reduces to satisfiability of a Strand formula, and (c) there is a powerful fragment of Strand for which satisfiability is decidable, where the decision procedure works by combining the theory of MSO over trees and the quantifier-free theory of the underlying data-logic.
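As an example of the quantifier shape described above (our own illustration, not one taken from the paper), sortedness of a linked list can be stated with no existentials and two universally quantified nodes, where reachability is the MSO-definable structural constraint and the comparison is the data constraint referring only to the quantified variables:

```latex
% y1 ->* y2 : structural (MSO-definable reachability along next-pointers)
% data(y1) <= data(y2) : data constraint over quantified variables only
\[
  \forall y_1\, \forall y_2.\;
    \bigl( y_1 \rightarrow^{*} y_2 \bigr)
    \;\Rightarrow\;
    \mathit{data}(y_1) \le \mathit{data}(y_2)
\]
```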
We demonstrate the effectiveness and practicality of the logic by checking verification conditions generated in proving properties of several heap-manipulating programs, using a tool that combines an MSO decision procedure over trees (Mona) with an SMT solver for integer constraints (Z3).
This is joint work with P. Madhusudan and G. Parlato.

Comparing Proof Systems for Linear Real Arithmetic with LFSC, Andrew Reynolds, The University of Iowa

LFSC is a high-level declarative language for defining proof systems and proof objects for virtually any logic. One of its distinguishing features is its support for computational side conditions on proof rules. Side conditions facilitate the design of proof systems that reflect closely the sort of high-performance inferences made by SMT solvers. This paper investigates the issue of balancing declarative and computational inference in LFSC focusing on (quantifier-free) Linear Real Arithmetic. We discuss a few alternative proof systems for LRA and report on our comparative experimental results on generating and checking proofs in them.

Algebraic-Geometric Techniques for Verification of Non-Linear Hybrid Systems, Sriram Sankaranarayanan, University of Colorado at Boulder

In this talk, we present some recent progress in the use of algebraic-geometric methods for automatically inferring invariants for non-linear hybrid systems. Hybrid system models combine continuous state evolution through flows with discrete instantaneous jumps. They are commonly used to model Cyber-Physical systems wherein discrete software systems interact with a continuous physical environment. Our approach attempts to unify two classic "views" of invariance: (1) the inductive-invariants approach used for discrete programs (due to Floyd & Hoare) and (2) the standard definition of an invariant manifold used in dynamical systems theory.
We show that the unified notion allows us to formulate invariants for hybrid systems as a fixed point of a monotone operator. Abstract interpretation techniques used to over-approximate fixed points for discrete systems can also carry over to hybrid systems for the case of non-linear differential equations with polynomial right hand sides. We show how termination of this abstract interpretation can be enforced. We present some preliminary results that demonstrate the ability of our technique to derive interesting polynomial equality invariants for some non-linear benchmarks. Finally, we will sketch some of our ongoing work that extends our approach to handle polynomial inequalities.
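For concreteness, the unified inductive conditions can be written down for a polynomial equality invariant of a single mode (a standard formulation in our own notation, not necessarily the talk's): given a flow $\dot{x} = f(x)$ and a jump relation $J$, the set $\{x \mid p(x) = 0\}$ is invariant if

```latex
\[
  \mathit{Init} \subseteq \{x \mid p(x) = 0\}
  \qquad \text{(initiation)}
\]
\[
  p(x) = 0 \;\Rightarrow\; \dot{p}(x) = \nabla p(x) \cdot f(x) = 0
  \qquad \text{(consecution along flows)}
\]
\[
  p(x) = 0 \,\wedge\, (x, x') \in J \;\Rightarrow\; p(x') = 0
  \qquad \text{(consecution across jumps)}
\]
```

The first and third conditions are the familiar Floyd-Hoare obligations for discrete transitions; the middle one, phrased through the Lie derivative $\nabla p \cdot f$, is the dynamical-systems side of the unification.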
Note: The talk will be self-contained as far as possible. No prior knowledge of differential equations/dynamical systems theory will be assumed.

Matching Logic: An Alternative to Hoare Logic, Andrei Stefanescu, University of Illinois Urbana-Champaign

Matching Logic is a framework for defining axiomatic semantics for programming languages, inspired by operational semantics. Matching Logic specifications are particular first-order formulae with constrained algebraic structure, called patterns. Program configurations satisfy patterns iff they match their algebraic structure and satisfy their constraints. To illustrate our technique, we implement an automatic Matching Logic verifier for a subset of C, called MatchC, using a rewrite engine (Maude). With this tool we are able to automatically prove the correctness of programs that manipulate lists or trees, as well as the most popular sorting algorithms, such as merge-sort and quick-sort.

Higher-Order Encodings in Intensional Type Theory, Eddy Westbrook, Rice University

Name-binding constructs, such as the $\lambda$-abstraction $\lambda x.\, t$ of the $\lambda$-calculus, are well known to be cumbersome to encode and reason about in logic and type theory, making it tedious to prove properties about programming languages and other formalisms that use name binding. Although there has been much research on this problem, most prior approaches are extensional, or semantic, making it difficult to use such constructs in intensional, syntactic theories like the Calculus of Inductive Constructions (CiC). In this work we introduce a new, intensional approach to higher-order encodings, a powerful technique for encoding name bindings that allows names to be associated with meta-language types, and show how this approach can be integrated with CiC to form the Calculus of Nominal Inductive Constructions (CNIC). CNIC represents the first theory that combines higher-order encodings with many of the features of CiC, including polymorphism. We demonstrate the usefulness of CNIC by investigating a number of small examples.
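For readers unfamiliar with higher-order encodings, the following Haskell fragment shows the basic idea in its simplest (extensional) form, which is the flavor the talk contrasts with CNIC's intensional treatment; the names are our own illustration. The object-language binder is represented by a meta-language function, so capture-avoiding substitution comes for free:

```haskell
-- Higher-order abstract syntax (HOAS) in its simplest form
-- (illustrative only; CNIC's intensional treatment is more refined).
data Exp = I Int            -- an observable base value, for testing
         | Lam (Exp -> Exp) -- object-level lambda, encoded higher-order
         | App Exp Exp

-- Weak head normalization: beta reduction needs no substitution code,
-- because applying the binder is just meta-level function application.
whnf :: Exp -> Exp
whnf (App f a) = case whnf f of
                   Lam body -> whnf (body a)
                   f'       -> App f' a
whnf e = e

-- Observe a base value. Note Exp cannot derive Eq or Show: it contains
-- functions, which is exactly the intensional-reasoning obstacle.
toInt :: Exp -> Maybe Int
toInt (I n) = Just n
toInt _     = Nothing

main :: IO ()
main = print (toInt (whnf (App (Lam (\x -> x)) (I 42))))  -- prints: Just 42
```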

Smarter Load Test Suite Generation with Input Selection Support, Pingyu Zhang, University of Nebraska, Lincoln

The objective of load tests is to validate whether a system’s performance (e.g., response time, throughput, resource utilization) is acceptable under production, projected, or worst-case inputs. Existing load test generation techniques have focused on increasing the size or rate of the input but provide no support for choosing the particular input values which can dramatically impact how a system may perform. To address this, we introduce an approach for input selection based on the observation that the most effective load tests in revealing performance anomalies often iterate extensively, through loops or via recursion, as they create or traverse data structures, or access or consume resources. We leverage this observation by adapting symbolic execution approaches to perform an iterative-deepening beam-search of the set of program paths to discover those that lead to extensive iterative computation. The resulting path conditions are then compared and solved to generate a suite of load tests. An assessment of the approach reveals that it can efficiently generate load suites with diverse tests that induce program response times that are several times worse than those found with random input generation, and that it can scale to millions of inputs.
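The search strategy described above can be sketched generically. The Haskell fragment below is our own simplification: the actual approach searches symbolic program paths with iterative deepening, scoring paths by how much iterative computation they perform, whereas here states, successors, and scores are abstract parameters:

```haskell
import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

-- Generic beam search (illustrative sketch of the search component
-- only). At each depth, expand every state in the beam and keep the
-- `width` highest-scoring successors; the score stands in for
-- "amount of iterative computation along the path".
beamSearch :: Int          -- beam width
           -> Int          -- depth bound
           -> (a -> [a])   -- successor function (path extension)
           -> (a -> Int)   -- score (e.g. loop iterations on the path)
           -> a -> [a]
beamSearch width depth step score start = go depth [start]
  where
    go 0 beam = beam
    go d beam =
      let candidates = concatMap step beam
          best = take width (sortBy (comparing (Down . score)) candidates)
      in if null best then beam else go (d - 1) best

main :: IO ()
main =
  -- Toy state space: a path is summarized by an Int, extended by
  -- either +1 or *3; the score is the value itself.
  print (beamSearch 2 3 (\n -> [n + 1, n * 3]) id 1)  -- prints: [27,18]
```

Wrapping this in iterative deepening (re-running with increasing depth bounds) recovers the iterative-deepening beam search named in the abstract.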

Streett complementation made tight, Ting Zhang, Iowa State University

Automata complementation has wide applications in logic, formal languages and program verification. Automata with rich acceptance conditions, such as Rabin automata and Streett automata, are of particular importance as they can express properties much more naturally and succinctly. Obtaining tight complexity bounds on automata complementation has been difficult: for both Rabin complementation and Streett complementation, significant gaps existed between the best lower and upper bounds. In previous work, we established an essentially tight lower bound for Rabin complementation. In this talk, we show a tight bound for Streett complementation, improving on both the presently known lower and upper bounds. We also show an improved upper bound for the complementation of Parity automata. This is joint work with Yang Cai at CSAIL, MIT.