## Seminar: Advanced Approaches in Software Verification (AASV)

Lecturer: Prof. Dr. Marie-Christine Jakobs

The seminar is no longer offered. It dealt with current research topics of the research group Semantics and Verification of Parallel Systems. Participants learned about classical and recent research in the area of software verification (i.e., model checking, program analysis, testing, etc.).

### Course Content

Under the guidance of your supervisor you will

- use the given literature and search for additional literature to become acquainted with your topic,
- prepare and give a presentation about your topic and afterwards discuss the topic with the other participants,
- write a scientific report, which provides a summary of your topic.

At the end of the seminar, students are able to autonomously familiarize themselves with a scientific topic and are able to present this topic to a heterogeneous audience orally and in written form.

More concretely, students can search for scientific literature and assess the relevance of the literature they find. They are able to identify the main content of a scientific publication and to evaluate it critically. Moreover, they can compare different scientific approaches. Students can explain and defend their topic and results to a heterogeneous audience in an oral presentation. Additionally, they are able to describe their topic and results in a scientific paper.

### Previous Knowledge

**Recommended.** Knowledge about computer science and mathematics taught in the first four terms of the bachelor's degree in computer science

**Helpful.** Participation in a course of the research group Semantics and Verification of Parallel Systems

## Seminar Topics in Past Terms

This seminar is about intermediate languages and specification languages for supporting verification. Intermediate languages typically simplify the verifier design, e.g., by restricting the language, mapping similar input statements to a single statement in the intermediate language, or making implicit behavior explicit. In contrast, specification languages allow one to specify properties of a program, e.g., its functional behavior or caller-callee contracts.
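To make the first idea concrete, here is a minimal Python sketch (written for this page; the toy statement forms are not taken from any of the languages below) that lowers compound assignments and for-loops onto a reduced statement set, so that a downstream analysis only has to handle plain assignments and while-loops:

```python
# Toy lowering pass in the spirit of CIL-style simplification: map several
# source constructs onto a smaller intermediate language. Statements are
# represented as nested tuples; all names here are illustrative.

def lower(stmt):
    """Recursively rewrite a statement tuple into the reduced IL."""
    kind = stmt[0]
    if kind == "aug_assign":            # x += e  becomes  x = x + e
        _, var, op, expr = stmt
        return ("assign", var, ("binop", op, ("var", var), expr))
    if kind == "for":                   # for(init; cond; step) body
        _, init, cond, step, body = stmt
        lowered_body = [lower(s) for s in body] + [lower(step)]
        return ("seq", [lower(init), ("while", cond, lowered_body)])
    if kind == "seq":
        return ("seq", [lower(s) for s in stmt[1]])
    return stmt                          # assign, while, ... already reduced

# for(i = 0; i < 10; i += 1) { s += i; }
prog = ("for", ("assign", "i", ("const", 0)),
        ("lt", ("var", "i"), ("const", 10)),
        ("aug_assign", "i", "+", ("const", 1)),
        [("aug_assign", "s", "+", ("var", "i"))])
print(lower(prog)[0])  # the for-loop became a "seq" of init + while
```

After lowering, a verifier needs transfer functions for only two statement forms instead of four.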

#### Intermediate Languages

**CIL: C Intermediate Language**

George C. Necula, Scott McPeak, Shree Prakash Rahul, Westley Weimer: CIL: Intermediate Language and Tools for Analysis and Transformation of C Programs. CC, LNCS 2304, pp. 213-228, Springer, 2002.

**CIVL-C: Concurrency Intermediate Verification Language**

Stephen F. Siegel, Manchun Zheng, Ziqing Luo, Timothy K. Zirkel, Andre V. Marianiello, John G. Edenhofner, Matthew B. Dwyer, Michael S. Rogers: CIVL: The Concurrency Intermediate Verification Language. SC, pp. 61:1-61:12, ACM, 2015.

**LLVM IR: Low Level Virtual Machine Intermediate Representation**

Chris Lattner, Vikram S. Adve: LLVM: A Compilation Framework for Lifelong Program Analysis & Transformation. CGO, pp. 75-88, IEEE, 2004.

#### Specification Languages

**FQL: FShell Query Language**

Andreas Holzer, Michael Tautschnig, Christian Schallhart, Helmut Veith: An Introduction to Test Specification in FQL. HVC, LNCS 6504, pp. 9-22, Springer, 2010.

**GOSPEL: Generic OCaml SPEcification Language**

Arthur Charguéraud, Jean-Christophe Filliâtre, Cláudio Lourenço, Mário Pereira: GOSPEL – Providing OCaml with a Formal Specification Language. FM, LNCS 11800, pp. 484-501, Springer, 2019.

**HILARE: High-Level ACSL Requirements with MetACSL**

Virgile Robles, Nikolai Kosmatov, Virgile Prevosto, Louis Rilling, Pascale Le Gall: Tame Your Annotations with MetAcsl: Specifying, Testing and Proving High-Level Properties. TAP, LNCS, pp. 67-185, Springer, 2019.

**JML: Java Modeling Language**

Patrice Chalin, Joseph R. Kiniry, Gary T. Leavens, Erik Poll: Beyond Assertions: Advanced Specification and Verification with JML and ESC/Java2. FMCO, LNCS 4111, pp. 342-363, Springer, 2005.

**Spec# Language**

Mike Barnett, K. Rustan M. Leino, Wolfram Schulte: The Spec# Programming System: An Overview. CASSIS, LNCS 3362, pp. 49-69, Springer, 2004.

K. Rustan M. Leino, Peter Müller: Using the Spec# Language, Methodology, and Tools to Write Bug-Free Programs. LASER Summer School, LNCS 6029, pp. 91-139, Springer, 2008.

**Specifications in the Eiffel Language**

Bertrand Meyer: Applying “Design by Contract”. Computer 25(10): 40-51, 1992.

#### Combined Intermediate and Specification Languages

**BoogiePL**

Michael Barnett, Bor-Yuh Evan Chang, Robert DeLine, Bart Jacobs, K. Rustan M. Leino: Boogie: A Modular Reusable Verifier for Object-Oriented Programs. FMCO, LNCS 4111, pp. 364-387, Springer, 2005.

**Silver: Viper’s Intermediate Language**

Peter Müller, Malte Schwerhoff, Alexander J. Summers: Viper: A Verification Infrastructure for Permission-Based Reasoning. VMCAI, LNCS 9583, pp. 41-62, Springer, 2016.

**WhyML**

Jean-Christophe Filliâtre, Andrei Paskevich: Why3 – Where Programs Meet Provers. ESOP, LNCS 7792, pp. 125-128, Springer, 2013.

This seminar is about tools for the verification and analysis of software programs. The tools differ in the supported programming languages and in the underlying analysis and verification techniques. In this seminar, we will look into the following tools.

#### Frameworks for Java Analysis

- Java PathFinder

Willem Visser, Klaus Havelund, Guillaume P. Brat, Seungjoon Park, Flavio Lerda: Model Checking Programs. Automated Software Engineering 10(2), pp. 203-232, 2003.

- OPAL

Michael Eichberg, Ben Hermann: A Software Product Line for Static Analyses: The OPAL Framework. SOAP@PLDI, pp. 2:1-2:6, ACM, 2014.

Dominik Helm, Florian Kübler, Michael Reif, Michael Eichberg, Mira Mezini: Modular Collaborative Program Analysis in OPAL. FSE, pp. 184-196, ACM, 2020.

- Soot

Patrick Lam, Eric Bodden, Ondřej Lhoták, and Laurie Hendren: The Soot Framework for Java Program Analysis: A Retrospective. Cetus Users and Compiler Infrastructure Workshop, 2011.

#### Frameworks for C Analysis

- Astrée

Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, Xavier Rival: The ASTRÉE Analyzer. ESOP, LNCS 3444, pp. 21-30, Springer, 2005.

Bruno Blanchet, Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, Xavier Rival: A Static Analyzer for Large Safety-Critical Software. PLDI, pp. 196-207, ACM, 2003.

- CPAchecker

Dirk Beyer, M. Erkan Keremoglu: CPAchecker: A Tool for Configurable Software Verification. CAV, LNCS 6806, pp. 184-190, Springer, 2011.

Dirk Beyer, Thomas A. Henzinger, Grégory Théoduloz: Program Analysis with Dynamic Precision Adjustment. ASE, pp. 29-38, IEEE, 2008.

- Frama-C

Pascal Cuoq, Florent Kirchner, Nikolai Kosmatov, Virgile Prevosto, Julien Signoles, Boris Yakobowski: Frama-C – A Software Analysis Perspective. SEFM, LNCS 7504, pp. 233-247, Springer, 2012.

Patrick Baudin, François Bobot, David Bühler, Loïc Correnson, Florent Kirchner, Nikolai Kosmatov, André Maroneze, Valentin Perrelle, Virgile Prevosto, Julien Signoles, Nicky Williams: The Dogged Pursuit of Bug-Free C Programs: The Frama-C Software Analysis Platform. Commun. ACM 64(8), pp. 56-68, 2021.

#### Verifiers for C Programs

- CBMC

Edmund M. Clarke, Daniel Kroening, Flavio Lerda: A Tool for Checking ANSI-C Programs. TACAS, LNCS 2988, pp. 168-176, Springer, 2004.

Edmund M. Clarke, Daniel Kroening, Karen Yorav: Behavioral Consistency of C and Verilog Programs Using Bounded Model Checking. DAC, pp. 368-371, ACM, 2003.

- ESBMC

Lucas C. Cordeiro, Bernd Fischer, João Marques-Silva: SMT-Based Bounded Model Checking for Embedded ANSI-C Software. IEEE TSE 38(4), pp. 957-974, 2012.

Mikhail Y. R. Gadelha, Felipe R. Monteiro, Jeremy Morse, Lucas C. Cordeiro, Bernd Fischer, Denis A. Nicole: ESBMC 5.0: An Industrial-Strength C Model Checker. ASE, pp. 888-891, ACM, 2018.

- Symbiotic

Jiri Slaby, Jan Strejcek, Marek Trtík: Checking Properties Described by State Machines: On Synergy of Instrumentation, Slicing, and Symbolic Execution. FMICS, LNCS 7437, pp. 207-221, Springer, 2012.

Marek Chalupa, Jan Strejcek, Martina Vitovská: Joint Forces for Memory Safety Checking. SPIN, LNCS 10869, pp. 115-132, Springer, 2018.

Marek Chalupa, Tomás Jasek, Jakub Novák, Anna Rechtácková, Veronika Soková, Jan Strejcek: Symbiotic 8: Beyond Symbolic Execution – (Competition Contribution). TACAS, LNCS 12652, pp. 453-457, Springer, 2021.

- Ultimate Automizer

Matthias Heizmann, Jürgen Christ, Daniel Dietsch, Evren Ermis, Jochen Hoenicke, Markus Lindenmann, Alexander Nutz, Christian Schilling, Andreas Podelski: Ultimate Automizer with SMTInterpol – (Competition Contribution). TACAS, LNCS 7795, pp. 641-643, Springer, 2013.

Matthias Heizmann, Jochen Hoenicke, Andreas Podelski: Refinement of Trace Abstraction. SAS, LNCS 5673, pp. 69-85, Springer, 2009.

Matthias Heizmann, Jochen Hoenicke, Andreas Podelski: Nested Interpolants. POPL, pp. 471-482, ACM, 2010.

#### Verifiers for Boogie Programs

- Boogie Verifier

Michael Barnett, Bor-Yuh Evan Chang, Robert DeLine, Bart Jacobs, K. Rustan M. Leino: Boogie: A Modular Reusable Verifier for Object-Oriented Programs. FMCO, LNCS 4111, pp. 364-387, Springer, 2005.

- Corral

Akash Lal, Shaz Qadeer, Shuvendu K. Lahiri: A Solver for Reachability Modulo Theories. CAV, LNCS 7358, pp. 427-443, Springer, 2012.

Akash Lal, Shaz Qadeer: Powering the Static Driver Verifier Using Corral. FSE, pp. 202-212, ACM, 2014.

Akash Lal, Shaz Qadeer: DAG Inlining: A Decision Procedure for Reachability-Modulo-Theories in Hierarchical Programs. PLDI, pp. 280-290, ACM, 2015.

This seminar studies topics on automatic test-case generation. The goal of test-case generation is to find test inputs that reveal bugs in the program or to find a set of test inputs that achieves a certain coverage value. In this seminar, we look into four different types of automatic test-case generation: (1) random input generation, (2) search-based input generation, (3) input generation with symbolic execution, and (4) input generation with reachability analyses.
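Type (1) can be sketched in a few lines of Python, loosely in the spirit of QuickCheck-style random property testing; `buggy_clamp` and the property are invented for illustration and are not from any of the tools below:

```python
# Random input generation: draw random inputs for a property and report
# the first counterexample found. The function under test is a toy example
# with a deliberately planted bug.
import random

def buggy_clamp(x):
    """Intended to clamp x into [0, 100]; the bug forgets the lower
    bound for very negative inputs."""
    if x < -50:
        return x              # bug: should return 0
    return min(max(x, 0), 100)

def check_property(prop, gen, runs=1000, seed=42):
    """Draw `runs` random inputs; return the first counterexample, if any."""
    rng = random.Random(seed)
    for _ in range(runs):
        x = gen(rng)
        if not prop(x):
            return x          # a bug-revealing test input
    return None               # no violation found in `runs` trials

counterexample = check_property(
    prop=lambda x: 0 <= buggy_clamp(x) <= 100,
    gen=lambda rng: rng.randint(-100, 100))
print(counterexample is not None)  # True: some input below -50 was drawn
```

Randoop's feedback-directed generation and EvoSuite's search-based generation refine this blind sampling by steering input construction toward interesting behavior.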

#### Random Test Generation

**QuickCheck**

Koen Claessen, John Hughes: QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs. ICFP, pp. 268-279, ACM, 2000.

**Randoop**

Carlos Pacheco, Shuvendu K. Lahiri, Michael D. Ernst, Thomas Ball: Feedback-Directed Random Test Generation. ICSE, pp. 75-84, IEEE, 2007.

#### Search-based Test Generation

**Korat**

Chandrasekhar Boyapati, Sarfraz Khurshid, Darko Marinov: Korat: Automated Testing Based on Java Predicates. ISSTA, pp. 123-133, ACM, 2002.

**EvoSuite**

Gordon Fraser, Andrea Arcuri: Evolutionary Generation of Whole Test Suites. QSIC, pp. 31-40, IEEE, 2011.

#### Test Generation with Symbolic Execution

**KLEE**

Cristian Cadar, Daniel Dunbar, Dawson R. Engler: KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs. OSDI, pp. 209-224, USENIX, 2008.

**PathCrawler**

Nicky Williams, Bruno Marre, Patricia Mouy, Muriel Roger: PathCrawler: Automatic Generation of Path Tests by Combining Static and Dynamic Analysis. EDCC, pp. 281-292, Springer, 2005.

**Pex**

Nikolai Tillmann, Jonathan de Halleux: Pex-White Box Test Generation for .NET. TAP, pp. 134-153, Springer, 2008.

**CUTE**

Koushik Sen, Darko Marinov, Gul Agha: CUTE: A Concolic Unit Testing Engine for C. FSE, pp. 263-272, ACM, 2005.

**SAGE**

Patrice Godefroid, Michael Y. Levin, David A. Molnar: Automated Whitebox Fuzz Testing. NDSS, 2008.

#### Test Generation with Reachability Analyses

**Blast**

Dirk Beyer, Adam Chlipala, Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar: Generating Tests from Counterexamples. ICSE, pp. 326-335, IEEE, 2004.

**FShell**

Andreas Holzer, Christian Schallhart, Michael Tautschnig, Helmut Veith: FShell: Systematic Test Case Generation for Dynamic Analysis and Measurement. CAV, pp. 209-213, Springer, 2008.

Andreas Holzer, Christian Schallhart, Michael Tautschnig, Helmut Veith: Query-Driven Program Testing. VMCAI, pp. 151-166, Springer, 2009.

**CoVeriTest**

Dirk Beyer, Marie-Christine Jakobs: CoVeriTest: Cooperative Verifier-Based Testing. FASE, pp. 389-408, Springer, 2019.

This seminar studies topics from our research area, the validation of verification results. The goal of validation is to increase trust in the validity of a verification result. Validation allows one to check that the verification was actually performed and that the verification tool worked properly. In this seminar, we look into (1) protocols for validation and (2) validation techniques for several analyses such as dataflow analysis, abstract interpretation, and model checking.
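The common idea behind these approaches can be illustrated with a small, self-contained Python sketch: the verifier emits a certificate (here, a claimed inductive invariant of a toy explicit-state transition system), and an independent, much simpler checker validates it. The system and invariant are invented for illustration and do not correspond to any specific protocol below:

```python
# Validating a verification result via a certificate: instead of trusting
# the verifier, a small checker re-checks three proof obligations for a
# claimed inductive invariant over an explicitly enumerated state space.

def validate(init, step, safe, invariant, universe):
    """Return True iff `invariant` certifies safety of the system."""
    inv = {s for s in universe if invariant(s)}
    # 1) every initial state is covered by the invariant
    if not all(s in inv for s in init):
        return False
    # 2) the invariant is closed under the transition relation (inductive)
    if not all(t in inv for s in inv for t in step(s)):
        return False
    # 3) every covered state satisfies the safety property
    return all(safe(s) for s in inv)

# toy system: a counter starts at 0, may add 2, and must never equal 7
universe = range(0, 20)
init = {0}
step = lambda s: {s + 2} if s + 2 < 20 else set()
safe = lambda s: s != 7

print(validate(init, step, safe,
               invariant=lambda s: s % 2 == 0,
               universe=universe))  # True: even states never reach 7
```

The checker is far simpler than any verifier that could have discovered the invariant, which is exactly why validation increases trust.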

#### Validation Protocols

**Proof-Carrying Code**

George C. Necula: Enforcing Security and Safety with Proof-Carrying Code, MFPS 1999.

George C. Necula, Peter Lee: Safe, Untrusted Agents Using Proof-Carrying Code, Mobile Agents and Security 1998.

**Programs from Proofs**

Daniel Wonisch, Alexander Schremmer, Heike Wehrheim: Programs from Proofs – A PCC Alternative, CAV 2013.

**Certifying Algorithms**

Ross M. McConnell, Kurt Mehlhorn, Stefan Näher, Pascal Schweitzer: Certifying algorithms, Comput. Sci. Rev. 5(2), 2011.

**Amanat Protocol**

Sagar Chaki, Christian Schallhart, Helmut Veith: Verification Across Intellectual Property Boundaries, CAV 2007.

**Violation and Correctness Witnesses**

Dirk Beyer, Matthias Dangl, Daniel Dietsch, Matthias Heizmann, Andreas Stahlbauer: Witness validation and stepwise testification across software verifiers, FSE 2015.

Dirk Beyer, Matthias Dangl, Daniel Dietsch, Matthias Heizmann: Correctness witnesses: exchanging verification results between verifiers, FSE 2016.

#### Validating Model Checking Results

**Temporal Safety Proofs**

Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar, George C. Necula, Grégoire Sutre, Westley Weimer: Temporal-Safety Proofs for Systems Code, CAV 2002.

**Extreme Model Checking**

Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar, Marco A. A. Sanvido: Extreme Model Checking, Verification: Theory and Practice 2003.

#### Validating Dataflow Analysis and Abstract Interpretation Results

**Lightweight Bytecode Verification**

Eva Rose: Lightweight Bytecode Verification, J. Autom. Reason. 31(3-4), 2003.

**Validating Dataflow Annotations**

Wolfram Amme, Marc-André Möller, Philipp Adler: Data Flow Analysis as a General Concept for the Transport of Verifiable Program Annotations, Electron. Notes Theor. Comput. Sci. 176(3), 2007.

**Hoare Proofs from Abstract Interpretation**

Sunae Seo, Hongseok Yang, Kwangkeun Yi: Automatic Construction of Hoare Proofs from Abstract Interpretation Results, APLAS 2003.

**Abstraction-Carrying Code**

Elvira Albert, Germán Puebla, Manuel V. Hermenegildo: Abstraction-Carrying Code, LPAR 2004.

**Configurable Program Certification**

Marie-Christine Jakobs, Heike Wehrheim: Certification for configurable program analysis, SPIN 2014.

Marie-Christine Jakobs: Speed Up Configurable Certificate Validation by Certificate Reduction and Partitioning, SEFM 2015.

This seminar studies topics from our research area, the combination of verification approaches; in particular, we look into cooperative verification. Cooperative verification approaches aim at leveraging the strengths of different verification approaches. We consider three types of cooperative verification approaches: (1) testification of alarms, (2) distribution of verification effort, and (3) algorithmic integration of different verification approaches.

The goal of alarm testification is to reduce false alarms and to focus on alarms that can be reproduced by a real test.

To distribute the verification effort among different verification approaches, different parts of the verification (e.g., program paths) need to be inspected by distinct verification tools. Often, sequential combinations are used in which the follow-up tool only inspects the remaining proof obligations.
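Such a sequential combination can be sketched as follows; the proof obligations and the two "analyses" are invented placeholders for a cheap and an expensive verifier:

```python
# Distributing verification effort sequentially: a cheap analysis
# discharges what it can, and an expensive follow-up analysis only
# inspects the remaining proof obligations.

obligations = ["x >= 0", "y != 0", "len(buf) <= 64", "ptr != NULL"]

def cheap_analysis(obligation):
    # toy: only handles the numeric comparisons it 'understands'
    return obligation in {"x >= 0", "len(buf) <= 64"}

def expensive_analysis(obligation):
    # stand-in for a precise but slow verifier that proves everything here
    return True

remaining = [o for o in obligations if not cheap_analysis(o)]
proved = all(expensive_analysis(o) for o in remaining)
print(len(remaining), proved)  # 2 True: the follow-up saw only half the work
```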

The idea of algorithmic integration is to develop new verification algorithms that use existing verification approaches as components.

#### Testification of Alarms

**Check’n’Crash**

Christoph Csallner, Yannis Smaragdakis: Check 'n' crash: combining static checking and testing. ICSE 2005.

**DSD-Crasher**

Christoph Csallner, Yannis Smaragdakis: DSD-Crasher: a hybrid analysis tool for bug finding. ISSTA 2006.

**SANTE: Static Analysis and Testing**

Omar Chebaro, Nikolai Kosmatov, Alain Giorgetti, Jacques Julliand: Program slicing enhances a verification technique combining static and dynamic analysis. SAC 2012.

**Tests from Witnesses**

Dirk Beyer, Matthias Dangl, Thomas Lemberger, Michael Tautschnig: Tests from Witnesses – Execution-Based Validation of Verification Results. TAP 2018.

#### Distribution of Verification Effort

**Failure-Directed Program Trimming**

Kostas Ferles, Valentin Wüstholz, Maria Christakis, Isil Dillig: Failure-directed program trimming. ESEC/SIGSOFT FSE 2017.

**Collaboration via Verification Annotations**

Maria Christakis, Peter Müller, Valentin Wüstholz: Guiding dynamic symbolic execution toward unverified program executions. ICSE 2016.

**CoDiDroid: Cooperative Android Analysis**

Felix Pauck, Heike Wehrheim: Together strong: cooperative Android app analysis. ESEC/SIGSOFT FSE 2019.

#### Algorithmic Integration of Different Verification Approaches

**Combining k-Induction with Invariant Generation**

Dirk Beyer, Matthias Dangl, Philipp Wendler: Boosting k-Induction with Continuously-Refined Invariants. CAV 2015.

**Synergy: Using Test Generation for Counterexample Check and Identification of Refinement State**

Bhargav S. Gulavani, Thomas A. Henzinger, Yamini Kannan, Aditya V. Nori, Sriram K. Rajamani: SYNERGY: a new algorithm for property checking. FSE 2006.

**Dash: Using Test Generation for Counterexample Check and Refinement**

Ashutosh Gupta, Rupak Majumdar, Andrey Rybalchenko: From Tests to Proofs. TACAS 2009.

**Ufo: Integrating Under-Approximations and Over-Approximations**

Aws Albarghouthi, Arie Gurfinkel, Marsha Chechik: From Under-Approximations to Over-Approximations and Back. TACAS 2012.

**kIkI: Combining k-Invariants with k-Induction**

Martin Brain, Saurabh Joshi, Daniel Kroening, Peter Schrammel: Safety Verification and Refutation by k-Invariants and k-Induction. SAS 2015.

The seminar was cancelled (reason: no participants). This semester's topics will therefore be covered next semester.

This seminar studies topics from our research area, incremental verification. Techniques in incremental verification are a key enabler for applying formal verification in modern software projects. To deal with dozens of rapid changes, incremental verification techniques aim at a more efficient reverification of a modified program. For this purpose, incremental verification reuses information from a previous verification run. The kind of information and its use distinguish the different approaches. In this seminar, we look at four different classes of incremental verification: functional equivalence checking, reuse of intermediate results, proof adaption, and skipped verification.

In functional equivalence checking, one verifies that modified program parts show the same input/output behavior as the corresponding parts before the modification.
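On a toy scale, the check can be illustrated in Python by exhaustively comparing the two versions on a set of inputs (real regression verification does this symbolically for all inputs; the functions here are invented examples):

```python
# Functional equivalence checking: compare the input/output behavior of a
# function before and after a modification, reporting a disagreeing input
# if one exists.

def sum_to_old(n):
    """Original version: straightforward summation."""
    return sum(range(n + 1))

def sum_to_new(n):
    """Refactored version under check: closed-form formula."""
    return n * (n + 1) // 2

def equivalent(f, g, inputs):
    """Return the first input where the versions disagree, else None."""
    for x in inputs:
        if f(x) != g(x):
            return x
    return None

print(equivalent(sum_to_old, sum_to_new, range(100)))  # None: behavior preserved
```

If the two versions are equivalent, the old verification result carries over to the new version and no reverification is needed.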

Techniques reusing intermediate results avoid recomputing known results, e.g., invariants, satisfiability results, or abstraction levels. Their assumption is that a large portion of the intermediate results remains valid and is needed for the verification of the modified program.
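A small Python sketch of the reuse idea, loosely in the spirit of Green-style constraint caching; the interval "solver" and the constraints are invented for illustration:

```python
# Reusing intermediate results: satisfiability results are cached under a
# canonical key, so a reverification after a change only pays for the
# constraints it has not seen before.

cache = {}
solver_calls = 0

def is_satisfiable(constraint):
    """Tiny 'solver' over conjunctions of interval constraints (lo, hi)."""
    global solver_calls
    key = tuple(sorted(constraint))      # canonical form as cache key
    if key in cache:
        return cache[key]                # reuse: no solver call needed
    solver_calls += 1
    lo = max(l for (l, _) in constraint)
    hi = min(h for (_, h) in constraint)
    cache[key] = lo <= hi
    return cache[key]

run1 = [is_satisfiable([(0, 10), (5, 20)]),   # sat
        is_satisfiable([(0, 1), (3, 4)])]     # unsat
# after a small program change, most constraints reappear unchanged
run2 = [is_satisfiable([(0, 10), (5, 20)]),   # cache hit, reused
        is_satisfiable([(2, 6), (4, 9)])]     # only the new one is solved
print(solver_calls)  # 3, not 4: one result was reused across runs
```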

Approaches for proof adaption check which proof parts remain valid and perform a verification step for invalid proof parts.

The fourth class, skipped verification, is related to proof adaption. Techniques in this class stop verification as soon as a modification can be reached, or perform only a lightweight verification as long as no modification has been reached.
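The reachability-of-a-modification idea can be sketched over a toy call graph; all function names are invented:

```python
# Skipped verification: verify a function only if a modification is
# reachable from it in the call graph; unchanged regions that cannot
# reach any change are skipped.

call_graph = {
    "main":     ["parse", "compute"],
    "parse":    ["validate"],
    "compute":  [],
    "validate": [],
}
modified = {"validate"}       # the only function the change touched

def must_reverify(fn, graph, modified, seen=None):
    """A function needs reverification iff it can reach a modification."""
    seen = set() if seen is None else seen
    if fn in modified:
        return True
    seen.add(fn)
    return any(must_reverify(callee, graph, modified, seen)
               for callee in graph[fn] if callee not in seen)

to_check = [f for f in call_graph if must_reverify(f, call_graph, modified)]
print(sorted(to_check))  # ['main', 'parse', 'validate']: 'compute' is skipped
```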

#### Functional Equivalence Checking

**Regression Verification**

Benny Godlin, Ofer Strichman: Regression verification. DAC 2009.

**Relational Regression Verification**

Dennis Felsing, Sarah Grebing, Vladimir Klebanov, Philipp Rümmer, Mattias Ulbrich: Automating regression verification. ASE 2014.

**UC-KLEE**

David A. Ramos, Dawson R. Engler: Practical, Low-Effort Equivalence Verification of Real Code. CAV 2011.

#### Reusing Intermediate Results

**Constraints Reuse**

Willem Visser, Jaco Geldenhuys, Matthew B. Dwyer: Green: reducing, reusing and recycling constraints in program analysis. FSE 2012.

**Precision Reuse**

Dirk Beyer, Stefan Löwe, Evgeny Novikov, Andreas Stahlbauer, Philipp Wendler: Precision reuse for efficient regression verification. ESEC/SIGSOFT FSE 2013.

**Reusing Trace Abstractions**

Bat-Chen Rothenberg, Daniel Dietsch, Matthias Heizmann: Incremental Verification Using Trace Abstraction. SAS 2018.

#### Proof Adaption

**Incremental Upgrade Checking**

Ondrej Sery, Grigory Fedyukovich, Natasha Sharygina: Incremental upgrade checking by means of interpolation-based function summaries. FMCAD 2012.

**Extreme Model Checking**

Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar, Marco A. A. Sanvido: Extreme Model Checking. Verification: Theory and Practice 2003.

#### Skipped Verification

**Directed Incremental Verification**

Suzette Person, Guowei Yang, Neha Rungta, Sarfraz Khurshid: Directed incremental symbolic execution. PLDI 2011.

**Memoized Symbolic Execution**

Guowei Yang, Corina S. Pasareanu, Sarfraz Khurshid: Memoized symbolic execution. ISSTA 2012.

**Regression Model Checking**

Guowei Yang, Matthew B. Dwyer, Gregg Rothermel: Regression model checking. ICSM 2009.

**iCoq**

Ahmet Çelik, Karl Palmskog, Milos Gligoric: iCoq: regression proof selection for large-scale verification projects. ASE 2017.

**iAlloy**

Wenxi Wang, Kaiyuan Wang, Milos Gligoric, Sarfraz Khurshid: Incremental Analysis of Evolving Alloy Models. TACAS 2019.