
Dr. Thomas Lemberger


Software and Computational Systems Lab
Department of Computer Science
Ludwig-Maximilians-Universität München (LMU Munich)
Oettingenstraße 67
80538 Munich (Germany)

Office
Room EU 108, Oettingenstr. 67
E-Mail
thomas.lemberger@sosy.ifi.lmu.de
ORCID
0000-0003-0291-815X

GPG-Key

Please send me encrypted mails!
My GPG key: 0x033DE66F
Fingerprint: BBC4 36E1 F2BA BA4E 8E81 872E 9787 7E1F 033D E66F

Thesis Mentoring

Available topics
Integration of Verification-Task Generation into the C Build Process

This project is available as a BSc thesis and as an MSc research training course with 6 ECTS. Mentoring is available in English and German.

Goal: Integrate the recording of compile-time parameters into HarnessForge.

Background: The automated software verification of real-world software poses multiple challenges: First, a verification engineer has to identify the properties to check in the program and write a verification harness that encodes the necessary pre- and post-conditions for the code under analysis. Then, code that the verifier cannot handle (for example external functions or assembly code) must be modelled. Last, the original source code, the verification harness, and the model must be assembled into a verification task that the verifier can process. Even this last part can be difficult: most verifiers support only single input files, and the build of a C project can be a complicated process. Our tool HarnessForge tries to solve this: given the definition of a verification harness, model overrides, and the source directories of the project, it automatically computes source-code dependencies, assembles a single-file verification task, and slices away all unnecessary code to keep the task concise. However, HarnessForge currently still requires the developer to provide all necessary build parameters manually. Your task is to automate this process: with your extension, HarnessForge executes the build, records all necessary build parameters, and uses them directly for its further work. The user then only has to provide a build command instead of all necessary build parameters.

Details: This project consists of the following steps:

  1. Create a detailed comparison of existing tools that record build parameters. The result is a well-grounded decision for one tool.
  2. Implement the integration of the selected tool into HarnessForge.
  3. Evaluate your integration with a set of real-world case studies.

Implementation happens in Python, but basic knowledge of C and of C build systems like cmake or make is helpful.
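One well-known family of existing tools records compiler invocations into a compilation database (compile_commands.json), as produced for example by Bear or by CMake's compile-command export. Purely as an illustration of what "recorded build parameters" look like (this is not part of HarnessForge, and the helper names are hypothetical), a minimal Python sketch that reads such a database and extracts include paths and macro definitions per file:

```python
import json
import shlex


def load_compile_commands(path):
    """Read a compile_commands.json compilation database."""
    with open(path) as f:
        return json.load(f)


def extract_build_parameters(entry):
    """Pull include paths (-I) and macro definitions (-D) from one entry.

    Note: this sketch only handles the fused forms '-Idir' and '-Dname',
    not the separated forms '-I dir' / '-D name'.
    """
    args = entry.get("arguments") or shlex.split(entry["command"])
    includes = [a[2:] for a in args if a.startswith("-I")]
    defines = [a[2:] for a in args if a.startswith("-D")]
    return {"file": entry["file"], "includes": includes, "defines": defines}
```

A full integration would additionally have to handle separated flags, response files, and compiler-specific options, which is exactly where the tool comparison in step 1 comes in.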

Finished topics
A JSON Export for Control-Flow Automata [1]

This topic is available as a BSc thesis and as an MSc research training course with 6/12 ECTS. Mentoring is available in English and German.

Goal: We decouple the language parser of CPAchecker by implementing a JSON-based export and import of its C control-flow automaton.

Background: CPAchecker is a large Java framework for program analysis and formal verification. CPAchecker receives a program-under-analysis as input, analyzes it, and produces some output. Coarsely, CPAchecker consists of two steps: First, it parses the input program into an abstract syntax tree (AST) and generates the control-flow automaton (CFA) from the AST. CPAchecker then uses this CFA as intermediate representation and runs its algorithms on that representation instead of the text-based code. Both CFA creation and the algorithms are implemented within CPAchecker, in a tightly coupled sequence. This means that: (a) Any post-processing on the CFA (for example optimizations) must be implemented within CPAchecker. This makes it more difficult to integrate some existing analyses, like C++-based libraries for static analysis. (b) CPAchecker's CFA creation cannot be reused by other tools, so they cannot benefit from the existing optimizations within CPAchecker.

Details: To solve this, we implement an export and import of the CFA in CPAchecker, using JSON as the format. This enables other tools to read the CFA and work with it, and CPAchecker can import the result.

Knowledge of the foundations of programming languages (e.g., abstract syntax trees) is required. Knowledge in software verification is helpful. The functionality must be written in Java, as a part of CPAchecker.

In the scope of this topic, you have to design the import/export, implement this design, and evaluate its functionality on the example of C programs.
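Designing the actual format is part of the topic, so the following is only an illustrative sketch (not CPAchecker's real schema): a CFA is essentially a set of nodes plus labeled edges, which maps naturally onto JSON.

```python
import json


def cfa_to_json(nodes, edges):
    """Serialize a control-flow automaton to JSON.

    nodes: list of node ids; edges: (source, target, operation) triples,
    where 'operation' is the program operation on that CFA edge.
    """
    return json.dumps({
        "nodes": [{"id": n} for n in nodes],
        "edges": [{"source": s, "target": t, "operation": op}
                  for (s, t, op) in edges],
    })


def cfa_from_json(text):
    """Inverse: reconstruct the node list and edge triples."""
    data = json.loads(text)
    nodes = [n["id"] for n in data["nodes"]]
    edges = [(e["source"], e["target"], e["operation"])
             for e in data["edges"]]
    return nodes, edges
```

A real export would also have to carry type information, function boundaries, and source-location metadata so that other tools can interpret the edge operations.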

Verification of Micro Services based on OpenAPI Specifications [1]

This topic is available as a BSc/MSc thesis or as a research training course with 6/12 ECTS. Mentoring is available in English and German.

Goal: We build a tool that reads the API specification of a micro-service, extracts some program properties from it, and uses a Java verifier (formal verifier or fuzz tester) to check these properties.

Background: We want to verify Java micro-services for correct behavior. The interfaces of most Java micro-services are RESTful HTTP APIs. These are often defined using the OpenAPI specification language in JSON or YAML format. OpenAPI allows developers to define constraints on the allowed HTTP requests and responses. For example: 'age': { 'type': 'integer', 'format': 'int32', 'minimum': 0 }. These constraints can be used for finding bugs in micro-services and for verifying that they always work as expected. The available constraints are defined in the JSON Schema Validation specification.

Details: You build a tool that receives an OpenAPI specification and a Java micro-service. The tool reads the OpenAPI specification, extracts the constraints on response parameters (for example age.minimum: 0), turns them into a specification that the Java verifier can understand (for example response.age >= 0), and then runs the verifier to verify that the micro-service implementation always fulfills this specification (or to report test inputs that trigger a bug). The constraints on request parameters can be used to restrict the input values to the micro-service and make the verification more precise. Depending on the verifier that is used, the specification may be given to the verifier directly or encoded in the Java code (e.g., to be able to use a fuzzer for verification).

Knowledge in software verification is helpful. An understanding of Java micro-services is necessary. The command-line tool can be written in one of the following languages: Java, Kotlin.

In the scope of this topic, the tool must be designed, implemented, and evaluated.
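The constraint-extraction step described above can be sketched as follows. This is a hypothetical helper operating on an already-parsed schema fragment; the actual tool must handle full OpenAPI documents, references, and the complete set of JSON Schema Validation keywords.

```python
def extract_response_constraints(schema, prefix="response"):
    """Walk a JSON-Schema-style fragment and emit assertion-like
    constraint strings such as 'response.age >= 0'."""
    constraints = []
    for name, prop in schema.get("properties", {}).items():
        path = f"{prefix}.{name}"
        if "minimum" in prop:
            constraints.append(f"{path} >= {prop['minimum']}")
        if "maximum" in prop:
            constraints.append(f"{path} <= {prop['maximum']}")
        if prop.get("type") == "object":
            # Recurse into nested objects, extending the access path.
            constraints.extend(extract_response_constraints(prop, path))
    return constraints
```

The resulting strings would then be translated into whatever input language the chosen Java verifier or fuzzer expects.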

Verification of Micro Services based on API Specifications [1]

This topic is available as a BSc/MSc thesis or as a research training course with 6/12 ECTS. Mentoring is available in English and German.

Goal: We build a tool that reads the API specification of a micro-service, extracts some program properties from it, and uses a Java verifier (formal verifier or fuzz tester) to check these properties.

Background: We want to verify Java micro-services for correct behavior. The interfaces of most Java micro-services are RESTful HTTP APIs. These are often defined using the OpenAPI specification language in JSON or YAML format. OpenAPI allows developers to define constraints on the allowed HTTP requests and responses. For example: 'age': { 'type': 'integer', 'format': 'int32', 'minimum': 0 }. These constraints can be used for finding bugs in micro-services and for verifying that they always work as expected. The available constraints are defined in the JSON Schema Validation specification.

Details: You build a tool that receives an OpenAPI specification and a Java micro-service. The tool reads the OpenAPI specification, extracts the constraints on response parameters (for example age.minimum: 0), turns them into a specification that the Java verifier can understand (for example response.age >= 0), and then runs the verifier to verify that the micro-service implementation always fulfills this specification (or to report test inputs that trigger a bug). The constraints on request parameters can be used to restrict the input values to the micro-service and make the verification more precise. Depending on the verifier that is used, the specification may be given to the verifier directly or encoded in the Java code (e.g., to be able to use a fuzzer for verification).

Knowledge in software verification is helpful. An understanding of Java micro-services is necessary. The command-line tool can be written in one of the following languages: Java, Kotlin.

In the scope of this topic, the tool must be designed, implemented, and evaluated.

A Library for Unit Verification [1]

This topic is available as a BSc/MSc thesis or as a research training course with 6/12 ECTS. Mentoring is available in English and German.

Goal: We want to build a command-line tool that builds a verification harness. The verification harness allows developers to use software verification like unit testing.

Background: Similar to unit testing, we want to use verification to find bugs in individual software methods (or prove that there are no bugs). A software verifier is a tool that checks some program code against a verification property ("the program never triggers any assertion failure"). But there are two issues: (1) For unit verification, there is no library like JUnit that provides methods for managing tests. (2) Users are unfamiliar with software verification and do not know how to write a verification harness. The second issue is tackled by the topic "Verification-Harness Synthesis for Unit Verification". The first issue is tackled by this topic.

Details: You build a C library that helps software developers write and execute verification tasks, similar to unit testing with JUnit. The library provides methods to define the range of input values (examples: int temperature = anyInt();, restrict(temperature > -273);) and to assert postconditions (example: assert(output == 0)). Ideally, the library provides utility methods that make it easier to define conditions on complex data structures like structs and/or pointer constructs. The library may also come with a small tool that runs all defined verification methods with a verifier and reports the individual outputs (like a test runner).

We focus on C code. A large selection of verifiers can be used off-the-shelf.

Knowledge in software verification and a basic understanding of C are necessary. The library must be written in C.

In the scope of this topic, the library is designed, implemented, and evaluated.

A Gradle Build Tool to Integrate Verifiers into the Project's Build Pipeline [1]

This topic is available as a BSc thesis or as a large research training course with 12 ECTS. Mentoring is available in English and German.

Goal: We want to build a tool that allows developers to hook software verifiers into their project's build pipeline.

Background: We want to find bugs in software (or prove that there are no bugs) through verification. A software verifier is a tool that checks some program code against a verification property ("the program never triggers any assertion failure"). But most software verifiers only understand already-preprocessed code and/or only single code files. This means that users first have to preprocess their source code manually and point the verifier to each file to analyze. This can be a cumbersome and difficult process.

Details: You build a gradle plugin that enables everyday developers to integrate software verifiers into their project. In the end, this will be similar to linters and other build plugins that you know: The developer adds your plugin to their build configuration, and the plugin then automatically runs the software verifier on the software files on every project build. Your plugin hooks into the build process, retrieves all relevant files, and passes them to the verifier. It collects the verifier's results and outputs them in a nice format to the user. We focus on Java code and projects that are built with gradle.

Experience with 'gradle' and a good understanding of Java are recommended. Your build tool can be written in Java or Kotlin.

In the scope of this topic, you design, implement, and evaluate your plugin.

A Maven Build Tool to Integrate Verifiers into the Project's Build Pipeline [1]

This topic is available as a BSc thesis or as a large research training course with 12 ECTS. Mentoring is available in English and German.

Goal: We want to build a tool that allows developers to hook software verifiers into their project's build pipeline.

Background: We want to find bugs in software (or prove that there are no bugs) through verification. A software verifier is a tool that checks some program code against a verification property ("the program never triggers any assertion failure"). But most software verifiers only understand already-preprocessed code and/or only single code files. This means that users first have to preprocess their source code manually and point the verifier to each file to analyze. This can be a cumbersome and difficult process.

Details: You build a maven plugin that enables everyday developers to integrate software verifiers into their project. In the end, this will be similar to linters and other build plugins that you know: The developer adds your plugin to their build configuration, and the plugin then automatically runs the software verifier on the software files on every project build. Your plugin hooks into the build process, retrieves all relevant files, and passes them to the verifier. It collects the verifier's results and outputs them in a nice format to the user. We focus on Java code and projects that are built with maven.

Experience with 'maven' and a good understanding of Java are recommended. Your build tool can be written in Java or Kotlin.

In the scope of this topic, you design, implement, and evaluate your plugin.

Program Transformation in CPAchecker: Design and Implementation of a Source-Respecting Translation from Control-Flow Automata to C Code [1]
A New Spin on Verification with Symbolic Execution: Symbolic Execution as Formula-Based Predicate Analysis in CPAchecker [1]
Mining a Benchmark Set for Partial Fixes in C programs [1]
Concurrent Software Verification through Block-based Task Partitioning and Continuous Summary Refinement [1]
Adjustable Block Analysis: Actor-Based Creation of Block Summaries for Scaling Formal Verification [1]
Mutation based Automatic Program Repair in CPAchecker [1]

Debugging software usually consists of three steps: 1. finding an error in the program, 2. finding the cause of that error (the fault), and 3. fixing that fault. By default, fixing the fault is done manually by a programmer, which often consumes vast amounts of resources and may introduce new bugs. To ease that task, this topic is concerned with automatic program repair. After finding a potential fault in a program, automatic program repair proposes fixes and often checks the validity of these fixes. The goal is to implement and evaluate a technique for automatic program repair of C programs in the state-of-the-art verification framework CPAchecker. The complexity of the technique and the scope of the topic depend on the thesis type (project, bachelor, or master). Programming is done in Java.

Conversion of counterexamples found by software verifiers into an executable test harness that exposes the bug in the program (prototype exists) [1]
Application of Software Verification to OpenBSD Network Modules [1]
Hybrid Testcase Generation with CPAchecker [1]
A Web Frontend For Visualization of Computation Steps and their Results in CPAchecker [1]
Fault Localization in Model Checking. Implementation and Evaluation of Fault-Localization Techniques with Distance Metrics [1]

There are many different ways to find out whether a program is buggy, for example testing or formal verification. Once we know that a program is buggy, i.e., that it shows some faulty behavior, a developer has to manually debug the program to find the cause of the problem and fix it - this is a difficult and time-consuming process. The aim of this thesis is to help programmers with debugging through the use of distance metrics (cf. Explaining Abstract Counterexamples, 2004).

Fault Localization for Formal Verification. An Implementation and Evaluation of Algorithms based on Error Invariants and UNSAT-cores [1]

There are many different ways to find out whether a program is buggy, for example testing or formal verification. Once we know that a program is buggy, i.e., that it shows some faulty behavior, a developer has to manually debug the program to find the cause of the problem and fix it - this is a difficult and time-consuming process. The aim of this thesis is to help programmers with debugging by implementing error invariants and fault localization with UNSAT cores, two techniques based on Boolean representations of the faulty program, to (1) automatically find potential causes for faulty behavior and (2) present the found causes.

Test-based Fault Localization in the Context of Formal Verification: Implementation and Evaluation of the Tarantula Algorithm in CPAchecker [1]

There are many different ways to find out whether a program is buggy, for example testing or formal verification. Once we know that a program is buggy, i.e., that it shows some faulty behavior, a developer has to manually debug the program to find the cause of the problem and fix it - this is a difficult and time-consuming process. The aim of this thesis is to help programmers with debugging by implementing Tarantula, a fault-localization technique based on test coverage, to (1) automatically find potential causes for faulty behavior and (2) present the found causes.
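For reference, Tarantula assigns each statement a suspiciousness score from test coverage: the fraction of failing tests that execute the statement, normalized against the corresponding fraction of passing tests. A minimal Python sketch of the score for a single statement:

```python
def tarantula_suspiciousness(failed_cov, passed_cov, total_failed, total_passed):
    """Tarantula suspiciousness of one statement.

    failed_cov / passed_cov: number of failing / passing tests that
    execute the statement; total_failed / total_passed: test-suite totals.
    Score = (failed_cov/total_failed) /
            (failed_cov/total_failed + passed_cov/total_passed).
    """
    fail_ratio = failed_cov / total_failed if total_failed else 0.0
    pass_ratio = passed_cov / total_passed if total_passed else 0.0
    if fail_ratio + pass_ratio == 0:
        return 0.0  # statement executed by no test: no evidence
    return fail_ratio / (fail_ratio + pass_ratio)
```

A statement executed by all failing tests and no passing test scores 1.0 (most suspicious); statements executed mostly by passing tests score close to 0.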

Converting Test Goals to Condition Automata [1]

Conditional Model Checking and Conditional Testing are two techniques for the cooperation of verification tools. Conditional Testing describes work already done through a set of covered test goals, while Conditional Model Checking describes work already done through condition automata. Because of this discrepancy, the two techniques cannot be combined directly. To bridge this gap, this thesis transforms a set of test goals into a condition automaton, allowing easy cooperation between Conditional Testing and Conditional Model Checking.

A Language Server and IDE Plugin for CPAchecker [1]

At the moment, there are two ways to use CPAchecker: locally through the command line, and in the cloud through a web interface. Both options require C developers to leave their IDE every time they want to run CPAchecker, which disrupts their workflow. The goal of this project is to build an IDE plugin that provides a seamless experience of using CPAchecker (e.g., for Eclipse CDT, Visual Studio Code, or mbeddr). We would like the plugin to focus on ease of use and immediate feedback that is easy to comprehend. The plugin should:

  1. allow developers to run CPAchecker on their current project,
  2. provide some options for customization (e.g., whether to run CPAchecker locally or in the cloud), and
  3. provide useful feedback (e.g., if CPAchecker found a possibly failing assert statement).

A code-complexity analysis on the component level on the example of CPAchecker [1]

If you're a student interested in writing your thesis at our chair, you should also have a look at our full list of currently available theses.

Publications, Theses, and Projects