
Publications by year

2024

  1. Dirk Beyer and Thomas Lemberger. Six Years Later: Testing vs. Model Checking. International Journal on Software Tools for Technology Transfer (STTT), 2024. Link to this entry Keyword(s): Benchmarking, Competition on Software Verification (SV-COMP), Competition on Software Testing (Test-Comp), Software Model Checking, Software Testing Funding: DFG-COOP PDF
    Artifact(s)
    Abstract
    Six years ago, we performed the first large-scale comparison of automated test generators and software model checkers with respect to bug-finding capabilities on a benchmark set with 5 693 C programs. Since then, the International Competition on Software Testing (Test-Comp) has established standardized formats and community-agreed rules for the experimental comparison of test generators. With this new context, it is time to revisit our initial question: Model checkers or test generators—which tools are more effective in finding bugs in software? To answer this, we perform a comparative analysis on the tools and existing data published by two competitions, the International Competition on Software Verification (SV-COMP) and Test-Comp. The results provide two insights: (1) Almost all test generators that participate in Test-Comp use hybrid approaches that include formal methods, and (2) while the considered model checkers are still highly competitive, they are now outperformed by the bug-finding capabilities of the considered test generators.
    BibTeX Entry
    @article{TestStudy-STTT, author = {Dirk Beyer and Thomas Lemberger}, title = {Six Years Later: Testing vs. Model Checking}, journal = {International Journal on Software Tools for Technology Transfer (STTT)}, volume = {}, number = {}, pages = {}, year = {2024}, doi = {}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2024-STTT.Six_Years_Later_Testing_vs_Model_Checking.pdf}, presentation = {}, abstract = {Six years ago, we performed the first large-scale comparison of automated test generators and software model checkers with respect to bug-finding capabilities on a benchmark set with 5 693 C programs. Since then, the International Competition on Software Testing (Test-Comp) has established standardized formats and community-agreed rules for the experimental comparison of test generators. With this new context, it is time to revisit our initial question: Model checkers or test generators—which tools are more effective in finding bugs in software? To answer this, we perform a comparative analysis on the tools and existing data published by two competitions, the International Competition on Software Verification (SV-COMP) and Test-Comp. The results provide two insights: (1) Almost all test generators that participate in Test-Comp use hybrid approaches that include formal methods, and (2) while the considered model checkers are still highly competitive, they are now outperformed by the bug-finding capabilities of the considered test generators.}, keyword = {Benchmarking, Competition on Software Verification (SV-COMP), Competition on Software Testing (Test-Comp), Software Model Checking, Software Testing}, artifact = {10.5281/zenodo.10232648}, funding = {DFG-COOP}, }
  2. Dirk Beyer, Po-Chun Chien, Marek Jankola, and Nian-Ze Lee. A Transferability Study of Interpolation-Based Hardware Model Checking for Software Verification. Proc. ACM Softw. Eng., 1(FSE), 2024. ACM. doi:10.1145/3660797 Link to this entry Keyword(s): CPAchecker, Software Model Checking Funding: DFG-CONVEY Publisher's Version PDF Presentation Supplement
    Artifact(s)
    Abstract
    Assuring the correctness of computing systems is fundamental to our society and economy, and formal verification is a class of techniques approaching this issue with mathematical rigor. Researchers have invented numerous algorithms to automatically prove whether a computational model, e.g., a software program or a hardware digital circuit, satisfies its specification. In the past two decades, Craig interpolation has been widely used in both hardware and software verification. Despite the similarities in the theoretical foundation between hardware and software verification, previous works usually evaluate interpolation-based algorithms on only one type of verification tasks (e.g., either circuits or programs), so the conclusions of these studies do not necessarily transfer to different types of verification tasks. To investigate the transferability of research conclusions from hardware to software, we adopt two performant approaches of interpolation-based hardware model checking, (1) Interpolation-Sequence-Based Model Checking (Vizel and Grumberg, 2009) and (2) Intertwined Forward-Backward Reachability Analysis Using Interpolants (Vizel, Grumberg, and Shoham, 2013), for software verification. We implement the algorithms proposed by the two publications in the software verifier CPAchecker because it has a software-verification adoption of the first interpolation-based algorithm for hardware model checking from 2003, which the two publications use as a comparison baseline. To assess whether the claims in the two publications transfer to software verification, we conduct an extensive experiment on the largest publicly available suite of safety-verification tasks for the programming language C. Our experimental results show that the important characteristics of the two approaches for hardware model checking are transferable to software verification, and that the cross-disciplinary algorithm adoption is beneficial, as the approaches adopted from hardware model checking were able to tackle tasks unsolvable by existing methods. This work consolidates the knowledge in hardware and software verification and provides open-source implementations to improve the understanding of the compared interpolation-based algorithms.
    BibTeX Entry
    @article{ItpTransfer-PACMSE, author = {Dirk Beyer and Po-Chun Chien and Marek Jankola and Nian-Ze Lee}, title = {A Transferability Study of Interpolation-Based Hardware Model Checking for Software Verification}, journal = {Proc. ACM Softw. Eng.}, volume = {1}, number = {FSE}, year = {2024}, publisher = {ACM}, doi = {10.1145/3660797}, url = {https://www.sosy-lab.org/research/dar-ismc-transferability/}, presentation = {https://www.sosy-lab.org/research/prs/2024-07-18_FSE_A_Transferability_Study_of_Interpolation-Based_HWMC_for_SV_Marek.pdf}, abstract = {Assuring the correctness of computing systems is fundamental to our society and economy, and <em>formal verification</em> is a class of techniques approaching this issue with mathematical rigor. Researchers have invented numerous algorithms to automatically prove whether a computational model, e.g., a software program or a hardware digital circuit, satisfies its specification. In the past two decades, <em>Craig interpolation</em> has been widely used in both hardware and software verification. Despite the similarities in the theoretical foundation between hardware and software verification, previous works usually evaluate interpolation-based algorithms on only one type of verification tasks (e.g., either circuits or programs), so the conclusions of these studies do not necessarily transfer to different types of verification tasks. To investigate the transferability of research conclusions from hardware to software, we adopt two performant approaches of interpolation-based hardware model checking, (1) <em>Interpolation-Sequence-Based Model Checking</em> (<a href="https://doi.org/10.1109/FMCAD.2009.5351148">Vizel and Grumberg, 2009</a>) and (2) <em>Intertwined Forward-Backward Reachability Analysis Using Interpolants</em> (<a href="https://doi.org/10.1007/978-3-642-36742-7_22">Vizel, Grumberg, and Shoham, 2013</a>), for software verification. We implement the algorithms proposed by the two publications in the software verifier CPAchecker because it has a software-verification adoption of the first interpolation-based algorithm for hardware model checking from 2003, which the two publications use as a comparison baseline. To assess whether the claims in the two publications transfer to software verification, we conduct an extensive experiment on the largest publicly available suite of safety-verification tasks for the programming language C. Our experimental results show that the important characteristics of the two approaches for hardware model checking are transferable to software verification, and that the cross-disciplinary algorithm adoption is beneficial, as the approaches adopted from hardware model checking were able to tackle tasks unsolvable by existing methods. This work consolidates the knowledge in hardware and software verification and provides open-source implementations to improve the understanding of the compared interpolation-based algorithms.}, keyword = {CPAchecker,Software Model Checking}, _pdf = {https://www.sosy-lab.org/research/pub/2024-FSE.A_Transferability_Study_of_Interpolation-Based_Hardware_Model_Checking_for_Software_Verification.pdf}, annote = {This article received the "ACM SIGSOFT Distinguished Paper Award" and its reproduction artifact received the "ACM SIGSOFT Best Artifact Award" at <a href="https://2024.esec-fse.org/info/awards">FSE 2024</a>!}, articleno = {90}, artifact = {10.5281/zenodo.11070973}, funding = {DFG-CONVEY}, issue_date = {July 2024}, numpages = {23}, }
    Additional Infos
    This article received the "ACM SIGSOFT Distinguished Paper Award" and its reproduction artifact received the "ACM SIGSOFT Best Artifact Award" at FSE 2024!
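    As background for the abstract above: both adopted algorithms build on sequence interpolation. The following is the standard textbook definition of an interpolation sequence (the article's own notation may differ). For an unsatisfiable conjunction of formulas, e.g., the initial condition and the unrolled transition steps of a BMC query, an interpolation sequence is a sequence of formulas that over-approximates the intermediate states:
        \[
          \text{Given unsatisfiable } \varphi_1 \wedge \varphi_2 \wedge \cdots \wedge \varphi_n,
          \text{ the formulas } I_0, I_1, \ldots, I_n \text{ form an interpolation sequence if}
        \]
        \[
          I_0 = \mathit{true}, \qquad I_n = \mathit{false}, \qquad
          I_{i-1} \wedge \varphi_i \models I_i \quad (1 \le i \le n),
        \]
        \[
          \text{and each } I_i \text{ only uses symbols shared between }
          \varphi_1 \wedge \cdots \wedge \varphi_i \text{ and } \varphi_{i+1} \wedge \cdots \wedge \varphi_n .
        \]
    In interpolation-sequence-based model checking, the I_i obtained from unsatisfiable BMC queries over-approximate the states reachable in i steps and are used to check for a fixed point.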
  3. Dirk Beyer, Matthias Kettl, and Thomas Lemberger. Decomposing Software Verification using Distributed Summary Synthesis. Proc. ACM Softw. Eng., 1(FSE), 2024. ACM. doi:10.1145/3660766 Link to this entry Keyword(s): CPAchecker, Software Model Checking Funding: DFG-CONVEY, DFG-COOP Publisher's Version PDF Supplement
    Artifact(s)
    Abstract
    There are many approaches for automated software verification, but they are either imprecise, do not scale well to large systems, or do not sufficiently leverage parallelization. This hinders the integration of software model checking into the development process (continuous integration). We propose an approach to decompose one large verification task into multiple smaller, connected verification tasks, based on blocks in the program control flow. For each block, summaries are computed — based on independent, distributed, continuous refinement by communication between the blocks. The approach iteratively synthesizes preconditions to assume at the block entry (which program states reach this block) and verification conditions to check at the block exit (which program states lead to a specification violation). This separation of concerns leads to an architecture in which all blocks can be analyzed in parallel, as independent verification problems. Whenever new information (as a precondition or verification condition) is available from other blocks, the verification can decide to restart with this new information. We realize our approach as an extension of the configurable-program-analysis algorithm and implement it for the verification of C programs in the widely used verifier CPAchecker. A large experimental evaluation shows the potential of our new approach: The distribution of the workload to several processing units works well and there is a significant reduction of the response time when using multiple processing units. There are even cases in which the new approach beats the highly-tuned, existing single-threaded predicate abstraction.
    BibTeX Entry
    @article{DSS-PACMSE, author = {Dirk Beyer and Matthias Kettl and Thomas Lemberger}, title = {Decomposing Software Verification using Distributed Summary Synthesis}, journal = {Proc. ACM Softw. Eng.}, volume = {1}, number = {FSE}, year = {2024}, publisher = {ACM}, doi = {10.1145/3660766}, url = {https://www.sosy-lab.org/research/distributed-summary-synthesis/}, presentation = {}, abstract = {There are many approaches for automated software verification, but they are either imprecise, do not scale well to large systems, or do not sufficiently leverage parallelization. This hinders the integration of software model checking into the development process (continuous integration). We propose an approach to decompose one large verification task into multiple smaller, connected verification tasks, based on blocks in the program control flow. For each block, summaries are computed — based on independent, distributed, continuous refinement by communication between the blocks. The approach iteratively synthesizes preconditions to assume at the block entry (which program states reach this block) and verification conditions to check at the block exit (which program states lead to a specification violation). This separation of concerns leads to an architecture in which all blocks can be analyzed in parallel, as independent verification problems. Whenever new information (as a precondition or verification condition) is available from other blocks, the verification can decide to restart with this new information. We realize our approach as an extension of the configurable-program-analysis algorithm and implement it for the verification of C programs in the widely used verifier CPAchecker. A large experimental evaluation shows the potential of our new approach: The distribution of the workload to several processing units works well and there is a significant reduction of the response time when using multiple processing units. There are even cases in which the new approach beats the highly-tuned, existing single-threaded predicate abstraction.}, keyword = {CPAchecker,Software Model Checking}, _pdf = {https://www.sosy-lab.org/research/pub/2024-FSE.Decomposing_Software_Verification_using_Distributed_Summary_Synthesis.pdf}, articleno = {90}, artifact = {10.5281/zenodo.11095864}, funding = {DFG-CONVEY,DFG-COOP}, issue_date = {July 2024}, numpages = {23}, }
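    The block-wise architecture described in the abstract can be pictured as a worklist of per-block analyses that exchange summaries. Below is a much-simplified, forward-only sketch in Python with hypothetical names; it omits parallelism and the backward propagation of verification conditions, and it is not CPAchecker's actual interface.
        # Simplified forward-only sketch of distributed summary synthesis (not CPAchecker code).
        # Each block recomputes its summary whenever a predecessor publishes a new precondition.
        from dataclasses import dataclass, field

        @dataclass
        class Block:
            name: str
            successors: list = field(default_factory=list)  # names of successor blocks
            precondition: str = "true"                       # abstraction of states reaching the block

        def analyze_block(block: Block, precondition: str) -> str:
            """Placeholder for running a program analysis on the block's part of the CFA."""
            # A real analysis would compute a summary (e.g., a strongest postcondition);
            # here we merely record which precondition the block was analyzed under.
            return f"post({block.name}, {precondition})"

        def distributed_summary_synthesis(blocks: dict) -> None:
            worklist = list(blocks)                          # initially, every block is analyzed once
            while worklist:
                block = blocks[worklist.pop()]
                summary = analyze_block(block, block.precondition)
                for succ in block.successors:                # publish the new summary to successors
                    if blocks[succ].precondition != summary:
                        blocks[succ].precondition = summary
                        worklist.append(succ)                # successor must re-analyze

        if __name__ == "__main__":
            blocks = {"entry": Block("entry", ["body"]),
                      "body": Block("body", ["exit"]),
                      "exit": Block("exit")}
            distributed_summary_synthesis(blocks)
            for b in blocks.values():
                print(b.name, "reached under:", b.precondition)
    In the full approach, an analogous backward pass synthesizes verification conditions at the block exits, and all block analyses run in parallel.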
  4. Dirk Beyer, Nian-Ze Lee, and Philipp Wendler. Interpolation and SAT-Based Model Checking Revisited: Adoption to Software Verification. Journal of Automated Reasoning, 2024. Link to this entry Keyword(s): CPAchecker, Software Model Checking PDF Supplement
    BibTeX Entry
    @article{IMC-JAR, author = {Dirk Beyer and Nian-Ze Lee and Philipp Wendler}, title = {Interpolation and SAT-Based Model Checking Revisited: Adoption to Software Verification}, journal = {Journal of Automated Reasoning}, volume = {}, number = {}, pages = {}, year = {2024}, doi = {}, url = {https://www.sosy-lab.org/research/cpa-imc/}, pdf = {https://www.sosy-lab.org/research/pub/2024-JAR.Interpolation_and_SAT-Based_Model_Checking_Revisited_Adoption_to_Software_Verification.pdf}, keyword = {CPAchecker,Software Model Checking}, _sha256 = {}, }
  5. Jan Haltermann, Marie-Christine Jakobs, Cedric Richter, and Heike Wehrheim. Parallel program analysis on path ranges. Science of Computer Programming, 238, 2024. doi:10.1016/j.scico.2024.103154 Link to this entry Keyword(s): Ranged Program Analysis, Cooperative Verification, Software Model Checking, CPAchecker Funding: DFG-COOP Publisher's Version
    Artifact(s)
    BibTeX Entry
    @article{RangedPA-journal, author = {Jan Haltermann and Marie{-}Christine Jakobs and Cedric Richter and Heike Wehrheim}, title = {Parallel program analysis on path ranges}, journal = {Science of Computer Programming}, volume = {238}, year = {2024}, doi = {10.1016/j.scico.2024.103154}, url = {}, pdf = {}, keyword = {Ranged Program Analysis, Cooperative Verification, Software Model Checking, CPAchecker}, artifact = {10.5281/zenodo.8398988}, funding = {DFG-COOP}, }
  6. Marie-Christine Jakobs, Einar Broch Johnsen, Eduard Kamburjan, and Manuel Wimmer. Preface for the special issue on "Fundamental Approaches to Software Engineering" (FASE 2022). Science of Computer Programming, 232:103055, 2024. doi:10.1016/J.SCICO.2023.103055 Link to this entry Publisher's Version
    BibTeX Entry
    @article{FASE2022-SI, author = {Marie{-}Christine Jakobs and Einar Broch Johnsen and Eduard Kamburjan and Manuel Wimmer}, title = {Preface for the special issue on "Fundamental Approaches to Software Engineering" {(FASE} 2022)}, journal = {Science of Computer Programming}, volume = {232}, pages = {103055}, year = {2024}, doi = {10.1016/J.SCICO.2023.103055}, url = {}, pdf = {}, keyword = {}, annote = {}, artifact = {}, funding = {}, }
  7. Marie-Christine Jakobs and Nian-Ze Lee. Summary of the Eighth International Workshop on CPAchecker (CPAchecker 2023). ACM SIGSOFT Software Engineering Notes, 49(2):25-26, 2024. doi:10.1145/3650142.3650150 Link to this entry Keyword(s): CPAchecker Publisher's Version PDF Supplement
    BibTeX Entry
    @article{CPAcheckerWorkshop, author = {Marie{-}Christine Jakobs and Nian{-}Ze Lee}, title = {Summary of the Eighth International Workshop on CPAchecker (CPAchecker 2023)}, journal = {{ACM} {SIGSOFT} Software Engineering Notes}, volume = {49}, number = {2}, pages = {25--26}, year = {2024}, doi = {10.1145/3650142.3650150}, url = {https://cpa.sosy-lab.org/2023/}, pdf = {}, keyword = {CPAchecker}, annote = {}, artifact = {}, funding = {}, }
  8. Thomas Lemberger. Cooperative Approaches Across Test Generation and Formal Software Verification. Softwaretechnik-Trends, 44(2):66-67, July 2024. Gesellschaft für Informatik, Berlin. Link to this entry Keyword(s): Software Model Checking, Cooperative Verification PDF
    BibTeX Entry
    @article{SW-Trends24, author = {Thomas Lemberger}, title = {Cooperative Approaches Across Test Generation and Formal Software Verification}, journal = {Softwaretechnik-Trends}, volume = {44}, number = {2}, pages = {66-67}, year = {2024}, publisher = {Gesellschaft f{\"u}r Informatik, Berlin}, pdf = {https://www.sosy-lab.org/research/pub/2024-SWT.Cooperatives_Approaches_Across_Test_Generation_and_Formal_Software_Verification.pdf}, keyword = {Software Model Checking, Cooperative Verification}, annote = {Summary of dissertation}, doinone = {DOI not available}, issn = {0720-8928}, month = {July}, }
    Additional Infos
    Summary of dissertation

2023

  1. Manfred Broy, Albrecht Schmidt, and Martin Wirsing. In memory of Heinrich Hussmann, long-time friend and SoSyM editor. Softw. Syst. Model., 22(2):453-454, 2023. doi:10.1007/s10270-023-01099-0 Link to this entry Publisher's Version PDF
    BibTeX Entry
    @article{NachrufHussmann, author = {Manfred Broy and Albrecht Schmidt and Martin Wirsing}, title = {In memory of Heinrich Hussmann, long-time friend and SoSyM editor}, journal = {Softw. Syst. Model.}, volume = {22}, number = {2}, pages = {453--454}, year = {2023}, doi = {10.1007/s10270-023-01099-0}, pdf = {https://sosy-lab.org/research/pub/2023-SoSyM.In_memory_of_Heinrich_Hussmann_long-time_friend_and_SoSyM_editor.pdf}, }

2022

  1. Dirk Beyer, Matthias Dangl, Daniel Dietsch, Matthias Heizmann, Thomas Lemberger, and Michael Tautschnig. Verification Witnesses. ACM Trans. Softw. Eng. Methodol., 31(4):57:1-57:69, 2022. doi:10.1145/3477579 Link to this entry Keyword(s): CPAchecker, Ultimate, Software Model Checking, Witness-Based Validation, Witness-Based Validation (main) Publisher's Version PDF Supplement
    BibTeX Entry
    @article{Witnesses-TOSEM, author = {Dirk Beyer and Matthias Dangl and Daniel Dietsch and Matthias Heizmann and Thomas Lemberger and Michael Tautschnig}, title = {Verification Witnesses}, journal = {ACM Trans. Softw. Eng. Methodol.}, volume = {31}, number = {4}, pages = {57:1-57:69}, year = {2022}, doi = {10.1145/3477579}, url = {https://www.sosy-lab.org/research/verification-witnesses-tosem/}, keyword = {CPAchecker,Ultimate,Software Model Checking,Witness-Based Validation,Witness-Based Validation (main)}, _pdf = {https://www.sosy-lab.org/research/pub/2022-TOSEM.Verification_Witnesses.pdf}, _sha256 = {48acf3f35251df635e829b29fe8f16fd50498f8f99a082b8b9e0aa094a97a432}, }
  2. Martin Wirsing and Alexander Knapp. A Reduction-based Cut-free Gentzen Calculus for Dynamic Epistemic Logic. Logic Journal of the IGPL, December 2022. doi:10.1093/jigpal/jzac078 Link to this entry Publisher's Version PDF
    Abstract
    Dynamic epistemic logic (DEL) is a multi-modal logic for reasoning about the change of knowledge in multi-agent systems. It extends epistemic logic by a modal operator for actions which announce logical formulas to other agents. In Hilbert-style proof calculi for DEL, modal action formulas are reduced to epistemic logic, whereas current sequent calculi for DEL are labelled systems which internalize the semantic accessibility relation of the modal operators, as well as the accessibility relation underlying the semantics of the actions. We present a novel cut-free ordinary sequent calculus, called G4_P,A[], for propositional DEL. In contrast to the known sequent calculi, our calculus does not internalize the accessibility relations, but—similar to Hilbert style proof calculi—action formulas are reduced to epistemic formulas. Since no ordinary sequent calculus for full S5 modal logic is known, the proof rules for the knowledge operator and the Boolean operators are those of an underlying S4 modal calculus. We show the soundness and completeness of G4_P,A[] and prove also the admissibility of the cut-rule and of several other rules for introducing the action modality.
    BibTeX Entry
    @article{WirsingJ22, author = {Martin Wirsing and Alexander Knapp}, title = {A Reduction-based Cut-free Gentzen Calculus for Dynamic Epistemic Logic}, journal = {Logic Journal of the IGPL}, year = {2022}, doi = {10.1093/jigpal/jzac078}, pdf = {https://sosy-lab.org/research/pub/2022-LogJIGPL.A_Reduction-based_Cut-free_Gentzen_Calculus_for_Dynamic_Epistemic_Logic.pdf}, abstract = {Dynamic epistemic logic (DEL) is a multi-modal logic for reasoning about the change of knowledge in multi-agent systems. It extends epistemic logic by a modal operator for actions which announce logical formulas to other agents. In Hilbert-style proof calculi for DEL, modal action formulas are reduced to epistemic logic, whereas current sequent calculi for DEL are labelled systems which internalize the semantic accessibility relation of the modal operators, as well as the accessibility relation underlying the semantics of the actions. We present a novel cut-free ordinary sequent calculus, called G4_P,A[], for propositional DEL. In contrast to the known sequent calculi, our calculus does not internalize the accessibility relations, but—similar to Hilbert style proof calculi—action formulas are reduced to epistemic formulas. Since no ordinary sequent calculus for full S5 modal logic is known, the proof rules for the knowledge operator and the Boolean operators are those of an underlying S4 modal calculus. We show the soundness and completeness of G4_P,A[] and prove also the admissibility of the cut-rule and of several other rules for introducing the action modality.}, issn = {1367-0751}, month = {12}, }
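    To illustrate the reduction idea mentioned in the abstract, these are the standard reduction axioms of public announcement logic, the simplest instance of DEL; the calculus in the article covers more general action formulas, so the equivalences below are only illustrative:
        \[
          [!\varphi]\,p \;\leftrightarrow\; (\varphi \rightarrow p) \qquad\qquad
          [!\varphi]\,\neg\psi \;\leftrightarrow\; (\varphi \rightarrow \neg[!\varphi]\psi)
        \]
        \[
          [!\varphi]\,(\psi \wedge \chi) \;\leftrightarrow\; ([!\varphi]\psi \wedge [!\varphi]\chi) \qquad\qquad
          [!\varphi]\,K_a\psi \;\leftrightarrow\; (\varphi \rightarrow K_a[!\varphi]\psi)
        \]
    Rewriting with these equivalences from left to right eliminates every announcement modality and reduces an announcement formula to a purely epistemic one.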

2021

  1. Dirk Beyer, Marieke Huisman, Fabrice Kordon, and Bernhard Steffen. TOOLympics II: Competitions on Formal Methods (Intro). International Journal on Software Tools for Technology Transfer (STTT), 23(6):879-881, 2021. doi:10.1007/s10009-021-00631-1 Link to this entry Publisher's Version PDF
    BibTeX Entry
    @article{Intro-TOOLympics2-STTT, author = {Dirk Beyer and Marieke Huisman and Fabrice Kordon and Bernhard Steffen}, title = {{TOOLympics II}: {Competitions} on Formal Methods (Intro)}, journal = {International Journal on Software Tools for Technology Transfer (STTT)}, volume = {23}, number = {6}, pages = {879-881}, year = {2021}, doi = {10.1007/s10009-021-00631-1}, sha256 = {efda377ea8f220771b05df7a9f9273bbcf6a692fa39f2a2304e7524408b8cdee}, url = {}, pdf = {}, presentation = {}, abstract = {}, keyword = {}, issn = {1433-2787}, }
  2. Dirk Beyer. First International Competition on Software Testing. International Journal on Software Tools for Technology Transfer (STTT), 23(6):833-846, 2021. doi:10.1007/s10009-021-00613-3 Link to this entry Keyword(s): Competition on Software Testing (Test-Comp), Competition on Software Testing (Test-Comp Report), Software Testing Funding: DFG-COOP Publisher's Version PDF Supplement
    BibTeX Entry
    @article{TestComp19-STTT, author = {Dirk Beyer}, title = {First International Competition on Software Testing}, journal = {International Journal on Software Tools for Technology Transfer (STTT)}, volume = {23}, number = {6}, pages = {833-846}, year = {2021}, doi = {10.1007/s10009-021-00613-3}, sha256 = {cd82a853fbbf65de7f95a9e7de4f36118bb35fb516db87421a0aa38ccc863031}, url = {https://www.sosy-lab.org/research/pub/2021-STTT.First_International_Competition_on_Software_Testing.pdf}, pdf = {}, presentation = {}, abstract = {}, keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing}, funding = {DFG-COOP}, issn = {1433-2787}, }
  3. Dirk Beyer and Marieke Huisman. TOOLympics I: Competition on Software Testing (Intro). International Journal on Software Tools for Technology Transfer (STTT), 23(6):829-832, 2021. doi:10.1007/s10009-021-00611-5 Link to this entry Publisher's Version PDF
    BibTeX Entry
    @article{Intro-TOOLympics1-STTT, author = {Dirk Beyer and Marieke Huisman}, title = {{TOOLympics I}: {Competition} on Software Testing (Intro)}, journal = {International Journal on Software Tools for Technology Transfer (STTT)}, volume = {23}, number = {6}, pages = {829-832}, year = {2021}, doi = {10.1007/s10009-021-00611-5}, sha256 = {ec828ad46ee494c8fb08b462df617320f2ebce70c6bee9330c2f0eb65ef5d757}, url = {}, pdf = {}, presentation = {}, abstract = {}, keyword = {}, issn = {1433-2787}, }
  4. Dirk Beyer and Marie-Christine Jakobs. Cooperative Verifier-Based Testing with CoVeriTest. International Journal on Software Tools for Technology Transfer (STTT), 23(3):313-333, 2021. doi:10.1007/s10009-020-00587-8 Link to this entry Keyword(s): CPAchecker, Software Model Checking, Software Testing Funding: DFG-COOP Publisher's Version PDF
    Abstract
    Testing is a widely applied technique to evaluate software quality, and coverage criteria are often used to assess the adequacy of a generated test suite. However, manually constructing an adequate test suite is typically too expensive, and numerous techniques for automatic test-suite generation were proposed. All of them come with different strengths. To build stronger test-generation tools, different techniques should be combined. In this paper, we study cooperative combinations of verification approaches for test generation, which exchange high-level information. We present CoVeriTest, a hybrid technique for test-suite generation. CoVeriTest iteratively applies different conditional model checkers and allows users to adjust the level of cooperation and to configure individual time limits for each conditional model checker. In our experiments, we systematically study different CoVeriTest cooperation setups, which either use combinations of explicit-state model checking and predicate abstraction, or bounded model checking and symbolic execution. A comparison with state-of-the-art test-generation tools reveals that CoVeriTest achieves higher coverage for many programs (about 15%).
    BibTeX Entry
    @article{CoVeriTest-STTT, author = {Dirk Beyer and Marie-Christine Jakobs}, title = {Cooperative Verifier-Based Testing with {CoVeriTest}}, journal = {International Journal on Software Tools for Technology Transfer (STTT)}, volume = {23}, number = {3}, pages = {313-333}, year = {2021}, doi = {10.1007/s10009-020-00587-8}, sha256 = {28a5bf6103296455728076e8c12902a53b3d377a296ea2ba18ac111c93330dbd}, url = {}, pdf = {}, presentation = {}, abstract = {Testing is a widely applied technique to evaluate software quality, and coverage criteria are often used to assess the adequacy of a generated test suite. However, manually constructing an adequate test suite is typically too expensive, and numerous techniques for automatic test-suite generation were proposed. All of them come with different strengths. To build stronger test-generation tools, different techniques should be combined. In this paper, we study cooperative combinations of verification approaches for test generation, which exchange high-level information. We present CoVeriTest, a hybrid technique for test-suite generation. CoVeriTest iteratively applies different conditional model checkers and allows users to adjust the level of cooperation and to configure individual time limits for each conditional model checker. In our experiments, we systematically study different CoVeriTest cooperation setups, which either use combinations of explicit-state model checking and predicate abstraction, or bounded model checking and symbolic execution. A comparison with state-of-the-art test-generation tools reveals that CoVeriTest achieves higher coverage for many programs (about 15%).}, keyword = {CPAchecker,Software Model Checking,Software Testing}, funding = {DFG-COOP}, issn = {1433-2787}, }
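    A minimal sketch of the round-robin cooperation scheme from the abstract, with hypothetical placeholder generators and a shared set of open coverage goals standing in for CoVeriTest's conditional model checkers and the conditions they exchange:
        # Simplified round-robin cooperation between two test generators (not CoVeriTest itself).
        # Each generator gets its own per-round time budget and only sees the coverage goals
        # that are still open, mimicking the exchange of conditions between the analyses.
        import random
        import time

        ALL_GOALS = set(range(20))                 # hypothetical coverage goals (e.g., branches)

        def value_analysis_generator(open_goals, budget):
            """Placeholder for a verifier-based test generator; returns the goals it covered."""
            deadline = time.monotonic() + budget
            covered = set()
            while time.monotonic() < deadline and open_goals - covered:
                covered.add(random.choice(sorted(open_goals - covered)))
            return covered

        def predicate_analysis_generator(open_goals, budget):
            """Second placeholder generator (here with the same naive strategy)."""
            return value_analysis_generator(open_goals, budget)

        def cooperative_test_generation(generators, budgets, rounds=5):
            covered = set()
            for _ in range(rounds):
                for generate, budget in zip(generators, budgets):
                    open_goals = ALL_GOALS - covered
                    if not open_goals:
                        return covered             # all goals covered, stop early
                    covered |= generate(open_goals, budget)
            return covered

        if __name__ == "__main__":
            covered = cooperative_test_generation(
                [value_analysis_generator, predicate_analysis_generator], budgets=[0.01, 0.01])
            print(f"covered {len(covered)} of {len(ALL_GOALS)} goals")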
  5. Gidon Ernst, Sean Sedwards, Zhenya Zhang, and Ichiro Hasuo. Falsification of Hybrid Systems using Adaptive Probabilistic Search. Transact. on Modeling and Comp. Simulations (TOMACS), 31(3):1-22, 2021. ACM. Link to this entry
    BibTeX Entry
    @article{ernst:tomacs2021, author = {Gidon Ernst and Sean Sedwards and Zhenya Zhang and Ichiro Hasuo}, title = {Falsification of Hybrid Systems using Adaptive Probabilistic Search}, journal = {Transact. on Modeling and Comp. Simulations (TOMACS)}, volume = {31}, number = {3}, pages = {1--22}, year = {2021}, publisher = {ACM}, }
  6. Martin Wirsing and Dieter Frey. Agile governance for innovating higher education teaching and learning. Rivista di Digital Politics, (3/2021):543-558, 2021. Società editrice il Mulino. doi:10.53227/103804 Link to this entry Publisher's Version PDF
    BibTeX Entry
    @article{WirsingF21, author = {Martin Wirsing and Dieter Frey}, title = {Agile governance for innovating higher education teaching and learning}, journal = {Rivista di Digital Politics}, number = {3/2021}, pages = {543--558}, year = {2021}, publisher = {Società editrice il Mulino}, doi = {10.53227/103804}, pdf = {https://sosy-lab.org/research/pub/2022-Rivista.Agile_governance_for_innovating_higher_education_teaching_and_learning.pdf}, issn = {2785-0072}, urlpub = {https://www.rivisteweb.it/doi/10.53227/103804}, }
  7. Lenz Belzner and Martin Wirsing. Synthesizing safe policies under probabilistic constraints with reinforcement learning and Bayesian model checking. Sci. Comput. Program., 206:102620, 2021. doi:10.1016/j.scico.2021.102620 Link to this entry Publisher's Version
    BibTeX Entry
    @article{DBLP:journals/scp/BelznerW21, author = {Lenz Belzner and Martin Wirsing}, title = {Synthesizing safe policies under probabilistic constraints with reinforcement learning and Bayesian model checking}, journal = {Sci. Comput. Program.}, volume = {206}, pages = {102620}, year = {2021}, doi = {10.1016/j.scico.2021.102620}, }
  8. Simon Bliudze, Panagiotis Katsaros, Saddek Bensalem, and Martin Wirsing. On methods and tools for rigorous system design. Int. J. Softw. Tools Technol. Transf., 23(5):679-684, 2021. doi:10.1007/s10009-021-00632-0 Link to this entry Publisher's Version PDF
    BibTeX Entry
    @article{DBLP:journals/sttt/BliudzeKBW21, author = {Simon Bliudze and Panagiotis Katsaros and Saddek Bensalem and Martin Wirsing}, title = {On methods and tools for rigorous system design}, journal = {Int. J. Softw. Tools Technol. Transf.}, volume = {23}, number = {5}, pages = {679--684}, year = {2021}, doi = {10.1007/s10009-021-00632-0}, }

2020

  1. Dirk Beyer and Marieke Huisman. Tools for the Construction and Analysis of Systems (Intro). International Journal on Software Tools for Technology Transfer (STTT), 22(6):685-687, 2020. doi:10.1007/s10009-020-00581-0 Link to this entry Publisher's Version PDF
    BibTeX Entry
    @article{Intro-TACAS18-STTT, author = {Dirk Beyer and Marieke Huisman}, title = {Tools for the Construction and Analysis of Systems (Intro)}, journal = {International Journal on Software Tools for Technology Transfer (STTT)}, volume = {22}, number = {6}, pages = {685-687}, year = {2020}, doi = {10.1007/s10009-020-00581-0}, sha256 = {467e6c9c70fd0c0728a8abed5d9162a1467a906e79d367a8a44dd102e8dc0de6}, url = {}, pdf = {}, presentation = {}, abstract = {}, keyword = {}, issn = {1433-2787}, }
  2. Dirk Beyer and Marieke Huisman. Selected and Extended Papers from TACAS 2018: Preface. Journal of Automated Reasoning, 64(7):1331-1332, 2020. doi:10.1007/s10817-020-09575-8 Link to this entry Publisher's Version PDF
    BibTeX Entry
    @article{Intro-TACAS18-JAR, author = {Dirk Beyer and Marieke Huisman}, title = {Selected and Extended Papers from {TACAS} 2018: Preface}, journal = {Journal of Automated Reasoning}, volume = {64}, number = {7}, pages = {1331-1332}, year = {2020}, doi = {10.1007/s10817-020-09575-8}, sha256 = {252d10dfb4cd4c93db83e431938420d0927be076e598c176db1ea85514a5ba0a}, url = {}, pdf = {}, presentation = {}, abstract = {}, keyword = {}, }
  3. Gidon Ernst. A Complete Approach to Loop Verification with Invariants and Summaries. arXiv preprint arXiv:2010.05812, 2020. Link to this entry
    BibTeX Entry
    @article{ernst:loops2020, author = {Gidon Ernst}, title = {A Complete Approach to Loop Verification with Invariants and Summaries}, journal = {arXiv preprint arXiv:2010.05812}, year = {2020}, }
  4. Thomas Lemberger. Plain random test generation with PRTest. International Journal on Software Tools for Technology Transfer (STTT), 2020. Springer. doi:10.1007/s10009-020-00568-x Link to this entry Keyword(s): Software Testing Publisher's Version PDF Presentation
    Abstract
    Automatic test-suite generation tools are often complex and their behavior is not predictable. To provide a minimum baseline that test-suite generators should be able to surpass, we present PRTest, a random black-box test-suite generator for C programs: To create a test, PRTest natively executes the program under test and creates a new, random test value whenever an input value is required. After execution, PRTest checks whether any new program branches were covered and, if this is the case, the created test is added to the test suite. This way, tests are rapidly created either until a crash is found, or until the user aborts the creation. While this naive mechanism is not competitive with more sophisticated, state-of-the-art test-suite generation tools, it is able to provide a good baseline for Test-Comp and a fast alternative for automatic test-suite generation for programs with simple control flow. PRTest is publicly available and open source.
    BibTeX Entry
    @article{PRTEST19, author = {Thomas Lemberger}, title = {Plain random test generation with {PRTest}}, journal = {International Journal on Software Tools for Technology Transfer (STTT)}, volume = {}, number = {}, pages = {}, year = {2020}, publisher = {Springer}, doi = {10.1007/s10009-020-00568-x}, sha256 = {2e5ae7091b6adb758c123dfe62d3fab57203f930883539beb20f6e91391ebc77}, pdf = {https://www.sosy-lab.org/research/pub/2020-STTT.Plain_random_test_generation_with_PRTest.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2019-04-06_TestComp19_PRTest_Thomas.pdf}, abstract = {Automatic test-suite generation tools are often complex and their behavior is not predictable. To provide a minimum baseline that test-suite generators should be able to surpass, we present PRTest, a random black-box test-suite generator for C programs: To create a test, PRTest natively executes the program under test and creates a new, random test value whenever an input value is required. After execution, PRTest checks whether any new program branches were covered and, if this is the case, the created test is added to the test suite. This way, tests are rapidly created either until a crash is found, or until the user aborts the creation. While this naive mechanism is not competitive with more sophisticated, state-of-the-art test-suite generation tools, it is able to provide a good baseline for Test-Comp and a fast alternative for automatic test-suite generation for programs with simple control flow. PRTest is publicly available and open source.}, keyword = {Software Testing}, annote = {Publication appeared first online in July 2020.<BR/> PRTest is available at <a href="https://gitlab.com/sosy-lab/software/prtest"> https://gitlab.com/sosy-lab/software/prtest</a>}, }
    Additional Infos
    Publication appeared first online in July 2020.
    PRTest is available at https://gitlab.com/sosy-lab/software/prtest
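    The generate-and-check loop described in the abstract can be illustrated with a few lines of Python. This toy version instruments a Python function and tracks branches explicitly; the actual PRTest executes C programs natively and measures real branch coverage:
        # Toy illustration of random black-box test generation with coverage feedback
        # (not the PRTest implementation; PRTest runs the C program under test natively).
        import random

        covered_branches = set()

        def program_under_test(x: int, y: int) -> None:
            """Toy program that records which of its branches were taken."""
            if x > 1000:
                covered_branches.add("x > 1000")
                if y == x - 42:
                    covered_branches.add("bug")
                    raise AssertionError("bug reached")   # models a specification violation
            else:
                covered_branches.add("x <= 1000")

        def generate_test_suite(max_tests: int = 100_000):
            test_suite = []
            for _ in range(max_tests):
                inputs = (random.randint(-10_000, 10_000), random.randint(-10_000, 10_000))
                before = len(covered_branches)
                try:
                    program_under_test(*inputs)
                except AssertionError:
                    test_suite.append(inputs)             # keep the crashing test and stop
                    break
                if len(covered_branches) > before:        # keep only tests that add new coverage
                    test_suite.append(inputs)
            return test_suite

        if __name__ == "__main__":
            suite = generate_test_suite()
            print("tests kept:", suite)
            print("covered branches:", sorted(covered_branches))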
  5. Rocco De Nicola, Stefan Jähnichen, and Martin Wirsing. Rigorous engineering of collective adaptive systems: special section. Int. J. Softw. Tools Technol. Transf., 22(4):389-397, 2020. doi:10.1007/s10009-020-00565-0 Link to this entry Publisher's Version PDF
    BibTeX Entry
    @article{DBLP:journals/sttt/NicolaJW20, author = {Rocco De Nicola and Stefan J{\"{a}}hnichen and Martin Wirsing}, title = {Rigorous engineering of collective adaptive systems: special section}, journal = {Int. J. Softw. Tools Technol. Transf.}, volume = {22}, number = {4}, pages = {389--397}, year = {2020}, doi = {10.1007/s10009-020-00565-0}, }

2019

  1. Dirk Beyer, Stefan Löwe, and Philipp Wendler. Reliable Benchmarking: Requirements and Solutions. International Journal on Software Tools for Technology Transfer (STTT), 21(1):1-29, 2019. doi:10.1007/s10009-017-0469-y Link to this entry Keyword(s): Benchmarking Publisher's Version PDF Presentation Supplement
    Abstract
    Benchmarking is a widely used method in experimental computer science, in particular, for the comparative evaluation of tools and algorithms. As a consequence, a number of questions need to be answered in order to ensure proper benchmarking, resource measurement, and presentation of results, all of which is essential for researchers, tool developers, and users, as well as for tool competitions. We identify a set of requirements that are indispensable for reliable benchmarking and resource measurement of time and memory usage of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework can (on Linux systems) currently only be done by using the cgroup and namespace features of the kernel. We developed BenchExec, a ready-to-use, tool-independent, and open-source implementation of a benchmarking framework that fulfills all presented requirements, making reliable benchmarking and resource measurement easy. Our framework is able to work with a wide range of different tools, has proven its reliability and usefulness in the International Competition on Software Verification, and is used by several research groups worldwide to ensure reliable benchmarking. Finally, we present guidelines on how to present measurement results in a scientifically valid and comprehensible way.
    BibTeX Entry
    @article{Benchmarking-STTT, author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler}, title = {Reliable Benchmarking: {R}equirements and Solutions}, journal = {International Journal on Software Tools for Technology Transfer (STTT)}, volume = {21}, number = {1}, pages = {1--29}, year = {2019}, doi = {10.1007/s10009-017-0469-y}, sha256 = {a50fbc212af394b32166d6354f986e7b1d5bc87220bdc50df899d6a46fedf33c}, url = {https://www.sosy-lab.org/research/benchmarking/}, presentation = {https://www.sosy-lab.org/research/prs/Latest_ReliableBenchmarking.pdf}, abstract = {Benchmarking is a widely used method in experimental computer science, in particular, for the comparative evaluation of tools and algorithms. As a consequence, a number of questions need to be answered in order to ensure proper benchmarking, resource measurement, and presentation of results, all of which is essential for researchers, tool developers, and users, as well as for tool competitions. We identify a set of requirements that are indispensable for reliable benchmarking and resource measurement of time and memory usage of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework can (on Linux systems) currently only be done by using the cgroup and namespace features of the kernel. We developed BenchExec, a ready-to-use, tool-independent, and open-source implementation of a benchmarking framework that fulfills all presented requirements, making reliable benchmarking and resource measurement easy. Our framework is able to work with a wide range of different tools, has proven its reliability and usefulness in the International Competition on Software Verification, and is used by several research groups worldwide to ensure reliable benchmarking. Finally, we present guidelines on how to present measurement results in a scientifically valid and comprehensible way.}, keyword = {Benchmarking}, _pdf = {https://www.sosy-lab.org/research/pub/2019-STTT.Reliable_Benchmarking_Requirements_and_Solutions.pdf}, annote = {Publication appeared first online in November 2017<BR/> BenchExec is available at: <a href="https://github.com/sosy-lab/benchexec"> https://github.com/sosy-lab/benchexec</a>}, }
    Additional Infos
    Publication appeared first online in November 2017
    BenchExec is available at: https://github.com/sosy-lab/benchexec
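    The abstract argues that reliable measurement on Linux requires the kernel's cgroup (and namespace) features. The following is a minimal sketch of reading cgroup-v2 accounting files for one run, assuming the benchmarked processes already live in the given cgroup and that the kernel provides cpu.stat and memory.peak; BenchExec itself does considerably more, for example isolation and limit enforcement:
        # Minimal sketch of cgroup-v2-based resource measurement (not BenchExec's API).
        # Assumes the benchmarked processes run inside the cgroup at CGROUP_PATH and that
        # the kernel exposes cpu.stat and memory.peak (memory.peak needs a recent kernel).
        from pathlib import Path

        CGROUP_PATH = Path("/sys/fs/cgroup/benchmark_run")    # hypothetical cgroup for one run

        def cpu_time_seconds() -> float:
            """Accumulated CPU time of all processes in the cgroup."""
            for line in (CGROUP_PATH / "cpu.stat").read_text().splitlines():
                key, value = line.split()
                if key == "usage_usec":
                    return int(value) / 1_000_000
            raise RuntimeError("usage_usec not found in cpu.stat")

        def peak_memory_bytes() -> int:
            """Peak memory usage recorded by the kernel for the cgroup."""
            return int((CGROUP_PATH / "memory.peak").read_text())

        if __name__ == "__main__":
            print(f"cpu time:    {cpu_time_seconds():.2f} s")
            print(f"peak memory: {peak_memory_bytes()} bytes")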

2018

  1. Dirk Beyer, Matthias Dangl, and Philipp Wendler. A Unifying View on SMT-Based Software Verification. Journal of Automated Reasoning, 60(3):299-335, 2018. doi:10.1007/s10817-017-9432-6 Link to this entry Keyword(s): CPAchecker, Software Model Checking Publisher's Version PDF Presentation Supplement
    Abstract
    After many years of successful development of new approaches for software verification, there is a need to consolidate the knowledge about the different abstract domains and algorithms. The goal of this paper is to provide a compact and accessible presentation of four SMT-based verification approaches in order to study them in theory and in practice. We present and compare the following different "schools of thought" of software verification: bounded model checking, k-induction, predicate abstraction, and lazy abstraction with interpolants. Those approaches are well-known and successful in software verification and have in common that they are based on SMT solving as the back-end technology. We reformulate all four approaches in the unifying theoretical framework of configurable program analysis and implement them in the verification framework CPAchecker. Based on this, we can present an evaluation that thoroughly compares the different approaches, where the core differences are expressed in configuration parameters and all other variables are kept constant (such as parser front end, SMT solver, used theory in SMT formulas). We evaluate the effectiveness and the efficiency of the approaches on a large set of verification tasks and discuss the conclusions.
    BibTeX Entry
    @article{AlgorithmComparison-JAR, author = {Dirk Beyer and Matthias Dangl and Philipp Wendler}, title = {A Unifying View on {SMT}-Based Software Verification}, journal = {Journal of Automated Reasoning}, volume = {60}, number = {3}, pages = {299--335}, year = {2018}, doi = {10.1007/s10817-017-9432-6}, sha256 = {5fab3eafacd7fef9c655afc9cd78bbb419ea47361a81633fb551fbf496875d84}, url = {https://www.sosy-lab.org/research/k-ind-compare/}, presentation = {https://www.sosy-lab.org/research/prs/Latest_UnifyingViewSmtBasedSoftwareVerification.pdf}, abstract = {After many years of successful development of new approaches for software verification, there is a need to consolidate the knowledge about the different abstract domains and algorithms. The goal of this paper is to provide a compact and accessible presentation of four SMT-based verification approaches in order to study them in theory and in practice. We present and compare the following different ``schools of thought'' of software verification: bounded model checking, k-induction, predicate abstraction, and lazy abstraction with interpolants. Those approaches are well-known and successful in software verification and have in common that they are based on SMT solving as the back-end technology. We reformulate all four approaches in the unifying theoretical framework of configurable program analysis and implement them in the verification framework CPAchecker. Based on this, we can present an evaluation that thoroughly compares the different approaches, where the core differences are expressed in configuration parameters and all other variables are kept constant (such as parser front end, SMT solver, used theory in SMT formulas). We evaluate the effectiveness and the efficiency of the approaches on a large set of verification tasks and discuss the conclusions.}, keyword = {CPAchecker,Software Model Checking}, _pdf = {https://www.sosy-lab.org/research/pub/2018-JAR.A_Unifying_View_on_SMT-Based_Software_Verification.pdf}, annote = {Publication appeared first online in December 2017<BR/> CPAchecker is available at: <a href="https://cpachecker.sosy-lab.org/"> https://cpachecker.sosy-lab.org/</a>}, issn = {1573-0670}, }
    Additional Infos
    Publication appeared first online in December 2017
    CPAchecker is available at: https://cpachecker.sosy-lab.org/
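    For orientation, two of the four unified approaches can be summarized by their textbook SMT queries, for a program with initial states I, transition relation T, and safety property P; the article expresses them in the configurable-program-analysis framework, so the notation there differs:
        \[
          \mathrm{BMC}(k):\quad
          I(s_0) \,\wedge\, \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \,\wedge\, \bigvee_{i=0}^{k} \neg P(s_i)
          \ \text{ is satisfiable} \;\Rightarrow\; \text{a property violation is reachable within } k \text{ steps.}
        \]
        \[
          k\text{-induction (step case):}\quad
          \bigwedge_{i=0}^{k-1} \bigl(P(s_i) \wedge T(s_i, s_{i+1})\bigr) \,\wedge\, \neg P(s_k)
          \ \text{ is unsatisfiable} \;\Rightarrow\; P \text{ is } k\text{-inductive.}
        \]
    Together with a bounded base case showing that no violation occurs within the first k steps, an unsatisfiable step query proves the property; predicate abstraction and lazy abstraction with interpolants replace the fixed unrolling by abstraction and refinement.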
  2. Yuyan Bao, Gary Leavens, and Gidon Ernst. Unifying separation logic and region logic to allow interoperability. Formal Aspects of Computing, 30(3--4):381-441, 2018. Springer. Link to this entry
    BibTeX Entry
    @article{ernst:bao2018, author = {Yuyan Bao and Gary Leavens and Gidon Ernst}, title = {Unifying separation logic and region logic to allow interoperability}, journal = {Formal Aspects of Computing}, volume = {30}, number = {3--4}, pages = {381--441}, year = {2018}, publisher = {Springer}, }
  3. Zhenya Zhang, Gidon Ernst, Sean Sedwards, Paolo Arcaini, and Ichiro Hasuo. Two-Layered Falsification of Hybrid Systems Guided by Monte Carlo Tree Search. Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 37(11):2894-2905, 2018. IEEE. Link to this entry Nominated for best paper PDF
    BibTeX Entry
    @article{ernst:emsoft2018, author = {Zhenya Zhang and Gidon Ernst and Sean Sedwards and Paolo Arcaini and Ichiro Hasuo}, title = {{Two-Layered Falsification of Hybrid Systems Guided by Monte Carlo Tree Search}}, journal = {Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD)}, volume = {37}, number = {11}, pages = {2894--2905}, year = {2018}, publisher = {IEEE}, pdf = {https://www.sosy-lab.org/research/pub/2018-TCAD.Two-Layered_Falsification_of_Hybrid_Systems_guided_by_Monte_Carlo_Tree_Search.pdf}, note = {Nominated for best paper}, }
  4. Gerhard Schellhorn, Gidon Ernst, Jörg Pfähler, Stefan Bodenmüller, and Wolfgang Reif. Symbolic execution for a clash-free subset of ASMs. Science of Computer Programming (SCP), 158:21-40, 2018. Elsevier. Link to this entry PDF
    BibTeX Entry
    @article{ernst:scp2017, author = {Gerhard Schellhorn and Gidon Ernst and Jörg Pfähler and Stefan Bodenmüller and Wolfgang Reif}, title = {{Symbolic execution for a clash-free subset of ASMs}}, journal = {Science of Computer Programming (SCP)}, volume = {158}, pages = {21--40}, year = {2018}, publisher = {Elsevier}, pdf = {https://www.sosy-lab.org/research/pub/2017-SCP.Symbolic_Execution_for_a_Clash-Free_Subset_of_ASMs.pdf}, }
  5. Dirk Beyer, Sumit Gulwani, and David Schmidt. Combining Model Checking and Data-Flow Analysis. In E. M. Clarke, T. A. Henzinger, H. Veith, and R. Bloem, editors, Handbook of Model Checking, pages 493-540, 2018. Springer. doi:10.1007/978-3-319-10575-8_16 Link to this entry Publisher's Version PDF
    BibTeX Entry
    @incollection{HBMC18, author = {Dirk Beyer and Sumit Gulwani and David Schmidt}, title = {Combining Model Checking and Data-Flow Analysis}, booktitle = {Handbook of Model Checking}, editor = {E.~M.~Clarke and T.~A.~Henzinger and H.~Veith and R.~Bloem}, pages = {493-540}, year = {2018}, publisher = {Springer}, isbn = {978-3-319-10574-1}, doi = {10.1007/978-3-319-10575-8_16}, sha256 = {}, pdf = {https://www.sosy-lab.org/research/pub/2018-HBMC.Combining_Model_Checking_and_Data-Flow_Analysis.pdf}, annote = {<a href="https://www.sosy-lab.org/research/pub/2018-HBMC.Combining_Model_Checking_and_Data-Flow_Analysis.Errata.txt"> Errata</a> available.}, }
    Additional Infos
    Errata available.
  6. Marie-Christine Jakobs. Spontane Sicherheitsprüfung mittels individualisierter Programmzertifizierung oder Programmrestrukturierung. In S. Hölldobler, editor, Ausgezeichnete Informatikdissertationen 2017, LNI, pages 91-100, 2018. Gesellschaft für Informatik (GI). Link to this entry Keyword(s): CPAchecker, Software Model Checking Publisher's Version PDF
    Abstract
    Correctly functioning software is becoming ever more important, and it is harder today than it used to be to judge how well a piece of software works, partly because end users increasingly install software from unknown vendors. End users should therefore actively convince themselves of a software's correctness, for example by means of an on-the-fly safety check. Common verification techniques for checking correctness are not an option for end users, who are typically laypersons. The central question is thus how to enable a layperson to perform such an on-the-fly safety check. The dissertation answers this question with simple, automatic, and general procedures for safety checking. Several such procedures are proposed and examined both theoretically and practically. The proposed procedures fall into two research directions, namely proof-carrying code and the alternative programs-from-proofs approach; some procedures combine both directions.
    BibTeX Entry
    @incollection{DissZusammenfassungJakobs, author = {Marie-Christine Jakobs}, title = {Spontane Sicherheitspr{\"{u}}fung mittels individualisierter Programmzertifizierung oder Programmrestrukturierung}, booktitle = {Ausgezeichnete Informatikdissertationen 2017}, editor = {S. H{\"{o}}lldobler}, volume = {{D-18}}, pages = {91-100}, year = {2018}, series = {{LNI}}, publisher = {Gesellschaft f{\"{u}}r Informatik ({GI})}, isbn = {978-3885799771}, pdf = {https://dl.gi.de/bitstream/handle/20.500.12116/19486/invited_paper_14.pdf?sequence=1&isAllowed=y}, abstract = {Korrekt funktionierende Software gewinnt immer mehr an Bedeutung. Im Vergleich zu fr&uuml;her ist es heutzutage schwieriger einzusch&auml;tzen, wie gut eine Software funktioniert. Dies liegt unter anderem daran, dass Endnutzer h&auml;ufiger Software unbekannter Hersteller installieren. Endnutzer sollten sich also aktiv von der Softwarekorrektheit &uuml;berzeugen, zum Beispiel in Form einer spontanen Sicherheitspr&uuml;fung. &Uuml;bliche Verifikationstechniken zur Korrektheitspr&uuml;fung kommen f&uuml;r Endnutzer, in der Regel Laien, nicht in Frage. Die zentrale Frage ist daher, wie man einem Laien eine solche spontane Sicherheitspr&uuml;fung erm&ouml;glicht. Die Antwort der Dissertation sind einfache, automatische und generelle Verfahren zur Sicherheitspr&uuml;fung. In der Dissertation werden verschiedene Verfahren vorgeschlagen und sowohl theoretisch als auch praktisch untersucht. Die vorgeschlagenen Verfahren lassen sich in zwei Forschungsrichtungen einsortieren, n&auml;mlich in die Gruppe der Proof-Carrying Code Verfahren bzw. in die Gruppe des alternativen Programs from Proofs Verfahren. Einige Verfahren kombinieren beide Forschungsrichtungen.}, keyword = {CPAchecker,Software Model Checking}, annote = {This is a German summary of the dissertation On-The-Fly Safety Checking - Customizing Program Certification and Program Restructuring.}, doifalse = {20.500.12116/19486}, urlpub = {https://dl.gi.de/handle/20.500.12116/19486}, }
    Additional Infos
    This is a German summary of the dissertation On-The-Fly Safety Checking - Customizing Program Certification and Program Restructuring.
  7. Philipp Wendler. Beiträge zu praktikabler Prädikatenanalyse. In S. Hölldobler, editor, Ausgezeichnete Informatikdissertationen 2017, LNI, pages 261-270, 2018. Gesellschaft für Informatik (GI). Link to this entry Keyword(s): Benchmarking, CPAchecker, Software Model Checking Publisher's Version PDF Presentation Supplement
    Abstract
    The state of research in automatic software verification is fragmented: different approaches exist side by side, in different presentations and with little relation to one another, and meaningful comparisons are rare. The dissertation addresses this problem. A configurable and flexible framework for unifying such approaches is developed, and several existing approaches are expressed in this framework. This yields new insights into the core ideas of these approaches, enables experimental studies of a new quality, and facilitates research on combinations and refinements of these approaches. The implementation of this framework in the successful verifier CPAchecker is evaluated in the largest experimental study of this kind to date (120 different configurations, 671280 executions). To this end, a benchmarking system is presented that uses modern technologies to avoid significant qualitative measurement errors of existing systems.
    BibTeX Entry
    @incollection{DissZusammenfassungWendler, author = {Philipp Wendler}, title = {Beitr{\"{a}}ge zu praktikabler Pr{\"{a}}dikatenanalyse}, booktitle = {Ausgezeichnete Informatikdissertationen 2017}, editor = {S. H{\"{o}}lldobler}, volume = {{D-18}}, pages = {261-270}, year = {2018}, series = {{LNI}}, publisher = {Gesellschaft f{\"{u}}r Informatik ({GI})}, isbn = {978-3885799771}, url = {https://www.sosy-lab.org/research/phd/wendler/}, pdf = {https://dl.gi.de/bitstream/handle/20.500.12116/19476/invited_paper_4.pdf?sequence=1&isAllowed=y}, presentation = {https://www.sosy-lab.org/research/prs/2018-05-08_GiDiss_BeitraegeZuPraktikablerPraedikatenanalyse.pdf}, abstract = {Der Stand der Forschung im Bereich der automatischen Software-Verifikation ist fragmentiert. Verschiedene Verfahren existieren nebeneinander in unterschiedlichen Darstellungen und mit wenig Bezug zueinander, aussagekr&auml;ftige Vergleiche sind selten. Die Dissertation adressiert dieses Problem. Ein konfigurierbares und flexibles Rahmenwerk zur Vereinheitlichung solcher Verfahren wird entwickelt und mehrere vorhandene Verfahren werden in diesem Rahmenwerk ausgedr&uuml;ckt. Dies bringt neue Erkenntnisse &uuml;ber die Kernideen dieser Verfahren, erm&ouml;glicht experimentelle Studien in einer neuartigen Qualit&auml;t, und erleichtert die Forschung an Kombinationen und Weiterentwicklungen dieser Verfahren. Die Implementierung dieses Rahmenwerks im erfolgreichen Verifizierer CPAchecker wird in der bisher gr&ouml;&szlig;ten derartigen experimentellen Studie (120 verschiedene Konfigurationen, 671280 Ausf&uuml;hrungen) evaluiert. Hierzu wird ein Benchmarking-System pr&auml;sentiert, das mit Hilfe moderner Technologien signifikante qualitative Messfehler existierender Systeme vermeidet.}, keyword = {Benchmarking,CPAchecker,Software Model Checking}, annote = {This is a German summary of the dissertation <a href="https://www.sosy-lab.org/research/bib/Year/2017.complete.html#PhilippPredicateAnalysis">Towards Practical Predicate Analysis</a>.}, doifalse = {20.500.12116/19476}, urlpub = {https://dl.gi.de/handle/20.500.12116/19476}, }
    Additional Infos
    This is a German summary of the dissertation Towards Practical Predicate Analysis.

2017

  1. Dirk Beyer, Rolf Hennicker, Martin Hofmann, Tobias Nipkow, and Martin Wirsing. Software-Verifikation. In A. Bode, M. Broy, H.-J. Bungartz, and F. Matthes, editors, 50 Jahre Universitäts-Informatik in München, pages 75-86, 2017. Springer. doi:10.1007/978-3-662-54712-0_5 Link to this entry Keyword(s): Software Model Checking Publisher's Version PDF
    BibTeX Entry
    @incollection{InfMUC17, author = {Dirk Beyer and Rolf Hennicker and Martin Hofmann and Tobias Nipkow and Martin Wirsing}, title = {Software-Verifikation}, booktitle = {50 Jahre Universit{\"a}ts-Informatik in M{\"u}nchen}, editor = {A.~Bode and M.~Broy and H.-J.~Bungartz and F.~Matthes}, pages = {75-86}, year = {2017}, publisher = {Springer}, isbn = {978-3-662-54711-3}, doi = {10.1007/978-3-662-54712-0_5}, sha256 = {}, pdf = {https://www.sosy-lab.org/research/pub/2017-50JahreInfMUC.Software-Verifikation.pdf}, keyword = {Software Model Checking}, }

2016

  1. Gidon Ernst, Jörg Pfähler, Gerhard Schellhorn, and Wolfgang Reif. Modular, crash-safe refinement for ASMs with submachines. Science of Computer Programming (SCP), 131:3-21, 2016. Elsevier. Link to this entry PDF
    BibTeX Entry
    @article{ernst:scp2016, author = {Gidon Ernst and Jörg Pfähler and Gerhard Schellhorn and Wolfgang Reif}, title = {{Modular, crash-safe refinement for ASMs with submachines}}, journal = {Science of Computer Programming (SCP)}, volume = {131}, pages = {3--21}, year = {2016}, publisher = {Elsevier}, pdf = {https://www.sosy-lab.org/research/pub/2016-SCP.Modular_Crash-Safe_Refinement_for_ASMs_with_Submachines.pdf}, }

2015

  1. Gidon Ernst, Gerhard Schellhorn, and Wolfgang Reif. Verification of B+ trees by integration of shape analysis and interactive theorem proving. Software & Systems Modeling (SoSyM), 14(1):27-44, 2015. Springer. Link to this entry
    BibTeX Entry
    @article{ernst:sosym2015, author = {Gidon Ernst and Gerhard Schellhorn and Wolfgang Reif}, title = {{Verification of B+ trees by integration of shape analysis and interactive theorem proving}}, journal = {Software \& Systems Modeling (SoSyM)}, volume = {14}, number = {1}, pages = {27--44}, year = {2015}, publisher = {Springer}, }
  2. Gidon Ernst, Jörg Pfähler, Gerhard Schellhorn, Dominik Haneberg, and Wolfgang Reif. KIV-Overview and VerifyThis competition. Software Tools for Technology Transfer (STTT), 17(6):677-694, 2015. Springer. Link to this entry
    BibTeX Entry
    @article{ernst:sttt2015, author = {Gidon Ernst and Jörg Pfähler and Gerhard Schellhorn and Dominik Haneberg and Wolfgang Reif}, title = {{KIV---Overview and VerifyThis competition}}, journal = {Software Tools for Technology Transfer (STTT)}, volume = {17}, number = {6}, pages = {677--694}, year = {2015}, publisher = {Springer}, }

2014

  1. Dirk Beyer and Andreas Stahlbauer. BDD-Based Software Verification: Applications to Event-Condition-Action Systems. International Journal on Software Tools for Technology Transfer (STTT), 16(5):507-518, 2014. doi:10.1007/s10009-014-0334-1 Link to this entry Keyword(s): CPAchecker, Software Model Checking Publisher's Version PDF Supplement
    BibTeX Entry
    @article{STTT14-BDD, author = {Dirk Beyer and Andreas Stahlbauer}, title = {{BDD}-Based Software Verification: Applications to Event-Condition-Action Systems}, journal = {International Journal on Software Tools for Technology Transfer (STTT)}, volume = {16}, number = {5}, pages = {507--518}, year = {2014}, doi = {10.1007/s10009-014-0334-1}, sha256 = {}, url = {https://doi.org/10.1007/s10009-014-0334-1}, pdf = {https://www.sosy-lab.org/research/pub/2014-STTT.BDD-Based_Software_Verification.pdf}, keyword = {CPAchecker,Software Model Checking}, }
  2. Falk Howar, Malte Isberner, Maik Merten, Bernhard Steffen, Dirk Beyer, and Corina S. Pasareanu. Rigorous examination of reactive systems: The RERS challenges 2012 and 2013. International Journal on Software Tools for Technology Transfer (STTT), 16(5):457-464, 2014. doi:10.1007/s10009-014-0337-y Link to this entry Publisher's Version PDF Supplement
    BibTeX Entry
    @article{STTT14-Intro, author = {Falk Howar and Malte Isberner and Maik Merten and Bernhard Steffen and Dirk Beyer and Corina S. Pasareanu}, title = {Rigorous examination of reactive systems: The {RERS} challenges 2012 and 2013}, journal = {International Journal on Software Tools for Technology Transfer (STTT)}, volume = {16}, number = {5}, pages = {457--464}, year = {2014}, doi = {10.1007/s10009-014-0337-y}, sha256 = {}, url = {https://doi.org/10.1007/s10009-014-0337-y}, pdf = {https://www.sosy-lab.org/research/pub/2014-STTT.Rigorous_Examination_of_Reactive_Systems.pdf}, }
  3. Dirk Beyer, Marieke Huisman, Vladimir Klebanov, and Rosemary Monahan. Evaluating Software Verification Systems: Benchmarks and Competitions (Dagstuhl Reports 14171). Dagstuhl Reports, 4(4):1-19, 2014. doi:10.4230/DagRep.4.4.1 Link to this entry Publisher's Version Supplement
    BibTeX Entry
    @article{Dagstuhl14, author = {Dirk Beyer and Marieke Huisman and Vladimir Klebanov and Rosemary Monahan}, title = {Evaluating Software Verification Systems: Benchmarks and Competitions (Dagstuhl Reports 14171)}, journal = {Dagstuhl Reports}, volume = {4}, number = {4}, pages = {1-19}, year = {2014}, doi = {10.4230/DagRep.4.4.1}, sha256 = {}, url = {https://doi.org/10.4230/DagRep.4.4.1}, }
  4. Bogdan Tofan, Gerhard Schellhorn, Gidon Ernst, Jörg Pfähler, and Wolfgang Reif. Compositional verification of a lock-free stack with RGITL. Electronic Communications of the Automated Verification of Critical Systems (EASST), 66, 2014. Link to this entry
    BibTeX Entry
    @article{ernst:east2014, author = {Bogdan Tofan and Gerhard Schellhorn and Gidon Ernst and Jörg Pfähler and Wolfgang Reif}, title = {{Compositional verification of a lock-free stack with RGITL}}, journal = {Electronic Communications of the Automated Verification of Critical Systems (EASST)}, volume = {66}, year = {2014}, }
  5. Gerhard Schellhorn, Bogdan Tofan, Gidon Ernst, Jörg Pfähler, and Wolfgang Reif. RGITL: A temporal logic framework for compositional reasoning about interleaved programs. Annals of Mathematics and Artificial Intelligence (AMAI), 71:1-44, 2014. Springer. Link to this entry
    BibTeX Entry
    @article{ernst:amai2014, author = {Gerhard Schellhorn and Bogdan Tofan and Gidon Ernst and Jörg Pfähler and Wolfgang Reif}, title = {{RGITL: A temporal logic framework for compositional reasoning about interleaved programs}}, journal = {Annals of Mathematics and Artificial Intelligence (AMAI)}, volume = {71}, pages = {1--44}, year = {2014}, publisher = {Springer}, issue = {1--3}, }

2013

  1. Gidon Ernst and Axel Habermaier. Garantiert fehlerfrei! Mechatroniknews, February 2013. Cluster Mechatronik & Automation e.V. Link to this entry
    BibTeX Entry
    @article{ernst:mechatronik2013, author = {Gidon Ernst and Axel Habermaier}, title = {Garantiert fehlerfrei!}, journal = {Mechatroniknews}, year = {2013}, publisher = {Cluster Mechatronik \& Automation e.V.}, month = {Februar}, }

2007

  1. Dirk Beyer, Thomas A. Henzinger, Ranjit Jhala, and Rupak Majumdar. The Software Model Checker Blast: Applications to Software Engineering. International Journal on Software Tools for Technology Transfer (STTT), 9(5-6):505-525, 2007. doi:10.1007/s10009-007-0044-z Link to this entry Invited to special issue of selected papers from FASE 2004/05 Keyword(s): Software Model Checking Publisher's Version PDF Presentation
    Abstract
    BLAST is an automatic verification tool for checking temporal safety properties of C programs. Given a C program and a temporal safety property, BLAST either statically proves that the program satisfies the safety property, or provides an execution path that exhibits a violation of the property (or, since the problem is undecidable, does not terminate). BLAST constructs, explores, and refines abstractions of the program state space based on lazy predicate abstraction and interpolation-based predicate discovery. This paper gives an introduction to BLAST and demonstrates, through two case studies, how it can be applied to program verification and test-case generation. In the first case study, we use BLAST to statically prove memory safety for C programs. We use CCured, a type-based memory-safety analyzer, to annotate a program with run-time assertions that check for safe memory operations. Then, we use BLAST to remove as many of the run-time checks as possible (by proving that these checks never fail), and to generate execution scenarios that violate the assertions for the remaining run-time checks. In our second case study, we use BLAST to automatically generate test suites that guarantee full coverage with respect to a given predicate. Given a C program and a target predicate p, BLAST determines the program locations q for which there exists a program execution that reaches q with p true, and automatically generates a set of test vectors that cause such executions. Our experiments show that BLAST can provide automated, precise, and scalable analysis for C programs.
    BibTeX Entry
    @article{STTT07, author = {Dirk Beyer and Thomas A. Henzinger and Ranjit Jhala and Rupak Majumdar}, title = {The Software Model Checker {{\sc Blast}}: Applications to Software Engineering}, journal = {International Journal on Software Tools for Technology Transfer (STTT)}, volume = {9}, number = {5-6}, pages = {505-525}, year = {2007}, doi = {10.1007/s10009-007-0044-z}, url = {}, pdf = {https://www.sosy-lab.org/research/pub/2007-STTT.The_Software_Model_Checker_BLAST.pdf}, presentation = {https://www.sosy-lab.org/research/prs/2008-05-08_EPFL_BLAST_Dirk.pdf}, abstract = {BLAST is an automatic verification tool for checking temporal safety properties of C programs. Given a C program and a temporal safety property, BLAST either statically proves that the program satisfies the safety property, or provides an execution path that exhibits a violation of the property (or, since the problem is undecidable, does not terminate). BLAST constructs, explores, and refines abstractions of the program state space based on lazy predicate abstraction and interpolation-based predicate discovery. This paper gives an introduction to BLAST and demonstrates, through two case studies, how it can be applied to program verification and test-case generation. In the first case study, we use BLAST to statically prove memory safety for C programs. We use CCured, a type-based memory-safety analyzer, to annotate a program with run-time assertions that check for safe memory operations. Then, we use BLAST to remove as many of the run-time checks as possible (by proving that these checks never fail), and to generate execution scenarios that violate the assertions for the remaining run-time checks. In our second case study, we use BLAST to automatically generate test suites that guarantee full coverage with respect to a given predicate. Given a C program and a target predicate p, BLAST determines the program locations q for which there exists a program execution that reaches q with p true, and automatically generates a set of test vectors that cause such executions. Our experiments show that BLAST can provide automated, precise, and scalable analysis for C programs.}, keyword = {Software Model Checking}, annote = {BLAST is available at: <a href="http://www.sosy-lab.org/~dbeyer/Blast/"> http://www.sosy-lab.org/~dbeyer/Blast</a>}, note = {Invited to special issue of selected papers from FASE 2004/05}, }
    Additional Infos
    BLAST is available at: http://www.sosy-lab.org/~dbeyer/Blast
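    Illustrative Sketch
    The abstract above describes an abstract-check-refine loop based on lazy predicate abstraction and interpolation-based predicate discovery. The following Java sketch outlines that loop in heavily simplified form; every interface and method name here is a hypothetical placeholder chosen for illustration and is not part of BLAST.

    import java.util.Optional;

    // Hypothetical placeholder types; a real implementation would carry
    // predicates, abstract reachability trees, and concrete error paths.
    interface Predicates { }
    interface ErrorPath { }

    interface Abstractor {
      // Build an abstraction of the program w.r.t. the current predicates and
      // return an abstract error path if the abstraction cannot prove safety.
      Optional<ErrorPath> findAbstractErrorPath(String program, Predicates predicates);
    }

    interface Refiner {
      // If the abstract path is infeasible (spurious), return refined predicates,
      // e.g., discovered via interpolation; otherwise return empty.
      Optional<Predicates> refine(ErrorPath path, Predicates predicates);
    }

    final class AbstractCheckRefine {
      enum Verdict { SAFE, BUG_FOUND }

      static Verdict verify(String program, Predicates predicates,
                            Abstractor abstractor, Refiner refiner) {
        while (true) {                                     // may not terminate (undecidable)
          Optional<ErrorPath> path =
              abstractor.findAbstractErrorPath(program, predicates);
          if (path.isEmpty()) {
            return Verdict.SAFE;                           // abstraction proves the property
          }
          Optional<Predicates> refined = refiner.refine(path.get(), predicates);
          if (refined.isEmpty()) {
            return Verdict.BUG_FOUND;                      // feasible path: real counterexample
          }
          predicates = refined.get();                      // spurious: refine and iterate
        }
      }
    }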

2005

  1. Dirk Beyer, Andreas Noack, and Claus Lewerentz. Efficient Relational Calculation for Software Analysis. IEEE Transactions on Software Engineering (TSE), 31(2):137-149, 2005. doi:10.1109/TSE.2005.23 Link to this entry Invited to special issue of selected papers from WCRE 2003 Keyword(s): Structural Analysis and Comprehension Publisher's Version PDF
    Abstract
    Calculating with graphs and relations has many applications in the analysis of software systems, for example, the detection of design patterns or patterns of problematic design and the computation of design metrics. These applications require an expressive query language, in particular, for the detection of graph patterns, and an efficient evaluation of the queries even for large graphs. In this paper, we introduce RML, a simple language for querying and manipulating relations based on predicate calculus, and CrocoPat, an interpreter for RML programs. RML is general because it enables the manipulation not only of graphs (i.e., binary relations), but of relations of arbitrary arity. CrocoPat executes RML programs efficiently because it internally represents relations as binary decision diagrams, a data structure that is well-known as a compact representation of large relations in computer-aided verification. We evaluate RML by giving example programs for several software analyses and CrocoPat by comparing its performance with calculators for binary relations, a Prolog system, and a relational database management system.
    BibTeX Entry
    @article{TSE05, author = {Dirk Beyer and Andreas Noack and Claus Lewerentz}, title = {Efficient Relational Calculation for Software Analysis}, journal = {IEEE Transactions on Software Engineering (TSE)}, volume = {31}, number = {2}, pages = {137-149}, year = {2005}, doi = {10.1109/TSE.2005.23}, sha256 = {}, pdf = {https://www.sosy-lab.org/research/pub/2005-TSE.Efficient_Relational_Calculation_for_Software_Analysis.pdf}, abstract = {Calculating with graphs and relations has many applications in the analysis of software systems, for example, the detection of design patterns or patterns of problematic design and the computation of design metrics. These applications require an expressive query language, in particular, for the detection of graph patterns, and an efficient evaluation of the queries even for large graphs. In this paper, we introduce RML, a simple language for querying and manipulating relations based on predicate calculus, and CrocoPat, an interpreter for RML programs. RML is general because it enables the manipulation not only of graphs (i.e., binary relations), but of relations of arbitrary arity. CrocoPat executes RML programs efficiently because it internally represents relations as binary decision diagrams, a data structure that is well-known as a compact representation of large relations in computer-aided verification. We evaluate RML by giving example programs for several software analyses and CrocoPat by comparing its performance with calculators for binary relations, a Prolog system, and a relational database management system.}, keyword = {Structural Analysis and Comprehension}, annote = {Also available as postprint at the eScholarship Repository, University of California: <BR> <a href="http://repositories.cdlib.org/postprints/687"> http://repositories.cdlib.org/postprints/687</a> <BR> CrocoPat is available at: <a href="http://www.sosy-lab.org/~dbeyer/CrocoPat/"> http://www.sosy-lab.org/~dbeyer/CrocoPat</a>}, note = {Invited to special issue of selected papers from WCRE 2003}, }
    Additional Infos
    Also available as postprint at the eScholarship Repository, University of California:
    http://repositories.cdlib.org/postprints/687
    CrocoPat is available at: http://www.sosy-lab.org/~dbeyer/CrocoPat
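    Illustrative Sketch
    To make "calculating with relations" concrete, the following self-contained Java sketch computes the transitive closure of a call relation and reports modules that lie on a dependency cycle, one of the pattern-detection tasks mentioned in the abstract. It is only an illustration using plain Java sets; CrocoPat itself evaluates RML programs over relations represented as binary decision diagrams.

    import java.util.HashSet;
    import java.util.Set;

    final class RelationDemo {
      // A tuple of a binary relation, e.g., "from calls to".
      record Pair(String from, String to) { }

      // Transitive closure of a binary relation via a simple fixpoint iteration.
      static Set<Pair> transitiveClosure(Set<Pair> relation) {
        Set<Pair> closure = new HashSet<>(relation);
        boolean changed = true;
        while (changed) {
          Set<Pair> derived = new HashSet<>();
          for (Pair p : closure) {
            for (Pair q : closure) {
              if (p.to().equals(q.from())) {
                derived.add(new Pair(p.from(), q.to()));
              }
            }
          }
          changed = closure.addAll(derived);   // stop when nothing new is derived
        }
        return closure;
      }

      public static void main(String[] args) {
        Set<Pair> calls = Set.of(
            new Pair("A", "B"), new Pair("B", "C"), new Pair("C", "A"));
        // A module is in a dependency cycle if it reaches itself transitively.
        transitiveClosure(calls).stream()
            .filter(p -> p.from().equals(p.to()))
            .map(Pair::from)
            .distinct()
            .sorted()
            .forEach(m -> System.out.println("cyclic: " + m));
      }
    }

    Running this example prints the modules A, B, and C, each of which reaches itself in the transitive closure of the call relation.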

2003

  1. Dirk Beyer. Formale Verifikation von Realzeit-Systemen mittels Cottbus Timed Automata (Zusammenfassung). Softwaretechnik-Trends, 23(2):4, May 2003. Gesellschaft für Informatik, Berlin. Link to this entry
    BibTeX Entry
    @article{SW-Trends03, author = {Dirk Beyer}, title = {Formale Verifikation von Realzeit-Systemen mittels Cottbus Timed Automata (Zusammenfassung)}, journal = {Softwaretechnik-Trends}, volume = {23}, number = {2}, pages = {4}, year = {2003}, publisher = {Gesellschaft f{\"u}r Informatik, Berlin}, annote = {Summary of dissertation}, doinone = {DOI not available}, issn = {0720-8928}, month = {May}, }
    Additional Infos
    Summary of dissertation

Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

Last modified: Mon Nov 18 19:35:54 2024 UTC