Publications funded by DFG-COOP
Articles in journal or book chapters
-
Dirk Beyer and Thomas Lemberger.
Six Years Later: Testing vs. Model Checking.
International Journal on Software Tools for Technology Transfer (STTT),
2024.
Keyword(s):
Benchmarking,
Competition on Software Verification (SV-COMP),
Competition on Software Testing (Test-Comp),
Software Model Checking,
Software Testing
Funding:
DFG-COOP
PDF
Artifact(s)
Abstract
Six years ago, we performed the first large-scale comparison of automated test generators and software model checkers with respect to bug-finding capabilities on a benchmark set with 5 693 C programs. Since then, the International Competition on Software Testing (Test-Comp) has established standardized formats and community-agreed rules for the experimental comparison of test generators. With this new context, it is time to revisit our initial question: Model checkers or test generators—which tools are more effective in finding bugs in software? To answer this, we perform a comparative analysis on the tools and existing data published by two competitions, the International Competition on Software Verification (SV-COMP) and Test-Comp. The results provide two insights: (1) Almost all test generators that participate in Test-Comp use hybrid approaches that include formal methods, and (2) while the considered model checkers are still highly competitive, they are now outperformed by the bug-finding capabilities of the considered test generators.
BibTeX Entry
@article{TestStudy-STTT,
  author = {Dirk Beyer and Thomas Lemberger},
  title = {Six Years Later: Testing vs. Model Checking},
  journal = {International Journal on Software Tools for Technology Transfer (STTT)},
  volume = {},
  number = {},
  pages = {},
  year = {2024},
  doi = {},
  url = {},
  pdf = {https://www.sosy-lab.org/research/pub/2024-STTT.Six_Years_Later_Testing_vs_Model_Checking.pdf},
  presentation = {},
  abstract = {Six years ago, we performed the first large-scale comparison of automated test generators and software model checkers with respect to bug-finding capabilities on a benchmark set with 5 693 C programs. Since then, the International Competition on Software Testing (Test-Comp) has established standardized formats and community-agreed rules for the experimental comparison of test generators. With this new context, it is time to revisit our initial question: Model checkers or test generators—which tools are more effective in finding bugs in software? To answer this, we perform a comparative analysis on the tools and existing data published by two competitions, the International Competition on Software Verification (SV-COMP) and Test-Comp. The results provide two insights: (1) Almost all test generators that participate in Test-Comp use hybrid approaches that include formal methods, and (2) while the considered model checkers are still highly competitive, they are now outperformed by the bug-finding capabilities of the considered test generators.},
  keyword = {Benchmarking, Competition on Software Verification (SV-COMP), Competition on Software Testing (Test-Comp), Software Model Checking, Software Testing},
  artifact = {10.5281/zenodo.10232648},
  funding = {DFG-COOP},
}
-
Dirk Beyer, Matthias Kettl, and Thomas Lemberger.
Decomposing Software Verification using Distributed Summary Synthesis.
Proc. ACM Softw. Eng., 1(FSE),
2024.
ACM.
doi:10.1145/3660766
Keyword(s):
CPAchecker,
Software Model Checking
Funding:
DFG-CONVEY,
DFG-COOP
Publisher's Version
PDF
Supplement
Artifact(s)
Abstract
There are many approaches for automated software verification, but they are either imprecise, do not scale well to large systems, or do not sufficiently leverage parallelization. This hinders the integration of software model checking into the development process (continuous integration). We propose an approach to decompose one large verification task into multiple smaller, connected verification tasks, based on blocks in the program control flow. For each block, summaries are computed — based on independent, distributed, continuous refinement by communication between the blocks. The approach iteratively synthesizes preconditions to assume at the block entry (which program states reach this block) and verification conditions to check at the block exit (which program states lead to a specification violation). This separation of concerns leads to an architecture in which all blocks can be analyzed in parallel, as independent verification problems. Whenever new information (as a precondition or verification condition) is available from other blocks, the verification can decide to restart with this new information. We realize our approach as an extension of the configurable-program-analysis algorithm and implement it for the verification of C programs in the widely used verifier CPAchecker. A large experimental evaluation shows the potential of our new approach: The distribution of the workload to several processing units works well and there is a significant reduction of the response time when using multiple processing units. There are even cases in which the new approach beats the highly-tuned, existing single-threaded predicate abstraction.
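The abstract describes the coordination scheme only at a high level; the following toy Python sketch shows the basic shape of such an architecture, with one worker per block that restarts its analysis whenever a neighbor publishes new information. All names and message contents are illustrative assumptions, not CPAchecker's implementation.

import queue
import threading
import time

class BlockWorker(threading.Thread):
    """One worker per program block; all workers run in parallel."""

    def __init__(self, name, inbox, outboxes):
        super().__init__()
        self.name = name
        self.inbox = inbox        # new information from neighboring blocks
        self.outboxes = outboxes  # channels to the successors of this block
        self.seen = set()         # already-processed information (fixpoint check)

    def analyze(self, info):
        # Placeholder for the real block analysis: compute a summary of
        # this block under the received information. Here: a dummy string.
        return f"{self.name}({info})"

    def run(self):
        while True:
            message = self.inbox.get()       # wait for new information
            if message is None:              # sentinel: shut down
                return
            if message in self.seen:
                continue                     # nothing new: keep current summary
            self.seen.add(message)
            summary = self.analyze(message)  # restart analysis with new info
            for out in self.outboxes:        # publish the new summary
                out.put(summary)

# Demo: two blocks in sequence (program entry -> A -> B).
a_in, b_in = queue.Queue(), queue.Queue()
workers = [BlockWorker("A", a_in, [b_in]), BlockWorker("B", b_in, [])]
for w in workers:
    w.start()
a_in.put("true")     # initial knowledge at the program entry
time.sleep(0.1)      # crude synchronization, good enough for a demo
for q in (a_in, b_in):
    q.put(None)      # stop the workers
for w in workers:
    w.join()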
BibTeX Entry
@article{DSS-PACMSE,
  author = {Dirk Beyer and Matthias Kettl and Thomas Lemberger},
  title = {Decomposing Software Verification using Distributed Summary Synthesis},
  journal = {Proc. ACM Softw. Eng.},
  volume = {1},
  number = {FSE},
  year = {2024},
  publisher = {ACM},
  doi = {10.1145/3660766},
  url = {https://www.sosy-lab.org/research/distributed-summary-synthesis/},
  presentation = {},
  abstract = {There are many approaches for automated software verification, but they are either imprecise, do not scale well to large systems, or do not sufficiently leverage parallelization. This hinders the integration of software model checking into the development process (continuous integration). We propose an approach to decompose one large verification task into multiple smaller, connected verification tasks, based on blocks in the program control flow. For each block, summaries are computed — based on independent, distributed, continuous refinement by communication between the blocks. The approach iteratively synthesizes preconditions to assume at the block entry (which program states reach this block) and verification conditions to check at the block exit (which program states lead to a specification violation). This separation of concerns leads to an architecture in which all blocks can be analyzed in parallel, as independent verification problems. Whenever new information (as a precondition or verification condition) is available from other blocks, the verification can decide to restart with this new information. We realize our approach as an extension of the configurable-program-analysis algorithm and implement it for the verification of C programs in the widely used verifier CPAchecker. A large experimental evaluation shows the potential of our new approach: The distribution of the workload to several processing units works well and there is a significant reduction of the response time when using multiple processing units. There are even cases in which the new approach beats the highly-tuned, existing single-threaded predicate abstraction.},
  keyword = {CPAchecker,Software Model Checking},
  _pdf = {https://www.sosy-lab.org/research/pub/2024-FSE.Decomposing_Software_Verification_using_Distributed_Summary_Synthesis.pdf},
  articleno = {90},
  artifact = {10.5281/zenodo.11095864},
  funding = {DFG-CONVEY,DFG-COOP},
  issue_date = {July 2024},
  numpages = {23},
}
-
Jan Haltermann, Marie-Christine Jakobs, Cedric Richter, and Heike Wehrheim.
Parallel program analysis on path ranges.
Science of Computer Programming, 238,
2024.
doi:10.1016/j.scico.2024.103154
Keyword(s):
Ranged Program Analysis,
Cooperative Verification,
Software Model Checking,
CPAchecker
Funding:
DFG-COOP
Publisher's Version
Artifact(s)
BibTeX Entry
@article{RangedPA-journal,
  author = {Jan Haltermann and Marie{-}Christine Jakobs and Cedric Richter and Heike Wehrheim},
  title = {Parallel program analysis on path ranges},
  journal = {Science of Computer Programming},
  volume = {238},
  year = {2024},
  doi = {10.1016/j.scico.2024.103154},
  url = {},
  pdf = {},
  keyword = {Ranged Program Analysis, Cooperative Verification, Software Model Checking, CPAchecker},
  artifact = {10.5281/zenodo.8398988},
  funding = {DFG-COOP},
}
-
Dirk Beyer.
First International Competition on Software Testing.
International Journal on Software Tools for Technology Transfer (STTT), 23(6):833-846,
2021.
doi:10.1007/s10009-021-00613-3
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@article{TestComp19-STTT,
  author = {Dirk Beyer},
  title = {First International Competition on Software Testing},
  journal = {International Journal on Software Tools for Technology Transfer (STTT)},
  volume = {23},
  number = {6},
  pages = {833-846},
  year = {2021},
  doi = {10.1007/s10009-021-00613-3},
  sha256 = {cd82a853fbbf65de7f95a9e7de4f36118bb35fb516db87421a0aa38ccc863031},
  url = {https://www.sosy-lab.org/research/pub/2021-STTT.First_International_Competition_on_Software_Testing.pdf},
  pdf = {},
  presentation = {},
  abstract = {},
  keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing},
  funding = {DFG-COOP},
  issn = {1433-2787},
}
-
Dirk Beyer and Marie-Christine Jakobs.
Cooperative Verifier-Based Testing with CoVeriTest.
International Journal on Software Tools for Technology Transfer (STTT), 23(3):313-333,
2021.
doi:10.1007/s10009-020-00587-8
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Abstract
Testing is a widely applied technique to evaluate software quality, and coverage criteria are often used to assess the adequacy of a generated test suite. However, manually constructing an adequate test suite is typically too expensive, and numerous techniques for automatic test-suite generation were proposed. All of them come with different strengths. To build stronger test-generation tools, different techniques should be combined. In this paper, we study cooperative combinations of verification approaches for test generation, which exchange high-level information. We present CoVeriTest, a hybrid technique for test-suite generation. CoVeriTest iteratively applies different conditional model checkers and allows users to adjust the level of cooperation and to configure individual time limits for each conditional model checker. In our experiments, we systematically study different CoVeriTest cooperation setups, which either use combinations of explicit-state model checking and predicate abstraction, or bounded model checking and symbolic execution. A comparison with state-of-the-art test-generation tools reveals that CoVeriTest achieves higher coverage for many programs (about 15%).
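As a rough illustration of the iterative cooperation described above, the Python sketch below rotates through a list of test generators with individual time limits and hands the still-uncovered goals from one round to the next. The function names and data shapes are assumptions for illustration, not the CPAchecker-based implementation.

def coveritest(generators, goals, rounds=10):
    """Sketch of a CoVeriTest-style rotation (illustrative only).

    generators: list of (generate, time_limit) pairs, where
      generate(goals, time_limit) returns (tests, covered_goals).
    goals: the coverage goals to achieve (e.g., branches).
    """
    suite = []
    remaining = set(goals)
    for _ in range(rounds):
        if not remaining:
            break                      # every goal is covered
        for generate, time_limit in generators:
            tests, covered = generate(remaining, time_limit)
            suite.extend(tests)        # accumulate one joint test suite
            remaining -= covered       # cooperation: pass on the progress
    return suite, remaining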
BibTeX Entry
@article{CoVeriTest-STTT,
  author = {Dirk Beyer and Marie-Christine Jakobs},
  title = {Cooperative Verifier-Based Testing with {CoVeriTest}},
  journal = {International Journal on Software Tools for Technology Transfer (STTT)},
  volume = {23},
  number = {3},
  pages = {313-333},
  year = {2021},
  doi = {10.1007/s10009-020-00587-8},
  sha256 = {28a5bf6103296455728076e8c12902a53b3d377a296ea2ba18ac111c93330dbd},
  url = {},
  pdf = {},
  presentation = {},
  abstract = {Testing is a widely applied technique to evaluate software quality, and coverage criteria are often used to assess the adequacy of a generated test suite. However, manually constructing an adequate test suite is typically too expensive, and numerous techniques for automatic test-suite generation were proposed. All of them come with different strengths. To build stronger test-generation tools, different techniques should be combined. In this paper, we study cooperative combinations of verification approaches for test generation, which exchange high-level information. We present CoVeriTest, a hybrid technique for test-suite generation. CoVeriTest iteratively applies different conditional model checkers and allows users to adjust the level of cooperation and to configure individual time limits for each conditional model checker. In our experiments, we systematically study different CoVeriTest cooperation setups, which either use combinations of explicit-state model checking and predicate abstraction, or bounded model checking and symbolic execution. A comparison with state-of-the-art test-generation tools reveals that CoVeriTest achieves higher coverage for many programs (about 15%).},
  keyword = {CPAchecker,Software Model Checking,Software Testing},
  funding = {DFG-COOP},
  issn = {1433-2787},
}
Articles in conference or workshop proceedings
-
Dirk Beyer, Po-Chun Chien, and Marek Jankola.
BenchCloud: A Platform for Scalable Performance Benchmarking.
In Proc. ASE,
2024.
ACM.
doi:10.1145/3691620.3695358
Keyword(s):
Benchmarking,
Competition on Software Verification (SV-COMP),
Competition on Software Testing (Test-Comp)
Funding:
DFG-CONVEY,
DFG-COOP
Publisher's Version
PDF
Video
Supplement
Artifact(s)
Abstract
Performance evaluation is a crucial method for assessing automated-reasoning tools. Evaluating automated tools requires rigorous benchmarking to accurately measure resource consumption, including time and memory, which are essential for understanding the tools' capabilities. BenchExec, a widely used benchmarking framework, reliably measures resource usage for tools executed locally on a single node. This paper describes BenchCloud, a solution for elastic and scalable job distribution across hundreds of nodes, enabling large-scale experiments on distributed and heterogeneous computing environments. BenchCloud seamlessly integrates with BenchExec, allowing BenchExec to delegate the actual execution to BenchCloud. The system has been employed in several prominent international competitions in automated reasoning, including SMT-COMP, SV-COMP, and Test-Comp, underscoring its importance in rigorous tool evaluation across various research domains. It helps to ensure both internal and external validity of the experimental results. This paper presents an overview of BenchCloud's architecture and highlights its primary use cases in facilitating scalable benchmarking.
Demonstration video: https://youtu.be/aBfQytqPm0U
Running system: https://benchcloud.sosy-lab.org/
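The paper itself describes the architecture; the snippet below is only a toy Python sketch of the core idea, a coordinator that distributes benchmark runs over a pool of nodes. The node protocol is an assumption for illustration, not BenchCloud's actual interface.

from concurrent.futures import ThreadPoolExecutor

def execute_on_node(node, run):
    # Placeholder: a real system would ship the tool and task to `node`
    # and reliably measure time and memory there (e.g., with BenchExec).
    return {"node": node, "run": run, "status": "done"}

def distribute(runs, nodes):
    """Assign each run to a node, round-robin, and collect the results."""
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        futures = [pool.submit(execute_on_node, nodes[i % len(nodes)], run)
                   for i, run in enumerate(runs)]
        return [future.result() for future in futures]

print(distribute(["task-1", "task-2", "task-3"], ["node-a", "node-b"]))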
BibTeX Entry
@inproceedings{ASE24a,
  author = {Dirk Beyer and Po-Chun Chien and Marek Jankola},
  title = {{BenchCloud}: {A} Platform for Scalable Performance Benchmarking},
  booktitle = {Proc.\ ASE},
  pages = {},
  year = {2024},
  series = {},
  publisher = {ACM},
  doi = {10.1145/3691620.3695358},
  url = {https://benchcloud.sosy-lab.org/},
  pdf = {https://www.sosy-lab.org/research/pub/2024-ASE24.BenchCloud_A_Platform_for_Scalable_Performance_Benchmarking.pdf},
  presentation = {},
  abstract = {Performance evaluation is a crucial method for assessing automated-reasoning tools. Evaluating automated tools requires rigorous benchmarking to accurately measure resource consumption, including time and memory, which are essential for understanding the tools' capabilities. BenchExec, a widely used benchmarking framework, reliably measures resource usage for tools executed locally on a single node. This paper describes BenchCloud, a solution for elastic and scalable job distribution across hundreds of nodes, enabling large-scale experiments on distributed and heterogeneous computing environments. BenchCloud seamlessly integrates with BenchExec, allowing BenchExec to delegate the actual execution to BenchCloud. The system has been employed in several prominent international competitions in automated reasoning, including SMT-COMP, SV-COMP, and Test-Comp, underscoring its importance in rigorous tool evaluation across various research domains. It helps to ensure both internal and external validity of the experimental results. This paper presents an overview of BenchCloud's architecture and highlights its primary use cases in facilitating scalable benchmarking. <br> Demonstration video: <a href="https://youtu.be/aBfQytqPm0U">https://youtu.be/aBfQytqPm0U</a> <br> Running system: <a href="https://benchcloud.sosy-lab.org/">https://benchcloud.sosy-lab.org/</a>},
  keyword = {Benchmarking, Competition on Software Verification (SV-COMP), Competition on Software Testing (Test-Comp)},
  artifact = {10.5281/zenodo.13742756},
  funding = {DFG-CONVEY, DFG-COOP},
  video = {https://youtu.be/aBfQytqPm0U},
}
-
Daniel Baier, Dirk Beyer, Po-Chun Chien, Marie-Christine Jakobs, Marek Jankola, Matthias Kettl, Nian-Ze Lee, Thomas Lemberger, Marian Lingsch-Rosenfeld, Henrik Wachowitz, and Philipp Wendler.
Software Verification with CPAchecker 3.0: Tutorial and User Guide.
In Proceedings of the 26th International Symposium on Formal Methods (FM 2024, Milan, Italy, September 9-13),
LNCS 14934,
pages 543-570,
2024.
Springer.
doi:10.1007/978-3-031-71177-0_30
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Funding:
DFG-COOP,
DFG-CONVEY,
DFG-IDEFIX
Publisher's Version
PDF
Presentation
Supplement
Artifact(s)
Abstract
This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, k-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. An extended version also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at https://cpachecker.sosy-lab.org/doc.php.
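For readers who want a concrete starting point, a minimal invocation from a Python script could look like the sketch below. The executable name and option names are assumptions based on CPAchecker version 3.0 and should be checked against the user guide linked above.

import subprocess

# Hedged sketch: run CPAchecker's default analysis on a C program.
# Executable and option names are assumptions; consult
# https://cpachecker.sosy-lab.org/doc.php for the authoritative usage.
result = subprocess.run(
    ["cpachecker", "--default",
     "--spec", "config/properties/unreach-call.prp",
     "program.c"],
    capture_output=True, text=True)
print(result.stdout)  # the verdict (TRUE, FALSE, or UNKNOWN) is reported here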
BibTeX Entry
@inproceedings{FM24a,
  author = {Daniel Baier and Dirk Beyer and Po-Chun Chien and Marie-Christine Jakobs and Marek Jankola and Matthias Kettl and Nian-Ze Lee and Thomas Lemberger and Marian Lingsch-Rosenfeld and Henrik Wachowitz and Philipp Wendler},
  title = {Software Verification with {CPAchecker} 3.0: {Tutorial} and User Guide},
  booktitle = {Proceedings of the 26th International Symposium on Formal Methods (FM~2024, Milan, Italy, September 9-13)},
  pages = {543-570},
  year = {2024},
  series = {LNCS~14934},
  publisher = {Springer},
  doi = {10.1007/978-3-031-71177-0_30},
  url = {https://cpachecker.sosy-lab.org},
  pdf = {https://www.sosy-lab.org/research/pub/2024-FM.Software_Verification_with_CPAchecker_3.0_Tutorial_and_User_Guide.pdf},
  presentation = {https://www.sosy-lab.org/research/prs/2024-09-10_FM24_CPAchecker_Tutorial.pdf},
  abstract = {This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, <i>k</i>-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. An extended version also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at <a href="https://cpachecker.sosy-lab.org/doc.php">https://cpachecker.sosy-lab.org/doc.php</a>.},
  keyword = {CPAchecker, Software Model Checking, Software Testing},
  annote = {An <a href="https://www.sosy-lab.org/research/bib/All/index.html#TechReport24c">extended version</a> of this article is available on <a href="https://doi.org/10.48550/arXiv.2409.02094">arXiv</a>.},
  artifact = {10.5281/zenodo.13612338},
  funding = {DFG-COOP, DFG-CONVEY, DFG-IDEFIX},
}
Additional Infos
An extended version of this article is available on arXiv.
-
Dirk Beyer, Matthias Kettl, and Thomas Lemberger.
Fault Localization on Verification Witnesses.
In Proceedings of the 30th International Symposium on Model Checking Software (SPIN 2024, Luxembourg City, Luxembourg, April 10-11),
LNCS 14624,
pages 205-224,
2024.
Springer.
doi:10.1007/978-3-031-66149-5_12
Keyword(s):
Software Model Checking,
Witness-Based Validation,
CPAchecker
Funding:
DFG-CONVEY,
DFG-IDEFIX,
DFG-COOP
Publisher's Version
PDF
Artifact(s)
Abstract
When verifiers report an alarm, they export a violation witness (exchangeable counterexample) that helps validate the reachability of that alarm. Conventional wisdom says that this violation witness should be very precise: the ideal witness describes a single error path for the validator to check. But we claim that verifiers overshoot and produce large witnesses with information that makes validation unnecessarily difficult. To check our hypothesis, we reduce violation witnesses to that information that automated fault-localization approaches deem relevant for triggering the reported alarm in the program. We perform a large experimental evaluation on the witnesses produced in the International Competition on Software Verification (SV-COMP 2023). It shows that our reduction shrinks the witnesses considerably and enables the confirmation of verification results that were not confirmable before.
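The following Python fragment sketches the reduction idea on an assumed, simplified witness representation (a list of edges with source lines and value assumptions); it is not the data model or code used in the paper.

def reduce_witness(witness_edges, relevant_lines):
    """Keep only the restrictions that fault localization marked relevant.

    witness_edges: list of dicts with keys 'line' and 'assumption'.
    relevant_lines: program lines deemed responsible for the alarm.
    """
    reduced = []
    for edge in witness_edges:
        if edge["line"] in relevant_lines:
            reduced.append(edge)  # relevant: keep the precise assumption
        else:
            # Irrelevant detail: drop the assumption, keep the structure.
            reduced.append({**edge, "assumption": None})
    return reduced

witness = [{"line": 3, "assumption": "x == 1"},
           {"line": 7, "assumption": "y == 0"}]
print(reduce_witness(witness, relevant_lines={7}))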
BibTeX Entry
@inproceedings{SPIN24b,
  author = {Dirk Beyer and Matthias Kettl and Thomas Lemberger},
  title = {Fault Localization on Verification Witnesses},
  booktitle = {Proceedings of the 30th International Symposium on Model Checking Software (SPIN~2024, Luxembourg City, Luxembourg, April 10-11)},
  pages = {205-224},
  year = {2024},
  series = {LNCS~14624},
  publisher = {Springer},
  doi = {10.1007/978-3-031-66149-5_12},
  pdf = {https://sosy-lab.org/research/pub/2024-SPIN.Fault_Localization_on_Verification_Witnesses.pdf},
  abstract = {When verifiers report an alarm, they export a violation witness (exchangeable counterexample) that helps validate the reachability of that alarm. Conventional wisdom says that this violation witness should be very precise: the ideal witness describes a single error path for the validator to check. But we claim that verifiers overshoot and produce large witnesses with information that makes validation unnecessarily difficult. To check our hypothesis, we reduce violation witnesses to that information that automated fault-localization approaches deem relevant for triggering the reported alarm in the program. We perform a large experimental evaluation on the witnesses produced in the International Competition on Software Verification (SV-COMP 2023). It shows that our reduction shrinks the witnesses considerably and enables the confirmation of verification results that were not confirmable before.},
  keyword = {Software Model Checking, Witness-Based Validation, CPAchecker},
  annote = {Nominated for best paper.<br> This work was also presented with a poster at the 46th International Conference on Software Engineering (ICSE 2024, Lisbon, Portugal, April 14-20): <a href="https://sosy-lab.org/research/pst/2024-03-05_ICSE24_Fault_Localization_on_Verification_Witnesses_Poster.pdf">Extended Abstract</a>.},
  artifact = {10.5281/zenodo.10794627},
  funding = {DFG-CONVEY,DFG-IDEFIX,DFG-COOP},
}
Additional Infos
Nominated for best paper.
This work was also presented with a poster at the 46th International Conference on Software Engineering (ICSE 2024, Lisbon, Portugal, April 14-20): Extended Abstract.
-
Dirk Beyer, Sudeep Kanav, and Henrik Wachowitz.
CoVeriTeam Service: Verification as a Service.
In Proc. ICSE,
pages 21-25,
2023.
IEEE.
doi:10.1109/ICSE-Companion58688.2023.00017
Keyword(s):
Software Model Checking,
Incremental Verification,
Cooperative Verification
Funding:
DFG-CONVEY,
DFG-COOP
Publisher's Version
PDF
Supplement
Artifact(s)
Abstract
The research community has developed numerous tools for solving verification problems, but we are missing a common interface for executing them. This means users have to spend considerable effort on the installation and parameter setup for each new tool (version) they want to execute. This situation could make a verification researcher who wants to experiment with a new verification tool turn away from it. We aim to make it easier for users to execute verification tools, as well as provide a mechanism for tool developers to make their tools easily accessible. Our solution combines a web service and a common interface for verification tools. The presented service has been used during the 2023 competitions on software verification and testing, for integration testing. As another use case, we developed a service for incremental verification on top of the CoVeriTeam Service and demonstrate its use in a continuous-integration process.
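To make "verification as a service" concrete, the hypothetical request below shows what submitting a verification job to such a web service could look like in Python. The endpoint URL and payload fields are invented for illustration and are not the documented CoVeriTeam Service API; see the supplement link for the real service.

import json
import urllib.request

# Hypothetical request shape; endpoint and field names are assumptions,
# not the documented CoVeriTeam Service API.
payload = {
    "verifier": "cpachecker",               # assumed field: tool to execute
    "program": "int main() { return 0; }",  # assumed field: task to verify
    "specification": "unreach-call",        # assumed field: property
}
request = urllib.request.Request(
    "https://example.org/verify",           # placeholder endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"})
with urllib.request.urlopen(request) as response:
    print(json.load(response))              # verdict and produced artifacts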
BibTeX Entry
@inproceedings{ICSE23,
  author = {Dirk Beyer and Sudeep Kanav and Henrik Wachowitz},
  title = {{CoVeriTeam Service}: {Verification} as a Service},
  booktitle = {Proc.\ ICSE},
  pages = {21-25},
  year = {2023},
  publisher = {IEEE},
  doi = {10.1109/ICSE-Companion58688.2023.00017},
  url = {https://coveriteam-service.sosy-lab.org/static/index.html},
  pdf = {https://www.sosy-lab.org/research/pub/2023-ICSE.CoVeriTeam_Service_Verification_as_a_Service.pdf},
  abstract = {The research community has developed numerous tools for solving verification problems, but we are missing a common interface for executing them. This means users have to spend considerable effort on the installation and parameter setup for each new tool (version) they want to execute. The situation could make a verification researcher wanting to experiment with a new verification tool turn away from it. We aim to make it easier for users to execute verification tools, as well as provide a mechanism for tool developers to make their tools easily accessible. Our solution combines a web service and a common interface for verification tools. The presented service has been used during the 2023 competitions on software verification and testing, for integration testing. As another use case, we developed a service for incremental verification on top of the {CoVeriTeam} Service and demonstrate its use in a continuous-integration process.},
  keyword = {Software Model Checking,Incremental Verification,Cooperative Verification},
  _sha256 = {604dd391b6a49e46e97b6faafbb3cc331ccf5c04e3d364cf1e76a2c99c1c267f},
  artifact = {10.5281/zenodo.7276532},
  funding = {DFG-CONVEY,DFG-COOP},
}
-
Dirk Beyer.
Software Testing: 5th Comparative Evaluation: Test-Comp 2023.
In L. Lambers and
S. Uchitel, editors,
Proceedings of the 26th International Conference on
Fundamental Approaches to Software Engineering
(FASE 2023, Paris, France, April 22-27),
LNCS 13991,
pages 309-323,
2023.
Springer.
doi:10.1007/978-3-031-30826-0_17
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{FASE23,
  author = {Dirk Beyer},
  title = {Software Testing: 5th Comparative Evaluation: {Test-Comp 2023}},
  booktitle = {Proceedings of the 26th International Conference on Fundamental Approaches to Software Engineering (FASE~2023, Paris, France, April 22-27)},
  editor = {L. Lambers and S. Uchitel},
  pages = {309--323},
  year = {2023},
  series = {LNCS~13991},
  publisher = {Springer},
  isbn = {},
  doi = {10.1007/978-3-031-30826-0_17},
  sha256 = {7110c26bf3c9311f84346a108a59318687bdadde4879f83d047f1a0fc546b630},
  url = {https://test-comp.sosy-lab.org/2023/},
  keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing},
  _pdf = {https://www.sosy-lab.org/research/pub/2023-FASE.Software_Testing_5th_Comparative_Evaluation_Test-Comp_2023.pdf},
  funding = {DFG-COOP},
}
-
Dirk Beyer.
Competition on Software Verification and Witness Validation: SV-COMP 2023.
In S. Sankaranarayanan and
N. Sharygina, editors,
Proceedings of the 29th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems
(TACAS 2023, Paris, France, April 22-27),
LNCS 13994,
pages 495-522,
2023.
Springer.
doi:10.1007/978-3-031-30820-8_29
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{TACAS23b,
  author = {Dirk Beyer},
  title = {Competition on Software Verification and Witness Validation: {SV-COMP 2023}},
  booktitle = {Proceedings of the 29th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2023, Paris, France, April 22-27)},
  editor = {S. Sankaranarayanan and N. Sharygina},
  pages = {495--522},
  year = {2023},
  series = {LNCS~13994},
  publisher = {Springer},
  doi = {10.1007/978-3-031-30820-8_29},
  sha256 = {1d35ae38d4e87c267ccc34cba880994b6f6a7927491ec13ba3cc548a29e81e5c},
  url = {https://sv-comp.sosy-lab.org/2023/},
  keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking},
  _pdf = {https://www.sosy-lab.org/research/pub/2023-TACAS.Competition_on_Software_Verification_and_Witness_Validation_SV-COMP_2023.pdf},
  funding = {DFG-COOP},
}
-
Jan Haltermann, Marie-Christine Jakobs, Cedric Richter, and Heike Wehrheim.
Parallel Program Analysis via Range Splitting.
In Proc. FASE,
LNCS 13991,
pages 195-219,
2023.
Springer.
doi:10.1007/978-3-031-30826-0_11
Keyword(s):
Ranged Program Analysis,
Cooperative Verification,
Software Model Checking,
CPAchecker
Funding:
DFG-COOP
Publisher's Version
PDF
BibTeX Entry
@inproceedings{RangedPA-CPA,
  author = {Jan Haltermann and Marie{-}Christine Jakobs and Cedric Richter and Heike Wehrheim},
  title = {Parallel Program Analysis via Range Splitting},
  booktitle = {Proc.\ {FASE}},
  pages = {195--219},
  year = {2023},
  series = {LNCS~13991},
  publisher = {Springer},
  doi = {10.1007/978-3-031-30826-0_11},
  url = {},
  pdf = {},
  keyword = {Ranged Program Analysis, Cooperative Verification, Software Model Checking, CPAchecker},
  annote = {},
  artifact = {},
  funding = {DFG-COOP},
}
-
Dirk Beyer, Jan Haltermann, Thomas Lemberger, and Heike Wehrheim.
Decomposing Software Verification into Off-the-Shelf Components: An Application to CEGAR.
In Proceedings of the 44th International Conference on
Software Engineering (ICSE 2022, Pittsburgh, PA, USA, May 8-20 (Virtual), May 22-27 (In-Person)),
pages 536-548,
2022.
ACM.
doi:10.1145/3510003.3510064
Keyword(s):
CPAchecker,
Software Model Checking,
Interfaces for Component-Based Design
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
Artifact(s)
Abstract
Techniques for software verification are typically realized as cohesive units of software with tightly coupled components. This makes it difficult to re-use components, and the potential for workload distribution is limited. Innovations in software verification might find their way into practice faster if provided in smaller, more specialized components. In this paper, we propose to strictly decompose software verification: the verification task is split into independent subtasks, implemented by only loosely coupled components communicating via clearly defined interfaces. We apply this decomposition concept to one of the most frequently employed techniques in software verification: counterexample-guided abstraction refinement (CEGAR). CEGAR is a technique to iteratively compute an abstract model of the system. We develop a decomposition of CEGAR into independent components with clearly defined interfaces that are based on existing, standardized exchange formats. Its realization, component-based CEGAR (C-CEGAR), concerns the three core tasks of CEGAR: abstract-model exploration, feasibility check, and precision refinement. We experimentally show that, despite the necessity of exchanging complex data via interfaces, the efficiency thereby only reduces by a small constant factor while the precision in solving verification tasks even increases. We furthermore illustrate the advantages of C-CEGAR by experimenting with different implementations of components, thereby further increasing the overall effectiveness and testing that substitution of components works well.
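The classic CEGAR loop that the paper decomposes can be written down compactly; the Python sketch below shows the three core tasks as exchangeable components. The function signatures are illustrative assumptions, standing in for the standardized exchange formats the paper defines.

def cegar(program, spec, explore, is_feasible, refine):
    """CEGAR with three loosely coupled components (illustrative sketch).

    explore:     abstract-model exploration under the current precision
    is_feasible: feasibility check of an abstract counterexample
    refine:      precision refinement from a spurious counterexample
    """
    precision = set()  # the abstraction starts fully coarse
    while True:
        verdict, counterexample = explore(program, spec, precision)
        if verdict == "safe":
            return "TRUE"       # the abstract model contains no error path
        if is_feasible(program, counterexample):
            return "FALSE"      # the counterexample is a real bug
        # Spurious counterexample: refine the precision and iterate.
        precision |= refine(counterexample)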
BibTeX Entry
@inproceedings{ICSE22,
  author = {Dirk Beyer and Jan Haltermann and Thomas Lemberger and Heike Wehrheim},
  title = {Decomposing Software Verification into Off-the-Shelf Components: An Application to {CEGAR}},
  booktitle = {Proceedings of the 44th International Conference on Software Engineering (ICSE~2022, Pittsburgh, PA, USA, May 8-20 (Virtual), May 22-27 (In-Person))},
  pages = {536-548},
  year = {2022},
  publisher = {ACM},
  doi = {10.1145/3510003.3510064},
  url = {https://www.sosy-lab.org/research/component-based-cegar/},
  abstract = {Techniques for software verification are typically realized as cohesive units of software with tightly coupled components. This makes it difficult to re-use components, and the potential for workload distribution is limited. Innovations in software verification might find their way into practice faster if provided in smaller, more specialized components. In this paper, we propose to strictly decompose software verification: the verification task is split into independent subtasks, implemented by only loosely coupled components communicating via clearly defined interfaces. We apply this decomposition concept to one of the most frequently employed techniques in software verification: counterexample-guided abstraction refinement (CEGAR). CEGAR is a technique to iteratively compute an abstract model of the system. We develop a decomposition of CEGAR into independent components with clearly defined interfaces that are based on existing, standardized exchange formats. Its realization component-based CEGAR (C-CEGAR) concerns the three core tasks of CEGAR: abstract-model exploration, feasibility check, and precision refinement. We experimentally show that --- despite the necessity of exchanging complex data via interfaces --- the efficiency thereby only reduces by a small constant factor while the precision in solving verification tasks even increases. We furthermore illustrate the advantages of C-CEGAR by experimenting with different implementations of components, thereby further increasing the overall effectiveness and testing that substitution of components works well.},
  keyword = {CPAchecker,Software Model Checking,Interfaces for Component-Based Design},
  _pdf = {https://www.sosy-lab.org/research/pub/2022-ICSE.Decomposing_Software_Verification_into_Off-the-Shelf-Components.pdf},
  _sha256 = {be1c5d744475af00f5a0cddd51d92353296d1d8e5ba60f5439ba5b98217e0e03},
  artifact = {10.5281/zenodo.5301636},
  funding = {DFG-COOP},
}
-
Dirk Beyer.
Progress on Software Verification: SV-COMP 2022.
In D. Fisman and
G. Rosu, editors,
Proceedings of the 28th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems
(TACAS 2022, Munich, Germany, April 2-7),
LNCS 13244,
pages 375-402,
2022.
Springer.
doi:10.1007/978-3-030-99527-0_20
Keyword(s):
Competition on Software Verification (SV-COMP),
Competition on Software Verification (SV-COMP Report),
Software Model Checking
Funding:
DFG-COOP
Publisher's Version
PDF
BibTeX Entry
@inproceedings{TACAS22b,
  author = {Dirk Beyer},
  title = {Progress on Software Verification: {SV-COMP 2022}},
  booktitle = {Proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2022, Munich, Germany, April 2-7)},
  editor = {D.~Fisman and G.~Rosu},
  pages = {375-402},
  year = {2022},
  series = {LNCS~13244},
  publisher = {Springer},
  doi = {10.1007/978-3-030-99527-0_20},
  sha256 = {88d2b7552d79ad77c4e000f83a18f9d71038f7ddfca6c0f0700644405a115943},
  url = {},
  abstract = {},
  keyword = {Competition on Software Verification (SV-COMP),Competition on Software Verification (SV-COMP Report),Software Model Checking},
  _pdf = {https://www.sosy-lab.org/research/pub/2022-TACAS.Progress_on_Software_Verification_SV-COMP_2022.pdf},
  funding = {DFG-COOP},
}
-
Dirk Beyer and Sudeep Kanav.
CoVeriTeam: On-Demand Composition of Cooperative Verification Systems.
In D. Fisman and
G. Rosu, editors,
Proceedings of the 28th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems
(TACAS 2022, Munich, Germany, April 2-7),
LNCS 13243,
pages 561-579,
2022.
Springer.
doi:10.1007/978-3-030-99524-9_31
Keyword(s):
Software Model Checking,
Cooperative Verification
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
Supplement
BibTeX Entry
@inproceedings{TACAS22a,
  author = {Dirk Beyer and Sudeep Kanav},
  title = {{CoVeriTeam}: {O}n-Demand Composition of Cooperative Verification Systems},
  booktitle = {Proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2022, Munich, Germany, April 2-7)},
  editor = {D.~Fisman and G.~Rosu},
  pages = {561-579},
  year = {2022},
  series = {LNCS~13243},
  publisher = {Springer},
  doi = {10.1007/978-3-030-99524-9_31},
  sha256 = {e38311ae071351301b08d16849ee309a86efdc07fc45e18e466b4735ef21f241},
  url = {https://www.sosy-lab.org/research/coveriteam/},
  presentation = {https://www.sosy-lab.org/research/prs/2022-04-06_TACAS22_CoVeriTeam_Sudeep.pdf},
  abstract = {},
  keyword = {Software Model Checking,Cooperative Verification},
  funding = {DFG-COOP},
}
-
Dirk Beyer.
Advances in Automatic Software Testing: Test-Comp 2022.
In E. B. Johnsen and
M. Wimmer, editors,
Proceedings of the 25th International Conference on
Fundamental Approaches to Software Engineering
(FASE 2022, Munich, Germany, April 2-7),
LNCS 13241,
pages 321-335,
2022.
Springer.
doi:10.1007/978-3-030-99429-7_18
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{FASE22b,
  author = {Dirk Beyer},
  title = {Advances in Automatic Software Testing: {Test-Comp 2022}},
  booktitle = {Proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering (FASE~2022, Munich, Germany, April 2-7)},
  editor = {E.~B.~Johnsen and M.~Wimmer},
  pages = {321-335},
  year = {2022},
  series = {LNCS~13241},
  publisher = {Springer},
  isbn = {},
  doi = {10.1007/978-3-030-99429-7_18},
  sha256 = {3f921c8f232a5c970f678889de8c402313049522a5dfa69ca68cd01d9dd9fce3},
  url = {https://test-comp.sosy-lab.org/2022/},
  abstract = {},
  keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing},
  _pdf = {https://www.sosy-lab.org/research/pub/2022-FASE.Advances_in_Automatic_Software_Testing_Test-Comp_2022.pdf},
  funding = {DFG-COOP},
}
-
Dirk Beyer, Sudeep Kanav, and Cedric Richter.
Construction of Verifier Combinations Based on Off-the-Shelf Verifiers.
In E. B. Johnsen and
M. Wimmer, editors,
Proceedings of the 25th International Conference on
Fundamental Approaches to Software Engineering
(FASE 2022, Munich, Germany, April 2-7),
LNCS 13241,
pages 49-70,
2022.
Springer.
doi:10.1007/978-3-030-99429-7_3
Keyword(s):
Software Model Checking
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
Supplement
BibTeX Entry
@inproceedings{FASE22a,
  author = {Dirk Beyer and Sudeep Kanav and Cedric Richter},
  title = {Construction of Verifier Combinations Based on Off-the-Shelf Verifiers},
  booktitle = {Proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering (FASE~2022, Munich, Germany, April 2-7)},
  editor = {E.~B.~Johnsen and M.~Wimmer},
  pages = {49-70},
  year = {2022},
  series = {LNCS~13241},
  publisher = {Springer},
  isbn = {},
  doi = {10.1007/978-3-030-99429-7_3},
  sha256 = {fa50620b5b60e7c8761ea251b3ab30ef1e18320d49d76f417eac6dcd5b4a0bbc},
  url = {https://www.sosy-lab.org/research/coveriteam-combinations/},
  presentation = {https://www.sosy-lab.org/research/prs/2022-04-04_FASE22-CoVeriTeam-Combinations_Cedric.pdf},
  abstract = {},
  keyword = {Software Model Checking},
  funding = {DFG-COOP},
}
-
Dirk Beyer.
Status Report on Software Testing: Test-Comp 2021.
In E. Guerra and
M. Stoelinga, editors,
Proceedings of the 24th International Conference on
Fundamental Approaches to Software Engineering
(FASE 2021, Luxembourg, Luxembourg, March 27 - April 1),
LNCS 12649,
pages 341-357,
2021.
Springer.
doi:10.1007/978-3-030-71500-7_17
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
Abstract
This report describes Test-Comp 2021, the 3rd edition of the Competition on Software Testing. The competition is a series of annual comparative evaluations of fully automatic software test generators for C programs. The competition has a strong focus on reproducibility of its results and its main goal is to provide an overview of the current state of the art in the area of automatic test-generation. The competition was based on 3 173 test-generation tasks for C programs. Each test-generation task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2021 had 11 participating test generators from 6 countries.
BibTeX Entry
@inproceedings{FASE21,
  author = {Dirk Beyer},
  title = {Status Report on Software Testing: {Test-Comp 2021}},
  booktitle = {Proceedings of the 24th International Conference on Fundamental Approaches to Software Engineering (FASE~2021, Luxembourg, Luxembourg, March 27 - April 1)},
  editor = {E.~Guerra and M.~Stoelinga},
  pages = {341-357},
  year = {2021},
  series = {LNCS~12649},
  publisher = {Springer},
  isbn = {978-3-030-71500-7},
  doi = {10.1007/978-3-030-71500-7_17},
  sha256 = {113b44c5be9f6d773ebd1a5cad91e8dc66f06d7af0b8c648c9dcea8d6bbc7e3d},
  url = {https://test-comp.sosy-lab.org/2021/},
  abstract = {This report describes Test-Comp 2021, the 3rd edition of the Competition on Software Testing. The competition is a series of annual comparative evaluations of fully automatic software test generators for C programs. The competition has a strong focus on reproducibility of its results and its main goal is to provide an overview of the current state of the art in the area of automatic test-generation. The competition was based on 3 173 test-generation tasks for C programs. Each test-generation task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2021 had 11 participating test generators from 6 countries.},
  keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing},
  funding = {DFG-COOP},
}
-
Dirk Beyer and Heike Wehrheim.
Verification Artifacts in Cooperative Verification: Survey and Unifying Component Framework.
In T. Margaria and
B. Steffen, editors,
Proceedings of the 9th International Symposium on
Leveraging Applications of Formal Methods, Verification, and Validation
(ISoLA 2020, Rhodos, Greece, October 26-30), part 1,
LNCS 12476,
pages 143-167,
2020.
Springer.
doi:10.1007/978-3-030-61362-4_8
Keyword(s):
Software Model Checking,
Cooperative Verification
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
BibTeX Entry
@inproceedings{ISoLA20a,
  author = {Dirk Beyer and Heike Wehrheim},
  title = {Verification Artifacts in Cooperative Verification: Survey and Unifying Component Framework},
  booktitle = {Proceedings of the 9th International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA~2020, Rhodos, Greece, October 26-30), part~1},
  editor = {T.~Margaria and B.~Steffen},
  pages = {143-167},
  year = {2020},
  series = {LNCS~12476},
  publisher = {Springer},
  doi = {10.1007/978-3-030-61362-4_8},
  sha256 = {86dbfb5ee4875582566bdb5d44750cc935614c11c09627295cc3ff123115a75b},
  url = {},
  presentation = {https://www.sosy-lab.org/research/prs/2021-10-29_ISOLA21_VerificationArtifacts_Dirk.pdf},
  abstract = {},
  keyword = {Software Model Checking,Cooperative Verification},
  funding = {DFG-COOP},
  fundingid = {418257054},
}
-
Dirk Beyer, Marie-Christine Jakobs, and Thomas Lemberger.
Difference Verification with Conditions.
In F. d. Boer and
A. Cerone, editors,
Proceedings of the 18th International Conference on
Software Engineering and Formal Methods (SEFM 2020, Virtual, Netherlands, September 14-18),
LNCS 12310,
pages 133-154,
2020.
Springer.
doi:10.1007/978-3-030-58768-0_8
Keyword(s):
CPAchecker,
Software Model Checking
Funding:
DFG-COOP,
DFG-CONVEY
Publisher's Version
PDF
Presentation
Video
Supplement
Abstract
Modern software-verification tools need to support development processes that involve frequent changes. Existing approaches for incremental verification hard-code specific verification techniques. Some of the approaches must be tightly intertwined with the development process. To solve this open problem, we present the concept of difference verification with conditions. Difference verification with conditions is independent from any specific verification technique and can be integrated in software projects at any time. It first applies a change analysis that detects which parts of a software were changed between revisions and encodes that information in a condition. Based on this condition, an off-the-shelf verifier is used to verify only those parts of the software that are influenced by the changes. As a proof of concept, we propose a simple, syntax-based change analysis and use difference verification with conditions with three off-the-shelf verifiers. An extensive evaluation shows the competitiveness of difference verification with conditions.
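As an illustration of the syntax-based change analysis mentioned above, the Python sketch below computes the changed lines between two program revisions with difflib; an actual difference verifier would turn this set into a condition that steers the verifier to the affected parts. The representation is an assumption for illustration, not the tool's code.

import difflib

def changed_lines(old_source, new_source):
    """Return the 1-based line numbers that are new or modified."""
    changed = set()
    matcher = difflib.SequenceMatcher(
        None, old_source.splitlines(), new_source.splitlines())
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":                         # 'replace' or 'insert'
            changed.update(range(j1 + 1, j2 + 1))  # lines in the new revision
    return changed

old = "int x = 0;\nx++;\nreturn x;"
new = "int x = 0;\nx += 2;\nreturn x;"
print(changed_lines(old, new))  # {2}: only line 2 changed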
BibTeX Entry
@inproceedings{SEFM20b,
  author = {Dirk Beyer and Marie-Christine Jakobs and Thomas Lemberger},
  title = {Difference Verification with Conditions},
  booktitle = {Proceedings of the 18th International Conference on Software Engineering and Formal Methods (SEFM~2020, Virtual, Netherlands, September 14-18)},
  editor = {F.~d.~Boer and A.~Cerone},
  pages = {133--154},
  year = {2020},
  series = {LNCS~12310},
  publisher = {Springer},
  doi = {10.1007/978-3-030-58768-0_8},
  sha256 = {8e5219da9a998b26f59013c809fbb1db6f92e3f08125fa1bfaacafcfafafef7f},
  url = {https://www.sosy-lab.org/research/difference/},
  presentation = {https://www.sosy-lab.org/research/prs/2020-09-17_SEFM20_DifferenceVerificationWithConditions_Thomas.pdf},
  abstract = {Modern software-verification tools need to support development processes that involve frequent changes. Existing approaches for incremental verification hard-code specific verification techniques. Some of the approaches must be tightly intertwined with the development process. To solve this open problem, we present the concept of difference verification with conditions. Difference verification with conditions is independent from any specific verification technique and can be integrated in software projects at any time. It first applies a change analysis that detects which parts of a software were changed between revisions and encodes that information in a condition. Based on this condition, an off-the-shelf verifier is used to verify only those parts of the software that are influenced by the changes. As a proof of concept, we propose a simple, syntax-based change analysis and use difference verification with conditions with three off-the-shelf verifiers. An extensive evaluation shows the competitiveness of difference verification with conditions.},
  keyword = {CPAchecker,Software Model Checking},
  funding = {DFG-COOP,DFG-CONVEY},
  isbnnote = {},
  video = {https://youtu.be/dG02602c9oo},
}
-
Dirk Beyer and Marie-Christine Jakobs.
FRed: Conditional Model Checking via Reducers and Folders.
In F. d. Boer and
A. Cerone, editors,
Proceedings of the 18th International Conference on
Software Engineering and Formal Methods (SEFM 2020, Virtual, Netherlands, September 14-18),
LNCS 12310,
pages 113-132,
2020.
Springer.
doi:10.1007/978-3-030-58768-0_7
Keyword(s):
CPAchecker,
Software Model Checking
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{SEFM20a,
  author = {Dirk Beyer and Marie-Christine Jakobs},
  title = {{{\sc FRed}}: {C}onditional Model Checking via Reducers and Folders},
  booktitle = {Proceedings of the 18th International Conference on Software Engineering and Formal Methods (SEFM~2020, Virtual, Netherlands, September 14-18)},
  editor = {F.~d.~Boer and A.~Cerone},
  pages = {113--132},
  year = {2020},
  series = {LNCS~12310},
  publisher = {Springer},
  doi = {10.1007/978-3-030-58768-0_7},
  sha256 = {0ce35cbde24d7a9de0513b89f23a81147bf4f8d5880effd57742c7f195e0eeec},
  url = {https://www.sosy-lab.org/research/fred/},
  abstract = {},
  keyword = {CPAchecker,Software Model Checking},
  funding = {DFG-COOP},
  isbnnote = {},
}
-
Dirk Beyer and Thomas Lemberger.
TestCov: Robust Test-Suite Execution and Coverage Measurement.
In Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019, San Diego, CA, USA, November 11-15),
pages 1074-1077,
2019.
IEEE.
doi:10.1109/ASE.2019.00105
Keyword(s):
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
BibTeX Entry
@inproceedings{ASE19,
  author = {Dirk Beyer and Thomas Lemberger},
  title = {{T}est{C}ov: Robust Test-Suite Execution and Coverage Measurement},
  booktitle = {Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019, San Diego, CA, USA, November 11-15)},
  pages = {1074-1077},
  year = {2019},
  publisher = {IEEE},
  doi = {10.1109/ASE.2019.00105},
  sha256 = {},
  pdf = {https://www.sosy-lab.org/research/pub/2019-ASE.TestCov_Robust_Test-Suite_Execution_and_Coverage_Measurement.pdf},
  presentation = {https://www.sosy-lab.org/research/prs/2019-11-12_ASE19_TestCov_Thomas_Lemberger.pdf},
  keyword = {Software Testing},
  funding = {DFG-COOP},
  isbnnote = {978-1-7281-2508-4},
}
-
Dirk Beyer and Thomas Lemberger.
Conditional Testing - Off-the-Shelf Combination of Test-Case Generators.
In Yu-Fang Chen,
Chih-Hong Cheng, and
Javier Esparza, editors,
Proceedings of the 17th International Symposium on
Automated Technology for Verification and Analysis (ATVA 2019, Taipei, Taiwan, October 28-31),
LNCS 11781,
pages 189-208,
2019.
Springer.
doi:10.1007/978-3-030-31784-3_11
Keyword(s):
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
Supplement
BibTeX Entry
@inproceedings{ATVA19,
  author = {Dirk Beyer and Thomas Lemberger},
  title = {Conditional Testing - Off-the-Shelf Combination of Test-Case Generators},
  booktitle = {Proceedings of the 17th International Symposium on Automated Technology for Verification and Analysis (ATVA~2019, Taipei, Taiwan, October 28-31)},
  editor = {Yu{-}Fang Chen and Chih{-}Hong Cheng and Javier Esparza},
  pages = {189-208},
  year = {2019},
  series = {LNCS~11781},
  publisher = {Springer},
  doi = {10.1007/978-3-030-31784-3_11},
  sha256 = {},
  url = {https://www.sosy-lab.org/research/conditional-testing/},
  pdf = {https://www.sosy-lab.org/research/pub/2019-ATVA.Conditional_Testing_Off-the-Shelf_Combination_of_Test-Case_Generators.pdf},
  presentation = {https://www.sosy-lab.org/research/prs/2019-10-29_ATVA19_Conditional_Testing_Thomas_Lemberger.pdf},
  keyword = {Software Testing},
  funding = {DFG-COOP},
}
Internal reports
-
Daniel Baier, Dirk Beyer, Po-Chun Chien, Marie-Christine Jakobs, Marek Jankola, Matthias Kettl, Nian-Ze Lee, Thomas Lemberger, Marian Lingsch-Rosenfeld, Henrik Wachowitz, and Philipp Wendler.
Software Verification with CPAchecker 3.0: Tutorial and User Guide (Extended Version).
Technical report 2409.02094, arXiv/CoRR,
September
2024.
doi:10.48550/arXiv.2409.02094
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Funding:
DFG-COOP,
DFG-CONVEY,
DFG-IDEFIX
Publisher's Version
PDF
Presentation
Supplement
Artifact(s)
Abstract
This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, k-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. It also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at https://cpachecker.sosy-lab.org/doc.php.
BibTeX Entry
@techreport{TechReport24c,
  author = {Daniel Baier and Dirk Beyer and Po-Chun Chien and Marie-Christine Jakobs and Marek Jankola and Matthias Kettl and Nian-Ze Lee and Thomas Lemberger and Marian Lingsch-Rosenfeld and Henrik Wachowitz and Philipp Wendler},
  title = {Software Verification with {CPAchecker} 3.0: {Tutorial} and User Guide (Extended Version)},
  number = {2409.02094},
  year = {2024},
  doi = {10.48550/arXiv.2409.02094},
  url = {https://cpachecker.sosy-lab.org},
  presentation = {https://www.sosy-lab.org/research/prs/2024-09-10_FM24_CPAchecker_Tutorial.pdf},
  abstract = {This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, <i>k</i>-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. It also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at <a href="https://cpachecker.sosy-lab.org/doc.php">https://cpachecker.sosy-lab.org/doc.php</a>.},
  keyword = {CPAchecker, Software Model Checking, Software Testing},
  annote = {This technical report is an extended version of our <a href="https://www.sosy-lab.org/research/bib/All/index.html#FM24a">paper</a> at FM 2024.},
  artifact = {10.5281/zenodo.13612338},
  funding = {DFG-COOP, DFG-CONVEY, DFG-IDEFIX},
  institution = {arXiv/CoRR},
  month = {September},
}
Additional Infos
This technical report is an extended version of our paper at FM 2024.
Disclaimer:
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.