Publications about Software Testing
Articles in journals or book chapters
-
Dirk Beyer and Thomas Lemberger.
Six Years Later: Testing vs. Model Checking.
International Journal on Software Tools for Technology Transfer (STTT),
2024.
Keyword(s):
Benchmarking,
Competition on Software Verification (SV-COMP),
Competition on Software Testing (Test-Comp),
Software Model Checking,
Software Testing
Funding:
DFG-COOP
PDF
Artifact(s)
Abstract
Six years ago, we performed the first large-scale comparison of automated test generators and software model checkers with respect to bug-finding capabilities on a benchmark set with 5 693 C programs. Since then, the International Competition on Software Testing (Test-Comp) has established standardized formats and community-agreed rules for the experimental comparison of test generators. With this new context, it is time to revisit our initial question: Model checkers or test generators—which tools are more effective in finding bugs in software? To answer this, we perform a comparative analysis on the tools and existing data published by two competitions, the International Competition on Software Verification (SV-COMP) and Test-Comp. The results provide two insights: (1) Almost all test generators that participate in Test-Comp use hybrid approaches that include formal methods, and (2) while the considered model checkers are still highly competitive, they are now outperformed by the bug-finding capabilities of the considered test generators.
BibTeX Entry
@article{TestStudy-STTT,
  author = {Dirk Beyer and Thomas Lemberger},
  title = {Six Years Later: Testing vs. Model Checking},
  journal = {International Journal on Software Tools for Technology Transfer (STTT)},
  volume = {},
  number = {},
  pages = {},
  year = {2024},
  doi = {},
  url = {},
  pdf = {https://www.sosy-lab.org/research/pub/2024-STTT.Six_Years_Later_Testing_vs_Model_Checking.pdf},
  presentation = {},
  abstract = {Six years ago, we performed the first large-scale comparison of automated test generators and software model checkers with respect to bug-finding capabilities on a benchmark set with 5 693 C programs. Since then, the International Competition on Software Testing (Test-Comp) has established standardized formats and community-agreed rules for the experimental comparison of test generators. With this new context, it is time to revisit our initial question: Model checkers or test generators—which tools are more effective in finding bugs in software? To answer this, we perform a comparative analysis on the tools and existing data published by two competitions, the International Competition on Software Verification (SV-COMP) and Test-Comp. The results provide two insights: (1) Almost all test generators that participate in Test-Comp use hybrid approaches that include formal methods, and (2) while the considered model checkers are still highly competitive, they are now outperformed by the bug-finding capabilities of the considered test generators.},
  keyword = {Benchmarking, Competition on Software Verification (SV-COMP), Competition on Software Testing (Test-Comp), Software Model Checking, Software Testing},
  artifact = {10.5281/zenodo.10232648},
  funding = {DFG-COOP},
}
-
Dirk Beyer.
First International Competition on Software Testing.
International Journal on Software Tools for Technology Transfer (STTT), 23(6):833-846,
2021.
doi:10.1007/s10009-021-00613-3
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@article{TestComp19-STTT,
  author = {Dirk Beyer},
  title = {First International Competition on Software Testing},
  journal = {International Journal on Software Tools for Technology Transfer (STTT)},
  volume = {23},
  number = {6},
  pages = {833-846},
  year = {2021},
  doi = {10.1007/s10009-021-00613-3},
  sha256 = {cd82a853fbbf65de7f95a9e7de4f36118bb35fb516db87421a0aa38ccc863031},
  url = {https://www.sosy-lab.org/research/pub/2021-STTT.First_International_Competition_on_Software_Testing.pdf},
  pdf = {},
  presentation = {},
  abstract = {},
  keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing},
  funding = {DFG-COOP},
  issn = {1433-2787},
}
-
Dirk Beyer and Marie-Christine Jakobs.
Cooperative Verifier-Based Testing with CoVeriTest.
International Journal on Software Tools for Technology Transfer (STTT), 23(3):313-333,
2021.
doi:10.1007/s10009-020-00587-8
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Abstract
Testing is a widely applied technique to evaluate software quality, and coverage criteria are often used to assess the adequacy of a generated test suite. However, manually constructing an adequate test suite is typically too expensive, and numerous techniques for automatic test-suite generation were proposed. All of them come with different strengths. To build stronger test-generation tools, different techniques should be combined. In this paper, we study cooperative combinations of verification approaches for test generation, which exchange high-level information. We present CoVeriTest, a hybrid technique for test-suite generation. CoVeriTest iteratively applies different conditional model checkers and allows users to adjust the level of cooperation and to configure individual time limits for each conditional model checker. In our experiments, we systematically study different CoVeriTest cooperation setups, which either use combinations of explicit-state model checking and predicate abstraction, or bounded model checking and symbolic execution. A comparison with state-of-the-art test-generation tools reveals that CoVeriTest achieves higher coverage for many programs (about 15%).
BibTeX Entry
@article{CoVeriTest-STTT,
  author = {Dirk Beyer and Marie-Christine Jakobs},
  title = {Cooperative Verifier-Based Testing with {CoVeriTest}},
  journal = {International Journal on Software Tools for Technology Transfer (STTT)},
  volume = {23},
  number = {3},
  pages = {313-333},
  year = {2021},
  doi = {10.1007/s10009-020-00587-8},
  sha256 = {28a5bf6103296455728076e8c12902a53b3d377a296ea2ba18ac111c93330dbd},
  url = {},
  pdf = {},
  presentation = {},
  abstract = {Testing is a widely applied technique to evaluate software quality, and coverage criteria are often used to assess the adequacy of a generated test suite. However, manually constructing an adequate test suite is typically too expensive, and numerous techniques for automatic test-suite generation were proposed. All of them come with different strengths. To build stronger test-generation tools, different techniques should be combined. In this paper, we study cooperative combinations of verification approaches for test generation, which exchange high-level information. We present CoVeriTest, a hybrid technique for test-suite generation. CoVeriTest iteratively applies different conditional model checkers and allows users to adjust the level of cooperation and to configure individual time limits for each conditional model checker. In our experiments, we systematically study different CoVeriTest cooperation setups, which either use combinations of explicit-state model checking and predicate abstraction, or bounded model checking and symbolic execution. A comparison with state-of-the-art test-generation tools reveals that CoVeriTest achieves higher coverage for many programs (about 15%).},
  keyword = {CPAchecker,Software Model Checking,Software Testing},
  funding = {DFG-COOP},
  issn = {1433-2787},
}
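The cooperation scheme described in the abstract can be made concrete in a few lines (a minimal Python sketch, not the actual CPAchecker implementation; the rotation over analyses with individual time limits follows the abstract, while the analysis.run interface and the representation of test goals are assumptions):

def coveritest(analyses, time_limits, open_goals, total_budget):
    """Rotate through conditional analyses, each with its own time slice,
    until all test goals are covered or the overall budget is spent."""
    test_suite = []
    while total_budget > 0 and open_goals:
        for analysis, limit in zip(analyses, time_limits):
            time_slice = min(limit, total_budget)
            # Each analysis works only on the goals that are still open and
            # returns its new tests plus the goals that remain uncovered.
            new_tests, open_goals = analysis.run(open_goals, time_slice)
            test_suite.extend(new_tests)
            total_budget -= time_slice
            if not open_goals or total_budget <= 0:
                break
    return test_suite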
-
Thomas Lemberger.
Plain random test generation with PRTest.
International Journal on Software Tools for Technology Transfer (STTT),
2020.
Springer.
doi:10.1007/s10009-020-00568-x
Keyword(s):
Software Testing
Publisher's Version
PDF
Presentation
Abstract
Automatic test-suite generation tools are often complex and their behavior is not predictable. To provide a minimum baseline that test-suite generators should be able to surpass, we present PRTest, a random black-box test-suite generator for C programs: To create a test, PRTest natively executes the program under test and creates a new, random test value whenever an input value is required. After execution, PRTest checks whether any new program branches were covered and, if this is the case, the created test is added to the test suite. This way, tests are rapidly created either until a crash is found, or until the user aborts the creation. While this naive mechanism is not competitive with more sophisticated, state-of-the-art test-suite generation tools, it is able to provide a good baseline for Test-Comp and a fast alternative for automatic test-suite generation for programs with simple control flow. PRTest is publicly available and open source.
BibTeX Entry
@article{PRTEST19,
  author = {Thomas Lemberger},
  title = {Plain random test generation with {PRTest}},
  journal = {International Journal on Software Tools for Technology Transfer (STTT)},
  volume = {},
  number = {},
  pages = {},
  year = {2020},
  publisher = {Springer},
  doi = {10.1007/s10009-020-00568-x},
  sha256 = {2e5ae7091b6adb758c123dfe62d3fab57203f930883539beb20f6e91391ebc77},
  pdf = {https://www.sosy-lab.org/research/pub/2020-STTT.Plain_random_test_generation_with_PRTest.pdf},
  presentation = {https://www.sosy-lab.org/research/prs/2019-04-06_TestComp19_PRTest_Thomas.pdf},
  abstract = {Automatic test-suite generation tools are often complex and their behavior is not predictable. To provide a minimum baseline that test-suite generators should be able to surpass, we present PRTest, a random black-box test-suite generator for C programs: To create a test, PRTest natively executes the program under test and creates a new, random test value whenever an input value is required. After execution, PRTest checks whether any new program branches were covered and, if this is the case, the created test is added to the test suite. This way, tests are rapidly created either until a crash is found, or until the user aborts the creation. While this naive mechanism is not competitive with more sophisticated, state-of-the-art test-suite generation tools, it is able to provide a good baseline for Test-Comp and a fast alternative for automatic test-suite generation for programs with simple control flow. PRTest is publicly available and open source.},
  keyword = {Software Testing},
  annote = {Publication appeared first online in July 2020.<BR/> PRTest is available at <a href="https://gitlab.com/sosy-lab/software/prtest"> https://gitlab.com/sosy-lab/software/prtest</a>},
}
Additional Infos
Publication appeared first online in July 2020.
PRTest is available at https://gitlab.com/sosy-lab/software/prtest
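The generation loop described in the abstract above is simple enough to sketch directly (a minimal Python sketch, not PRTest's actual implementation; execute_program and its result object are hypothetical stand-ins for PRTest's native execution of the compiled C program):

import random

def generate_test_suite(execute_program, user_aborted):
    """Create random tests and keep only those that cover new branches;
    stop when a crash is found or the user aborts."""
    test_suite = []           # tests that increased branch coverage
    covered_branches = set()  # branches covered by the suite so far
    while not user_aborted():
        test = []  # the input values of one execution form the test

        def provide_input():
            value = random.randint(-2**31, 2**31 - 1)  # fresh random value
            test.append(value)
            return value

        result = execute_program(provide_input)
        if result.crashed:
            return test_suite + [test]  # bug found: report and stop
        new_branches = result.covered_branches - covered_branches
        if new_branches:  # keep the test only if it covers a new branch
            covered_branches |= new_branches
            test_suite.append(test)
    return test_suite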
Articles in conference or workshop proceedings
-
Daniel Baier, Dirk Beyer, Po-Chun Chien, Marie-Christine Jakobs, Marek Jankola, Matthias Kettl, Nian-Ze Lee, Thomas Lemberger, Marian Lingsch-Rosenfeld, Henrik Wachowitz, and Philipp Wendler.
Software Verification with CPAchecker 3.0: Tutorial and User Guide.
In Proceedings of the 26th International Symposium on Formal Methods (FM 2024, Milan, Italy, September 9-13),
LNCS 14934,
pages 543-570,
2024.
Springer.
doi:10.1007/978-3-031-71177-0_30
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Funding:
DFG-COOP,
DFG-CONVEY,
DFG-IDEFIX
Publisher's Version
PDF
Presentation
Supplement
Artifact(s)
Abstract
This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, k-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. An extended version also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at https://cpachecker.sosy-lab.org/doc.php.
BibTeX Entry
@inproceedings{FM24a,
  author = {Daniel Baier and Dirk Beyer and Po-Chun Chien and Marie-Christine Jakobs and Marek Jankola and Matthias Kettl and Nian-Ze Lee and Thomas Lemberger and Marian Lingsch-Rosenfeld and Henrik Wachowitz and Philipp Wendler},
  title = {Software Verification with {CPAchecker} 3.0: {Tutorial} and User Guide},
  booktitle = {Proceedings of the 26th International Symposium on Formal Methods (FM~2024, Milan, Italy, September 9-13)},
  pages = {543-570},
  year = {2024},
  series = {LNCS~14934},
  publisher = {Springer},
  doi = {10.1007/978-3-031-71177-0_30},
  url = {https://cpachecker.sosy-lab.org},
  pdf = {https://www.sosy-lab.org/research/pub/2024-FM.Software_Verification_with_CPAchecker_3.0_Tutorial_and_User_Guide.pdf},
  presentation = {https://www.sosy-lab.org/research/prs/2024-09-10_FM24_CPAchecker_Tutorial.pdf},
  abstract = {This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, <i>k</i>-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. An extended version also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at <a href="https://cpachecker.sosy-lab.org/doc.php">https://cpachecker.sosy-lab.org/doc.php</a>.},
  keyword = {CPAchecker, Software Model Checking, Software Testing},
  annote = {An <a href="https://www.sosy-lab.org/research/bib/All/index.html#TechReport24c">extended version</a> of this article is available on <a href="https://doi.org/10.48550/arXiv.2409.02094">arXiv</a>.},
  artifact = {10.5281/zenodo.13612338},
  funding = {DFG-COOP, DFG-CONVEY, DFG-IDEFIX},
}
Additional Infos
An extended version of this article is available on arXiv.
-
Max Barth and Marie-Christine Jakobs.
Refining CEGAR-based Test-Case Generation with Feasibility Annotations.
In Proc. TAP,
LNCS,
2024.
Springer.
Keyword(s):
Ultimate Automizer,
Software Testing
Funding:
DFG-ReVeriX
Artifact(s)
BibTeX Entry
@inproceedings{UTestGen-Reuse,
  author = {Max Barth and Marie{-}Christine Jakobs},
  title = {Refining CEGAR-based Test-Case Generation with Feasibility Annotations},
  booktitle = {Proc.\ {TAP}},
  pages = {},
  year = {2024},
  series = {LNCS~},
  publisher = {Springer},
  url = {},
  pdf = {},
  keyword = {Ultimate Automizer, Software Testing},
  annote = {},
  artifact = {10.5281/zenodo.11641893},
  doinone = {Unpublished: Last checked: 2024-07-08},
  funding = {DFG-ReVeriX},
}
-
Max Barth and Marie-Christine Jakobs.
Test-Case Generation with Automata-based Software Model Checking.
In Proc. SPIN,
LNCS,
2024.
Springer.
Keyword(s):
Ultimate Automizer,
Software Testing
Artifact(s)
BibTeX Entry
@inproceedings{UTestGen,
  author = {Max Barth and Marie{-}Christine Jakobs},
  title = {Test-Case Generation with Automata-based Software Model Checking},
  booktitle = {Proc.\ {SPIN}},
  pages = {},
  year = {2024},
  series = {LNCS~},
  publisher = {Springer},
  url = {},
  pdf = {},
  keyword = {Ultimate Automizer, Software Testing},
  annote = {},
  artifact = {10.5281/zenodo.10574234},
  doinone = {Unpublished: Last checked: 2024-07-08},
  funding = {},
}
-
Max Barth, Daniel Dietsch, Matthias Heizmann, and Marie-Christine Jakobs.
Ultimate TestGen: Test-Case Generation with Automata-based Software Model Checking (Competition Contribution).
In Proc. FASE,
LNCS 14573,
pages 326-330,
2024.
Springer.
doi:10.1007/978-3-031-57259-3_20
Keyword(s):
Ultimate Automizer,
Software Testing
Publisher's Version
PDF
Artifact(s)
BibTeX Entry
@inproceedings{UTestGen-Competition,
  author = {Max Barth and Daniel Dietsch and Matthias Heizmann and Marie{-}Christine Jakobs},
  title = {Ultimate TestGen: Test-Case Generation with Automata-based Software Model Checking (Competition Contribution)},
  booktitle = {Proc.\ {FASE}},
  pages = {326-330},
  year = {2024},
  series = {LNCS~14573},
  publisher = {Springer},
  doi = {10.1007/978-3-031-57259-3_20},
  url = {},
  pdf = {},
  keyword = {Ultimate Automizer, Software Testing},
  annote = {},
  artifact = {10.5281/zenodo.10071568},
  funding = {},
}
-
Philipp Wendler and Stefan Winter.
Regression-Test History Data for Flaky Test Research.
In Proc. 1st International Workshop on Flaky Tests,
pages 3-4,
2024.
ACM.
doi:10.1145/3643656.3643901
Keyword(s):
Software Testing,
Flaky Tests
Publisher's Version
PDF
Presentation
Artifact(s)
Abstract
Due to their random nature, flaky test failures are difficult to study. Without having observed a test to both pass and fail under the same setup, it is unknown whether a test is flaky and what its failure rate is. Thus, flaky-test research has greatly benefited from data records of previous studies, which provide evidence for flaky test failures and give a rough indication of the failure rates to expect. For assessing the impact of the studied flaky tests on developers' work, it is important to also know how flaky test failures manifest over a regression test history, i.e., under continuous changes to test code or code under test. While existing datasets on flaky tests are mostly based on re-runs on an invariant code base, the actual effects of flaky tests on development can only be assessed across the commits in an evolving commit history, against which (potentially flaky) regression tests are executed. In our presentation, we outline approaches to bridge this gap and report on our experiences following one of them. As a result of this work, we contribute a dataset of flaky test failures across a simulated regression test history.
BibTeX Entry
@inproceedings{RegressionTestData-FTW24,
  author = {Philipp Wendler and Stefan Winter},
  title = {Regression-Test History Data for Flaky Test Research},
  booktitle = {Proc.\ 1st International Workshop on Flaky Tests},
  pages = {3–4},
  year = {2024},
  publisher = {ACM},
  doi = {10.1145/3643656.3643901},
  pdf = {https://www.sosy-lab.org/research/pub/2024-FTW24.Regression-Test_History_Data_for_Flaky_Test_Research.pdf},
  presentation = {https://www.sosy-lab.org/research/prs/2024-04-14_FTW24_Regression-Test_History_Data_for_Flaky_Test_Research_Stefan.html},
  abstract = {Due to their random nature, flaky test failures are difficult to study. Without having observed a test to both pass and fail under the same setup, it is unknown whether a test is flaky and what its failure rate is. Thus, flaky-test research has greatly benefited from data records of previous studies, which provide evidence for flaky test failures and give a rough indication of the failure rates to expect. For assessing the impact of the studied flaky tests on developers' work, it is important to also know how flaky test failures manifest over a regression test history, i.e., under continuous changes to test code or code under test. While existing datasets on flaky tests are mostly based on re-runs on an invariant code base, the actual effects of flaky tests on development can only be assessed across the commits in an evolving commit history, against which (potentially flaky) regression tests are executed. In our presentation, we outline approaches to bridge this gap and report on our experiences following one of them. As a result of this work, we contribute a dataset of flaky test failures across a simulated regression test history.},
  keyword = {Software Testing, Flaky Tests},
  artifact = {10.5281/zenodo.10639030},
  keywords = {Software Testing, Dataset},
}
-
Dirk Beyer.
Software Testing: 5th Comparative Evaluation: Test-Comp 2023.
In L. Lambers and
S. Uchitel, editors,
Proceedings of the 26th International Conference on
Fundamental Approaches to Software Engineering
(FASE 2023, Paris, France, April 22-27),
LNCS 13991,
pages 309-323,
2023.
Springer.
doi:10.1007/978-3-031-30826-0_17
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{FASE23,
  author = {Dirk Beyer},
  title = {Software Testing: 5th Comparative Evaluation: {Test-Comp 2023}},
  booktitle = {Proceedings of the 26th International Conference on Fundamental Approaches to Software Engineering (FASE~2023, Paris, France, April 22-27)},
  editor = {L. Lambers and S. Uchitel},
  pages = {309--323},
  year = {2023},
  series = {LNCS~13991},
  publisher = {Springer},
  isbn = {},
  doi = {10.1007/978-3-031-30826-0_17},
  sha256 = {7110c26bf3c9311f84346a108a59318687bdadde4879f83d047f1a0fc546b630},
  url = {https://test-comp.sosy-lab.org/2023/},
  keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing},
  _pdf = {https://www.sosy-lab.org/research/pub/2023-FASE.Software_Testing_5th_Comparative_Evaluation_Test-Comp_2023.pdf},
  funding = {DFG-COOP},
}
-
Dirk Beyer.
Advances in Automatic Software Testing: Test-Comp 2022.
In E. B. Johnsen and
M. Wimmer, editors,
Proceedings of the 25th International Conference on
Fundamental Approaches to Software Engineering
(FASE 2022, Munich, Germany, April 2-7),
LNCS 13241,
pages 321-335,
2022.
Springer.
doi:10.1007/978-3-030-99429-7_18
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{FASE22b,
  author = {Dirk Beyer},
  title = {Advances in Automatic Software Testing: {Test-Comp 2022}},
  booktitle = {Proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering (FASE~2022, Munich, Germany, April 2-7)},
  editor = {E.~B.~Johnsen and M.~Wimmer},
  pages = {321-335},
  year = {2022},
  series = {LNCS~13241},
  publisher = {Springer},
  isbn = {},
  doi = {10.1007/978-3-030-99429-7_18},
  sha256 = {3f921c8f232a5c970f678889de8c402313049522a5dfa69ca68cd01d9dd9fce3},
  url = {https://test-comp.sosy-lab.org/2022/},
  abstract = {},
  keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing},
  _pdf = {https://www.sosy-lab.org/research/pub/2022-FASE.Advances_in_Automatic_Software_Testing_Test-Comp_2022.pdf},
  funding = {DFG-COOP},
}
-
Dirk Beyer.
Status Report on Software Testing: Test-Comp 2021.
In E. Guerra and
M. Stoelinga, editors,
Proceedings of the 24th International Conference on
Fundamental Approaches to Software Engineering
(FASE 2021, Luxembourg, Luxembourg, March 27 - April 1),
LNCS 12649,
pages 341-357,
2021.
Springer.
doi:10.1007/978-3-030-71500-7_17
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Supplement
Abstract
This report describes Test-Comp 2021, the 3rd edition of the Competition on Software Testing. The competition is a series of annual comparative evaluations of fully automatic software test generators for C programs. The competition has a strong focus on reproducibility of its results and its main goal is to provide an overview of the current state of the art in the area of automatic test-generation. The competition was based on 3 173 test-generation tasks for C programs. Each test-generation task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2021 had 11 participating test generators from 6 countries.
BibTeX Entry
@inproceedings{FASE21,
  author = {Dirk Beyer},
  title = {Status Report on Software Testing: {Test-Comp 2021}},
  booktitle = {Proceedings of the 24th International Conference on Fundamental Approaches to Software Engineering (FASE~2021, Luxembourg, Luxembourg, March 27 - April 1)},
  editor = {E.~Guerra and M.~Stoelinga},
  pages = {341-357},
  year = {2021},
  series = {LNCS~12649},
  publisher = {Springer},
  isbn = {978-3-030-71500-7},
  doi = {10.1007/978-3-030-71500-7_17},
  sha256 = {113b44c5be9f6d773ebd1a5cad91e8dc66f06d7af0b8c648c9dcea8d6bbc7e3d},
  url = {https://test-comp.sosy-lab.org/2021/},
  abstract = {This report describes Test-Comp 2021, the 3rd edition of the Competition on Software Testing. The competition is a series of annual comparative evaluations of fully automatic software test generators for C programs. The competition has a strong focus on reproducibility of its results and its main goal is to provide an overview of the current state of the art in the area of automatic test-generation. The competition was based on 3 173 test-generation tasks for C programs. Each test-generation task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2021 had 11 participating test generators from 6 countries.},
  keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing},
  funding = {DFG-COOP},
}
-
Dirk Beyer.
Second Competition on Software Testing: Test-Comp 2020.
In Proceedings of the 23rd International Conference on
Fundamental Approaches to Software Engineering (FASE 2020, Dublin, Ireland, April 25-30),
LNCS 12076,
pages 505-519,
2020.
Springer.
doi:10.1007/978-3-030-45234-6_25
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Publisher's Version
PDF
Supplement
Artifact(s)
BibTeX Entry
@inproceedings{FASE20,
  author = {Dirk Beyer},
  title = {Second Competition on Software Testing: {Test-Comp 2020}},
  booktitle = {Proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering (FASE~2020, Dublin, Ireland, April 25-30)},
  pages = {505-519},
  year = {2020},
  series = {LNCS~12076},
  publisher = {Springer},
  doi = {10.1007/978-3-030-45234-6_25},
  sha256 = {296b4caf885ae029e388c2ef8fd032f1ab55c07d5e8ea1064f2e50c08f5d6919},
  url = {https://test-comp.sosy-lab.org/2020/},
  abstract = {},
  keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing},
  artifact1 = {10.5281/zenodo.3678250},
  artifact2 = {10.5281/zenodo.3678264},
  artifact3 = {10.5281/zenodo.3678275},
  artifact4 = {10.5281/zenodo.3574420},
}
-
Dirk Beyer and Marie-Christine Jakobs.
Cooperative Test-Case Generation with Verifiers.
In M. Felderer,
W. Hasselbring,
R. Rabiser, and
R. Jung, editors,
Proceedings of the Conference on
Software Engineering (SE 2020, Innsbruck, Austria, February 24-28),
LNI P-300,
pages 107-108,
2020.
GI.
doi:10.18420/SE2020_31
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Publisher's Version
BibTeX Entry
@inproceedings{SE20,
  author = {Dirk Beyer and Marie-Christine Jakobs},
  title = {Cooperative Test-Case Generation with Verifiers},
  booktitle = {Proceedings of the Conference on Software Engineering (SE~2020, Innsbruck, Austria, February 24-28)},
  editor = {M.~Felderer and W.~Hasselbring and R.~Rabiser and R.~Jung},
  pages = {107--108},
  year = {2020},
  series = {{LNI}~P-300},
  publisher = {{GI}},
  doi = {10.18420/SE2020_31},
  sha256 = {},
  pdf = {},
  presentation = {},
  abstract = {},
  keyword = {CPAchecker,Software Model Checking,Software Testing},
  annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2019.html#FASE19">full article on this topic</a> that appeared in Proc. FASE 2019.},
  isbnnote = {978-3-88579-694-7},
}
Additional Infos
This is a summary of a full article on this topic that appeared in Proc. FASE 2019.
-
Dirk Beyer and Thomas Lemberger.
TestCov: Robust Test-Suite Execution and Coverage Measurement.
In Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019, San Diego, CA, USA, November 11-15),
pages 1074-1077,
2019.
IEEE.
doi:10.1109/ASE.2019.00105
Keyword(s):
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
BibTeX Entry
@inproceedings{ASE19,
  author = {Dirk Beyer and Thomas Lemberger},
  title = {{T}est{C}ov: Robust Test-Suite Execution and Coverage Measurement},
  booktitle = {Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019, San Diego, CA, USA, November 11-15)},
  pages = {1074-1077},
  year = {2019},
  publisher = {IEEE},
  doi = {10.1109/ASE.2019.00105},
  sha256 = {},
  pdf = {https://www.sosy-lab.org/research/pub/2019-ASE.TestCov_Robust_Test-Suite_Execution_and_Coverage_Measurement.pdf},
  presentation = {https://www.sosy-lab.org/research/prs/2019-11-12_ASE19_TestCov_Thomas_Lemberger.pdf},
  keyword = {Software Testing},
  funding = {DFG-COOP},
  isbnnote = {978-1-7281-2508-4},
}
-
Dirk Beyer and Thomas Lemberger.
Conditional Testing - Off-the-Shelf Combination of Test-Case Generators.
In Yu-Fang Chen,
Chih-Hong Cheng, and
Javier Esparza, editors,
Proceedings of the 17th International Symposium on
Automated Technology for Verification and Analysis (ATVA 2019, Taipei, Taiwan, October 28-31),
LNCS 11781,
pages 189-208,
2019.
Springer.
doi:10.1007/978-3-030-31784-3_11
Keyword(s):
Software Testing
Funding:
DFG-COOP
Publisher's Version
PDF
Presentation
Supplement
BibTeX Entry
@inproceedings{ATVA19,
  author = {Dirk Beyer and Thomas Lemberger},
  title = {Conditional Testing - Off-the-Shelf Combination of Test-Case Generators},
  booktitle = {Proceedings of the 17th International Symposium on Automated Technology for Verification and Analysis (ATVA~2019, Taipei, Taiwan, October 28-31)},
  editor = {Yu{-}Fang Chen and Chih{-}Hong Cheng and Javier Esparza},
  pages = {189-208},
  year = {2019},
  series = {LNCS~11781},
  publisher = {Springer},
  doi = {10.1007/978-3-030-31784-3_11},
  sha256 = {},
  url = {https://www.sosy-lab.org/research/conditional-testing/},
  pdf = {https://www.sosy-lab.org/research/pub/2019-ATVA.Conditional_Testing_Off-the-Shelf_Combination_of_Test-Case_Generators.pdf},
  presentation = {https://www.sosy-lab.org/research/prs/2019-10-29_ATVA19_Conditional_Testing_Thomas_Lemberger.pdf},
  keyword = {Software Testing},
  funding = {DFG-COOP},
}
-
Dirk Beyer.
International Competition on Software Testing (Test-Comp).
In Proceedings of the 25th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2019, Prague, Czech Republic, April 6-11), part 3,
LNCS 11429,
pages 167-175,
2019.
Springer.
doi:10.1007/978-3-030-17502-3_11
Keyword(s):
Competition on Software Testing (Test-Comp),
Competition on Software Testing (Test-Comp Report),
Software Testing
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{TACAS19c,
  author = {Dirk Beyer},
  title = {International Competition on Software Testing (Test-Comp)},
  booktitle = {Proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2019, Prague, Czech Republic, April 6-11), part 3},
  pages = {167-175},
  year = {2019},
  series = {LNCS~11429},
  publisher = {Springer},
  doi = {10.1007/978-3-030-17502-3_11},
  sha256 = {80ba1d656e40b44c40e756010ccd32db5aad71820cd746b264f70244477fc737},
  url = {https://test-comp.sosy-lab.org/2019/},
  keyword = {Competition on Software Testing (Test-Comp),Competition on Software Testing (Test-Comp Report),Software Testing},
}
-
Dirk Beyer and Marie-Christine Jakobs.
CoVeriTest: Cooperative Verifier-Based Testing.
In Proceedings of the 22nd International Conference on
Fundamental Approaches to Software Engineering (FASE 2019, Prague, Czech Republic, April 6-11),
LNCS 11424,
pages 389-408,
2019.
Springer.
doi:10.1007/978-3-030-16722-6_23
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{FASE19,
  author = {Dirk Beyer and Marie-Christine Jakobs},
  title = {CoVeriTest: Cooperative Verifier-Based Testing},
  booktitle = {Proceedings of the 22nd International Conference on Fundamental Approaches to Software Engineering (FASE~2019, Prague, Czech Republic, April 6-11)},
  pages = {389-408},
  year = {2019},
  series = {LNCS~11424},
  publisher = {Springer},
  doi = {10.1007/978-3-030-16722-6_23},
  sha256 = {ee64749fba4796ed79cecfaa500731ef2ac5d5e795770c44b1e7ad358f955398},
  url = {https://www.sosy-lab.org/research/coop-testgen/},
  keyword = {CPAchecker,Software Model Checking,Software Testing},
}
-
Johannes Bürdek, Malte Lochau, Stefan Bauregger, Andreas Holzer, Alexander von Rhein, Sven Apel, and Dirk Beyer.
Facilitating Reuse in Multi-Goal Test-Suite Generation for Software Product Lines.
In A. Egyed and
I. Schaefer, editors,
Proceedings of the 18th International Conference on
Fundamental Approaches to Software Engineering (FASE 2015, London, UK, April 13-15),
LNCS 9033,
pages 84-99,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-46675-9_6
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{FASE15,
  author = {Johannes B{\"u}rdek and Malte Lochau and Stefan Bauregger and Andreas Holzer and Alexander von Rhein and Sven Apel and Dirk Beyer},
  title = {Facilitating Reuse in Multi-Goal Test-Suite Generation for Software Product Lines},
  booktitle = {Proceedings of the 18th International Conference on Fundamental Approaches to Software Engineering (FASE~2015, London, UK, April 13-15)},
  editor = {A.~Egyed and I.~Schaefer},
  pages = {84-99},
  year = {2015},
  series = {LNCS~9033},
  publisher = {Springer-Verlag, Heidelberg},
  isbn = {978-3-662-46674-2},
  doi = {10.1007/978-3-662-46675-9_6},
  sha256 = {fcd4d2f3155e3e061318a444f578c41c5e224a7c76e1bf161fe55cc7ae01ae86},
  url = {http://forsyte.at/software/cpatiger/},
  keyword = {CPAchecker,Software Model Checking,Software Testing},
}
Internal reports
-
Daniel Baier, Dirk Beyer, Po-Chun Chien, Marie-Christine Jakobs, Marek Jankola, Matthias Kettl, Nian-Ze Lee, Thomas Lemberger, Marian Lingsch-Rosenfeld, Henrik Wachowitz, and Philipp Wendler.
Software Verification with CPAchecker 3.0: Tutorial and User Guide (Extended Version).
Technical report 2409.02094, arXiv/CoRR,
September
2024.
doi:10.48550/arXiv.2409.02094
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Funding:
DFG-COOP,
DFG-CONVEY,
DFG-IDEFIX
Publisher's Version
PDF
Presentation
Supplement
Artifact(s)
Abstract
This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, k-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. It also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at https://cpachecker.sosy-lab.org/doc.php.
BibTeX Entry
@techreport{TechReport24c,
  author = {Daniel Baier and Dirk Beyer and Po-Chun Chien and Marie-Christine Jakobs and Marek Jankola and Matthias Kettl and Nian-Ze Lee and Thomas Lemberger and Marian Lingsch-Rosenfeld and Henrik Wachowitz and Philipp Wendler},
  title = {Software Verification with {CPAchecker} 3.0: {Tutorial} and User Guide (Extended Version)},
  number = {2409.02094},
  year = {2024},
  doi = {10.48550/arXiv.2409.02094},
  url = {https://cpachecker.sosy-lab.org},
  presentation = {https://www.sosy-lab.org/research/prs/2024-09-10_FM24_CPAchecker_Tutorial.pdf},
  abstract = {This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, <i>k</i>-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. It also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at <a href="https://cpachecker.sosy-lab.org/doc.php">https://cpachecker.sosy-lab.org/doc.php</a>.},
  keyword = {CPAchecker, Software Model Checking, Software Testing},
  annote = {This technical report is an extended version of our <a href="https://www.sosy-lab.org/research/bib/All/index.html#FM24a">paper</a> at FM 2024.},
  artifact = {10.5281/zenodo.13612338},
  funding = {DFG-COOP, DFG-CONVEY, DFG-IDEFIX},
  institution = {arXiv/CoRR},
  month = {September},
}
Additional Infos
This technical report is an extended version of our paper at FM 2024.
Disclaimer:
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.