Publications by year
2024
-
Software Verification with CPAchecker 3.0: Tutorial and User Guide (Extended Version).
Technical report 2409.02094, arXiv/CoRR,
September
2024.
doi:10.48550/arXiv.2409.02094
Keyword(s):
CPAchecker,
Software Model Checking,
Software Testing
Funding:
DFG-COOP,
DFG-CONVEY,
DFG-IDEFIX
Publisher's Version
PDF
Presentation
Supplement
Artifact(s)
Abstract
This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, k-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. It also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at https://cpachecker.sosy-lab.org/doc.php.
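As a concrete illustration (not taken from the tutorial itself), the sketch below shows the kind of C reachability task that CPAchecker and similar verifiers accept; the functions reach_error and __VERIFIER_nondet_int follow the SV-COMP conventions, and the program is a hypothetical example. The verifier is asked to prove that no execution reaches the call to reach_error; how to invoke CPAchecker on such tasks is described in the user guide linked above.

  /* Hypothetical example task, not taken from the tutorial. */
  extern int __VERIFIER_nondet_int(void);  /* arbitrary input value */
  extern void reach_error(void);           /* reaching this call violates the property */

  int main(void) {
    int x = __VERIFIER_nondet_int();
    if (x < 0) {
      x = 0;                               /* on every path, 0 <= x holds from here on */
    }
    if (x < 0) {
      reach_error();                       /* unreachable for all inputs */
    }
    return 0;
  }

BibTeX Entry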
@techreport{TechReport24c, author = {Daniel Baier and Dirk Beyer and Po-Chun Chien and Marie-Christine Jakobs and Marek Jankola and Matthias Kettl and Nian-Ze Lee and Thomas Lemberger and Marian Lingsch-Rosenfeld and Henrik Wachowitz and Philipp Wendler}, title = {Software Verification with {CPAchecker} 3.0: {Tutorial} and User Guide (Extended Version)}, number = {2409.02094}, year = {2024}, doi = {10.48550/arXiv.2409.02094}, url = {https://cpachecker.sosy-lab.org}, presentation = {https://www.sosy-lab.org/research/prs/2024-09-10_FM24_CPAchecker_Tutorial.pdf}, abstract = {This tutorial provides an introduction to CPAchecker for users. CPAchecker is a flexible and configurable framework for software verification and testing. The framework provides many abstract domains, such as BDDs, explicit values, intervals, memory graphs, and predicates, and many program-analysis and model-checking algorithms, such as abstract interpretation, bounded model checking, Impact, interpolation-based model checking, <i>k</i>-induction, PDR, predicate abstraction, and symbolic execution. This tutorial presents basic use cases for CPAchecker in formal software verification, focusing on its main verification techniques with their strengths and weaknesses. It also shows further use cases of CPAchecker for test-case generation and witness-based result validation. The envisioned readers are assumed to possess a background in automatic formal verification and program analysis, but prior knowledge of CPAchecker is not required. This tutorial and user guide is based on CPAchecker in version 3.0. This user guide's latest version and other documentation are available at <a href="https://cpachecker.sosy-lab.org/doc.php">https://cpachecker.sosy-lab.org/doc.php</a>.}, keyword = {CPAchecker, Software Model Checking, Software Testing}, annote = {This technical report is an extended version of our <a href="https://www.sosy-lab.org/research/bib/All/index.html#FM24a">paper</a> at FM 2024.}, artifact = {10.5281/zenodo.13612338}, funding = {DFG-COOP, DFG-CONVEY, DFG-IDEFIX}, institution = {arXiv/CoRR}, month = {September}, }Additional Infos
This technical report is an extended version of our paper at FM 2024.
-
MoXIchecker: An Extensible Model Checker for MoXI.
Technical report 2407.15551, arXiv/CoRR,
March
2024.
doi:10.48550/arXiv.2407.15551
Keyword(s):
Btor2
Funding:
DFG-CONVEY,
DFG-BRIDGE
Publisher's Version
PDF
Supplement
Artifact(s)
Abstract
MoXI is a new intermediate verification language introduced in 2024 to promote the standardization and open-source implementations for symbolic model checking by extending the SMT-LIB 2 language with constructs to define state-transition systems. The tool suite of MoXI provides a translator from MoXI to Btor2, which is a lower-level intermediate language for hardware verification, and a translation-based model checker, which invokes mature hardware model checkers for Btor2 to analyze the translated verification tasks. The extensibility of such a translation-based model checker is restricted because more complex theories, such as integer or real arithmetics, cannot be precisely expressed with bit-vectors of fixed lengths in Btor2. We present MoXIchecker, the first model checker that solves MoXI verification tasks directly. Instead of translating MoXI to lower-level languages, MoXIchecker uses the solver-agnostic library PySMT for SMT solvers as backend for its verification algorithms. MoXIchecker is extensible because it accommodates verification tasks involving more complex theories, not limited by lower-level languages, facilitates the implementation of new algorithms, and is solver-agnostic by using the API of PySMT. In our evaluation, MoXIchecker uniquely solved tasks that use integer or real arithmetics, and achieved a comparable performance against the translation-based model checker from the MoXI tool suite.
BibTeX Entry
@techreport{TechReport24b, author = {Salih Ates and Dirk Beyer and Po-Chun Chien and Nian-Ze Lee}, title = {{MoXIchecker}: {An} Extensible Model Checker for {MoXI}}, number = {2407.15551}, year = {2024}, doi = {10.48550/arXiv.2407.15551}, url = {https://gitlab.com/sosy-lab/software/moxichecker}, pdf = {https://arxiv.org/abs/2407.15551}, abstract = {MoXI is a new intermediate verification language introduced in 2024 to promote the standardization and open-source implementations for symbolic model checking by extending the SMT-LIB 2 language with constructs to define state-transition systems. The tool suite of MoXI provides a translator from MoXI to Btor2, which is a lower-level intermediate language for hardware verification, and a translation-based model checker, which invokes mature hardware model checkers for Btor2 to analyze the translated verification tasks. The extensibility of such a translation-based model checker is restricted because more complex theories, such as integer or real arithmetics, cannot be precisely expressed with bit-vectors of fixed lengths in Btor2. We present MoXIchecker, the first model checker that solves MoXI verification tasks directly. Instead of translating MoXI to lower-level languages, MoXIchecker uses the solver-agnostic library PySMT for SMT solvers as backend for its verification algorithms. MoXIchecker is extensible because it accommodates verification tasks involving more complex theories, not limited by lower-level languages, facilitates the implementation of new algorithms, and is solver-agnostic by using the API of PySMT. In our evaluation, MoXIchecker uniquely solved tasks that use integer or real arithmetics, and achieved a comparable performance against the translation-based model checker from the MoXI tool suite.}, keyword = {Btor2}, artifact = {10.5281/zenodo.12787654}, funding = {DFG-CONVEY,DFG-BRIDGE}, institution = {arXiv/CoRR}, month = {March}, } -
Augmenting Interpolation-Based Model Checking with Auxiliary Invariants (Extended Version).
Technical report 2403.07821, arXiv/CoRR,
March
2024.
doi:10.48550/arXiv.2403.07821
Keyword(s):
Software Model Checking,
Cooperative Verification,
CPAchecker
Funding:
DFG-CONVEY
Publisher's Version
PDF
Supplement
Artifact(s)
Abstract
Software model checking is a challenging problem, and generating relevant invariants is a key factor in proving the safety properties of a program. Program invariants can be obtained by various approaches, including lightweight procedures based on data-flow analysis and intensive techniques using Craig interpolation. Although data-flow analysis runs efficiently, it often produces invariants that are too weak to prove the properties. By contrast, interpolation-based approaches build strong invariants from interpolants, but they might not scale well due to expensive interpolation procedures. Invariants can also be injected into model-checking algorithms to assist the analysis. Invariant injection has been studied for many well-known approaches, including k-induction, predicate abstraction, and symbolic execution. We propose an augmented interpolation-based verification algorithm that injects external invariants into interpolation-based model checking (McMillan, 2003), a hardware model-checking algorithm recently adopted for software verification. The auxiliary invariants help prune unreachable states in Craig interpolants and confine the analysis to the reachable parts of a program. We implemented the proposed technique in the verification framework CPAchecker and evaluated it against mature SMT-based methods in CPAchecker as well as other state-of-the-art software verifiers. We found that injecting invariants reduces the number of interpolation queries needed to prove safety properties and improves the run-time efficiency. Consequently, the proposed invariant-injection approach verified difficult tasks that none of its plain version (i.e., without invariants), the invariant generator, or any compared tools could solve.
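For illustration only (this example does not appear in the report), the hypothetical loop below shows where an auxiliary invariant from a lightweight data-flow analysis can strengthen interpolation-based model checking.

  extern void reach_error(void);

  int main(void) {
    unsigned int x = 0;
    while (x < 100) {
      x = x + 2;                 /* x stays even and never exceeds 100 */
    }
    /* Auxiliary invariant from a simple interval/congruence analysis:
       x <= 100 && x % 2 == 0.  Injecting it into the interpolation queries
       prunes spurious states such as x == 101 from the interpolants. */
    if (x != 100) {
      reach_error();
    }
    return 0;
  }

BibTeX Entry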
@techreport{TechReport24a, author = {Dirk Beyer and Po-Chun Chien and Nian-Ze Lee}, title = {Augmenting Interpolation-Based Model Checking with Auxiliary Invariants (Extended Version)}, number = {2403.07821}, year = {2024}, doi = {10.48550/arXiv.2403.07821}, url = {https://www.sosy-lab.org/research/imc-df/}, pdf = {https://arxiv.org/abs/2403.07821}, abstract = {Software model checking is a challenging problem, and generating relevant invariants is a key factor in proving the safety properties of a program. Program invariants can be obtained by various approaches, including lightweight procedures based on data-flow analysis and intensive techniques using Craig interpolation. Although data-flow analysis runs efficiently, it often produces invariants that are too weak to prove the properties. By contrast, interpolation-based approaches build strong invariants from interpolants, but they might not scale well due to expensive interpolation procedures. Invariants can also be injected into model-checking algorithms to assist the analysis. Invariant injection has been studied for many well-known approaches, including <i>k</i>-induction, predicate abstraction, and symbolic execution. We propose an augmented interpolation-based verification algorithm that injects external invariants into interpolation-based model checking (McMillan, 2003), a hardware model-checking algorithm recently adopted for software verification. The auxiliary invariants help prune unreachable states in Craig interpolants and confine the analysis to the reachable parts of a program. We implemented the proposed technique in the verification framework CPAchecker and evaluated it against mature SMT-based methods in CPAchecker as well as other state-of-the-art software verifiers. We found that injecting invariants reduces the number of interpolation queries needed to prove safety properties and improves the run-time efficiency. Consequently, the proposed invariant-injection approach verified difficult tasks that none of its plain version (i.e., without invariants), the invariant generator, or any compared tools could solve.}, keyword = {Software Model Checking, Cooperative Verification, CPAchecker}, annote = {This technical report is an extended version of our <a href="https://www.sosy-lab.org/research/bib/All/index.html#SPIN24b">paper</a> at SPIN 2024.}, artifact = {10.5281/zenodo.10548594}, funding = {DFG-CONVEY}, institution = {arXiv/CoRR}, month = {March}, }Additional Infos
This technical report is an extended version of our paper at SPIN 2024.
-
Bridging Hardware and Software Formal Verification (Extended Abstract).
Technical report 2024-06, LMU Munich,
2024.
Keyword(s):
Software Model Checking,
Cooperative Verification,
Btor2,
CPAchecker,
Witness-Based Validation
Funding:
DFG-CONVEY
PDF
Abstract
Modern technology relies heavily on the integration of hardware and software systems, from embedded devices in consumer electronics to safety-critical controllers. Despite their interdependence, the tools and methods used for verifying the correctness and reliability of these systems are often segregated, meaning that the advancement in one community cannot benefit another directly. Addressing this challenge, my dissertation aims at bridging the gap between hardware and software formal analysis. This involves translating representations of verification tasks, generating certificates for verification results, integrating state-of-the-art formal analysis tools into a cohesive framework, and adapting and combining model-checking algorithms across domains. By translating word-level hardware circuits into C programs, we found out that software analyzers were able to identify property violations that well-established hardware verifiers failed to detect. Moreover, by adopting interpolation-based hardware-verification algorithms for software analysis, we were able to tackle tasks unsolvable by existing methods. Our research consolidates knowledge from both hardware and software domains, paving a pathway for comprehensive system-level verification.
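As an illustrative sketch (not part of the abstract), one way to translate a word-level sequential circuit into a C program is to model registers as program variables that are updated in an unbounded loop; the example below is hypothetical and simplified.

  extern unsigned int __VERIFIER_nondet_uint(void);
  extern void reach_error(void);

  int main(void) {
    unsigned int counter = 0;                             /* a register, initialized to 0 */
    while (1) {                                           /* one iteration = one clock cycle */
      unsigned int enable = __VERIFIER_nondet_uint() & 1u;  /* primary input */
      if (enable) {
        counter = (counter + 1u) & 0xFu;                  /* 4-bit counter with wrap-around */
      }
      if (counter > 15u) {                                /* safety property of the circuit */
        reach_error();                                    /* unreachable: counter is masked to 4 bits */
      }
    }
    return 0;
  }

BibTeX Entry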
@techreport{chien:fm24-doc-symposium, author = {Po-Chun Chien}, title = {Bridging Hardware and Software Formal Verification (Extended Abstract)}, number = {2024-06}, year = {2024}, pdf = {https://www.sosy-lab.org/research/pub/2024-FM_Doctoral_Symposium.Bridging_Hardware_and_Software_Formal_Verification_Extended_Abstract.pdf}, abstract = {Modern technology relies heavily on the integration of hardware and software systems, from embedded devices in consumer electronics to safety-critical controllers. Despite their interdependence, the tools and methods used for verifying the correctness and reliability of these systems are often segregated, meaning that the advancement in one community cannot benefit another directly. Addressing this challenge, my dissertation aims at bridging the gap between hardware and software formal analysis. This involves translating representations of verification tasks, generating certificates for verification results, integrating state-of-the-art formal analysis tools into a cohesive framework, and adapting and combining model-checking algorithms across domains. By translating word-level hardware circuits into C programs, we found out that software analyzers were able to identify property violations that well-established hardware verifiers failed to detect. Moreover, by adopting interpolation-based hardware-verification algorithms for software analysis, we were able to tackle tasks unsolvable by existing methods. Our research consolidates knowledge from both hardware and software domains, paving a pathway for comprehensive system-level verification.}, keyword = {Software Model Checking, Cooperative Verification, Btor2, CPAchecker, Witness-Based Validation}, annote = {An extended abstract of the dissertation project. Submitted to the Doctoral Symposium of FM 2024.}, funding = {DFG-CONVEY}, institution = {LMU Munich}, }Additional Infos
An extended abstract of the dissertation project. Submitted to the Doctoral Symposium of FM 2024.
2023
-
Incorporating Data Dependencies and Properties in Difference Verification
with Conditions (Technical Report).
Technical report 2309.01585, arXiv/CoRR,
2023.
doi:10.48550/arXiv.2309.01585
Keyword(s):
Incremental Verification,
Regression Verification,
Software Model Checking,
CPAchecker
Funding:
Software-Factory 4.0,
DFG-ReVeriX
Publisher's Version
PDF
Artifact(s)
BibTeX Entry
@techreport{diffDP-TechReport, author = {Marie{-}Christine Jakobs and Tim Pollandt}, title = {Incorporating Data Dependencies and Properties in Difference Verification with Conditions (Technical Report)}, number = {2309.01585}, year = {2023}, doi = {10.48550/arXiv.2309.01585}, url = {}, pdf = {}, keyword = {Incremental Verification, Regression Verification, Software Model Checking, CPAchecker}, annote = {This technical report is an extended version of our <a href="https://www.sosy-lab.org/research/bib/All/index.html#diffDP">paper</a> at iFM 2023.}, artifact = {10.5281/zenodo.8272913}, funding = {Software-Factory 4.0, DFG-ReVeriX}, institution = {arXiv/CoRR}, }
Additional Infos
This technical report is an extended version of our paper at iFM 2023.
2022
-
Interpolation and SAT-Based Model Checking Revisited: Adoption to Software Verification.
Technical report 2208.05046, arXiv/CoRR,
August
2022.
doi:10.48550/arXiv.2208.05046
Keyword(s):
Software Model Checking,
CPAchecker
Publisher's Version
PDF
Presentation
Supplement
Abstract
Interpolation-based model checking (McMillan, 2003) is a formal-verification algorithm, which was originally devised to verify safety properties of finite-state transition systems. The algorithm is state-of-the-art in hardware model checking. It derives interpolants from unsatisfiable BMC queries, and collects them to construct an overapproximation of the set of reachable states. Unlike other formal-verification algorithms, such as k-induction or PDR, which have been extended to handle infinite-state systems and investigated for program analysis, McMillan's interpolation-based model checking algorithm from 2003 has not been used to verify programs so far. This paper closes this significant, 19 years old gap in knowledge by adopting the algorithm to software verification. We implemented it in the verification framework CPAchecker, and evaluated the implementation against other state-of-the-art software-verification techniques over the largest publicly available benchmark suite of C safety-verification tasks. The evaluation demonstrates that interpolation-based model checking is competitive among other algorithms in terms of both the number of solved verification tasks and the run-time efficiency. Our results might have important implications for software verification, because researchers and developers now have a richer set of approaches to choose from.
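A hypothetical C example of this setting: no finite unrolling by bounded model checking covers all executions of the loop, whereas interpolants derived from unsatisfiable BMC queries can converge to an inductive invariant such as x == y that proves the property.

  extern int __VERIFIER_nondet_int(void);
  extern void reach_error(void);

  int main(void) {
    unsigned int x = 0, y = 0;
    while (__VERIFIER_nondet_int()) {   /* the loop may run arbitrarily often */
      x = x + 1;
      y = y + 1;
    }
    /* The overapproximation x == y of the reachable states is inductive
       and suffices to show that the error location is unreachable. */
    if (y < x) {
      reach_error();
    }
    return 0;
  }

BibTeX Entry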
@techreport{TechReport22a, author = {Dirk Beyer and Nian-Ze Lee and Philipp Wendler}, title = {Interpolation and SAT-Based Model Checking Revisited: Adoption to Software Verification}, number = {2208.05046}, year = {2022}, doi = {10.48550/arXiv.2208.05046}, url = {https://www.sosy-lab.org/research/cpa-imc/}, presentation = {https://www.sosy-lab.org/research/prs/2022-08-11_iPRA22_Interpolation_and_SAT-Based_Model_Checking_Revisited.pdf}, abstract = {Interpolation-based model checking <a href="https://doi.org/10.1007/978-3-540-45069-6_1">(McMillan, 2003)</a> is a formal-verification algorithm, which was originally devised to verify safety properties of finite-state transition systems. The algorithm is state-of-the-art in hardware model checking. It derives interpolants from unsatisfiable BMC queries, and collects them to construct an overapproximation of the set of reachable states. Unlike other formal-verification algorithms, such as k-induction or PDR, which have been extended to handle infinite-state systems and investigated for program analysis, McMillan's <em>interpolation-based model checking</em> algorithm from 2003 has not been used to verify programs so far. This paper closes this significant, 19 years old gap in knowledge by adopting the algorithm to software verification. We implemented it in the verification framework CPAchecker, and evaluated the implementation against other state-of-the-art software-verification techniques over the largest publicly available benchmark suite of C safety-verification tasks. The evaluation demonstrates that interpolation-based model checking is competitive among other algorithms in terms of both the number of solved verification tasks and the run-time efficiency. Our results might have important implications for software verification, because researchers and developers now have a richer set of approaches to choose from.}, keyword = {Software Model Checking, CPAchecker}, _pdf = {https://www.sosy-lab.org/research/cpa-imc/Interpolation_and_SAT-Based_Model_Checking_Revisited.pdf}, institution = {arXiv/CoRR}, month = {August}, }
2021
-
Towards a Benchmark Set for Program Repair Based on Partial Fixes.
Technical report 2107.08038, arXiv/CoRR,
July
2021.
doi:10.48550/arXiv.2107.08038
Publisher's Version
PDF
BibTeX Entry
@techreport{TechReport21a, author = {Dirk Beyer and Lars Grunske and Thomas Lemberger and Minxing Tang}, title = {Towards a Benchmark Set for Program Repair Based on Partial Fixes}, number = {2107.08038}, year = {2021}, doi = {10.48550/arXiv.2107.08038}, keyword = {}, institution = {arXiv/CoRR}, month = {July}, }
2019
-
Software Verification with PDR: Implementation and Empirical Evaluation of the State of the Art.
Technical report 1908.06271, arXiv/CoRR,
August
2019.
doi:10.48550/arXiv.1908.06271
Publisher's Version
PDF
BibTeX Entry
@techreport{TechReport19b, author = {Dirk Beyer and Matthias Dangl}, title = {Software Verification with PDR: Implementation and Empirical Evaluation of the State of the Art}, number = {1908.06271}, year = {2019}, doi = {10.48550/arXiv.1908.06271}, institution = {arXiv/CoRR}, month = {August}, }
-
Verification Artifacts in Cooperative Verification: Survey and Unifying Component Framework.
Technical report 1905.08505, arXiv/CoRR,
May
2019.
doi:10.48550/arXiv.1905.08505
Publisher's Version
PDF
BibTeX Entry
@techreport{TechReport19a, author = {Dirk Beyer and Heike Wehrheim}, title = {Verification Artifacts in Cooperative Verification: Survey and Unifying Component Framework}, number = {1905.08505}, year = {2019}, doi = {10.48550/arXiv.1905.08505}, institution = {arXiv/CoRR}, month = {May}, }
2018
-
VerifyThis 2018: A Program Verification Competition.
Technical report hal-01981937, Université Paris-Saclay,
2018.
PDF
Supplement
BibTeX Entry
@techreport{ernst:verifythis2018, author = {Marieke Huisman and Rosemary Monahan and Peter Müller and Andrei Paskevich and Gidon Ernst}, title = {{VerifyThis 2018: A Program Verification Competition}}, number = {hal-01981937}, year = {2018}, url = {https://hal.inria.fr/hal-01981937}, pdf = {https://www.sosy-lab.org/research/pub/2018-HAL.VerifyThis_2018.pdf}, institution = {Universit{\'e} Paris-Saclay}, }
2015
-
Combining k-Induction with Continuously-Refined Invariants.
Technical report MIP-1503, Department of Computer Science and Mathematics (FIM),
University of Passau (PA),
January
2015.
doi:10.48550/arXiv.1502.00096
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
BibTeX Entry
@techreport{TR1503-PA15, author = {Dirk Beyer and Matthias Dangl and Philipp Wendler}, title = {Combining k-Induction with Continuously-Refined Invariants}, number = {MIP-1503}, year = {2015}, doi = {10.48550/arXiv.1502.00096}, url = {https://www.sosy-lab.org/research/cpa-k-induction/}, keyword = {CPAchecker,Software Model Checking}, annote = {An <a href="https://www.sosy-lab.org/research/bib/Year/2015.complete.html#CAV15">abbreviated version</a> of this article appeared in Proc. CAV 2015.}, institution = {Department of Computer Science and Mathematics (FIM), University of Passau (PA)}, month = {January}, }
Additional Infos
An abbreviated version of this article appeared in Proc. CAV 2015.
-
Domain-Type-Guided Refinement Selection Based on Sliced Path Prefixes.
Technical report MIP-1501, Department of Computer Science and Mathematics (FIM),
University of Passau (PA),
January
2015.
doi:10.48550/arXiv.1502.00045
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
BibTeX Entry
@techreport{TR1501-PA15, author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler}, title = {Domain-Type-Guided Refinement Selection Based on Sliced Path Prefixes}, number = {MIP-1501}, year = {2015}, doi = {10.48550/arXiv.1502.00045}, url = {https://www.sosy-lab.org/research/cpa-ref-sel/}, keyword = {CPAchecker,Software Model Checking}, annote = {Extended publications based on this article appeared in <a href="https://www.sosy-lab.org/research/bib/Year/2015.complete.html#FORTE15">Proc. FORTE 2015</a> and <a href="https://www.sosy-lab.org/research/bib/Year/2015.complete.html#SPIN15b">Proc. SPIN 2015</a>.}, institution = {Department of Computer Science and Mathematics (FIM), University of Passau (PA)}, month = {January}, }
Additional Infos
Extended publications based on this article appeared in Proc. FORTE 2015 and Proc. SPIN 2015.
2014
-
Crash-safe refinement for a verified Flash file system.
Technical report 2014-02, University of Augsburg,
2014.
PDF
BibTeX Entry
@techreport{ernst:tr2014-02, author = {Jörg Pfähler and Gidon Ernst and Gerhard Schellhorn and Dominik Haneberg and Wolfgang Reif}, title = {{Crash-safe refinement for a verified Flash file system}}, number = {2014-02}, year = {2014}, pdf = {https://www.sosy-lab.org/research/pub/2014-TR.Crash-Safe_Refinement_for_a_Verified_Flash_File_System.pdf}, institution = {University of Augsburg}, type = {Technical Report}, }
-
Translating separation logic into dynamic frames using fine-grained region logic.
Technical report CS-TR-13-02a, University of Central Florida,
2014.
BibTeX Entry
@techreport{ernst:bao2014, author = {Yuyan Bao and Gary Leavens and Gidon Ernst}, title = {{Translating separation logic into dynamic frames using fine-grained region logic}}, number = {CS-TR-13-02a}, year = {2014}, institution = {University of Central Florida}, type = {Technical Report}, }
2013
-
Reusing Precisions for Efficient Regression Verification.
Technical report MIP-1302, Department of Computer Science and Mathematics (FIM),
University of Passau (PA),
May
2013.
doi:10.48550/arXiv.1305.6915
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems, namely, 59 device drivers with 1119 revisions from the Linux kernel.
BibTeX Entry
@techreport{TR1302-PA13, author = {Dirk Beyer and Stefan L{\"o}we and Evgeny Novikov and Andreas Stahlbauer and Philipp Wendler}, title = {Reusing Precisions for Efficient Regression Verification}, number = {MIP-1302}, year = {2013}, doi = {10.48550/arXiv.1305.6915}, url = {https://www.sosy-lab.org/research/cpa-reuse/}, abstract = {Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems, namely, 59 device drivers with 1119 revisions from the Linux kernel.}, keyword = {CPAchecker,Software Model Checking}, annote = {An <a href="https://www.sosy-lab.org/research/bib/Year/2013.complete.html#FSE13">abbreviated version</a> of this article appeared in Proc. ESEC/FSE 2013.}, institution = {Department of Computer Science and Mathematics (FIM), University of Passau (PA)}, month = {May}, }Additional Infos
An abbreviated version of this article appeared in Proc. ESEC/FSE 2013.
-
Domain Types: Selecting Abstractions Based on Variable Usage.
Technical report MIP-1303, Department of Computer Science and Mathematics (FIM),
University of Passau (PA),
May
2013.
doi:10.48550/arXiv.1305.6640
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
The success of software model checking depends on finding an appropriate abstraction of the subject program. The choice of the abstract domain and the analysis configuration is currently left to the user, who may not be familiar with the tradeoffs and performance details of the available abstract domains. We introduce the concept of domain types, which classify the program variables into types that are more fine-grained than standard declared types, such as int or long, in order to guide the selection of an appropriate abstract domain for a model checker. Our implementation determines the domain type for each variable in a pre-processing step, based on the variable usage in the program, and then assigns each variable to an abstract domain. The model-checking framework that we use supports to specify a separate analysis precision for each abstract domain, such that we can freely configure the analysis. We experimentally demonstrate a significant impact of the choice of the abstract domain per variable. We consider one explicit (hash tables for integer values) and one symbolic (binary decision diagrams) domain. The experiments are based on standard verification tasks that are taken from recent competitions on software verification. Each abstract domain has unique advantages in representing the state space of variables of a certain domain type. Our experiments show that software model checkers can be improved with a domain-type guided combination of abstract domains.
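A hypothetical snippet illustrating the idea: both variables are declared as int, but their usage suggests different domain types and therefore different abstract domains.

  int main(void) {
    int is_open = 0;     /* used only as a boolean flag: well suited to a BDD-based domain */
    int bytes_sent = 0;  /* used arithmetically: better handled by an explicit-value or interval domain */

    for (int i = 0; i < 10; i++) {
      if (!is_open) {
        is_open = 1;                   /* only ever assigned 0 or 1 */
      }
      bytes_sent = bytes_sent + 512;   /* counted and compared */
    }
    return is_open && bytes_sent > 0;
  }

BibTeX Entry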
@techreport{TR1303-PA13, author = {Sven Apel and Dirk Beyer and Karlheinz Friedberger and Franco Raimondi and Alexander von Rhein}, title = {Domain Types: Selecting Abstractions Based on Variable Usage}, number = {MIP-1303}, year = {2013}, doi = {10.48550/arXiv.1305.6640}, url = {https://www.sosy-lab.org/research/domaintypes/}, abstract = {The success of software model checking depends on finding an appropriate abstraction of the subject program. The choice of the abstract domain and the analysis configuration is currently left to the user, who may not be familiar with the tradeoffs and performance details of the available abstract domains. We introduce the concept of domain types, which classify the program variables into types that are more fine-grained than standard declared types, such as int or long, in order to guide the selection of an appropriate abstract domain for a model checker. Our implementation determines the domain type for each variable in a pre-processing step, based on the variable usage in the program, and then assigns each variable to an abstract domain. The model-checking framework that we use supports to specify a separate analysis precision for each abstract domain, such that we can freely configure the analysis. We experimentally demonstrate a significant impact of the choice of the abstract domain per variable. We consider one explicit (hash tables for integer values) and one symbolic (binary decision diagrams) domain. The experiments are based on standard verification tasks that are taken from recent competitions on software verification. Each abstract domain has unique advantages in representing the state space of variables of a certain domain type. Our experiments show that software model checkers can be improved with a domain-type guided combination of abstract domains.}, keyword = {CPAchecker,Software Model Checking}, annote = {}, institution = {Department of Computer Science and Mathematics (FIM), University of Passau (PA)}, month = {May}, }
2012
-
Explicit-Value Analysis Based on CEGAR and Interpolation.
Technical report MIP-1205, Department of Computer Science and Mathematics (FIM),
University of Passau (PA),
December
2012.
doi:10.48550/arXiv.1212.6542
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
BibTeX Entry
@techreport{TR1205-PA12, author = {Dirk Beyer and Stefan L{\"o}we}, title = {Explicit-Value Analysis Based on {CEGAR} and Interpolation}, number = {MIP-1205}, year = {2012}, doi = {10.48550/arXiv.1212.6542}, url = {}, abstract = {}, keyword = {CPAchecker,Software Model Checking}, annote = {}, institution = {Department of Computer Science and Mathematics (FIM), University of Passau (PA)}, month = {December}, }
2011
-
Feature-Aware Verification.
Technical report MIP-1105, Department of Computer Science and Mathematics (FIM),
University of Passau (PA),
September
2011.
doi:10.48550/arXiv.1110.0021
Keyword(s):
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
A software product line is a set of software products that are distinguished in terms of features (i.e., end-user-visible units of behavior). Feature interactions -situations in which the combination of features leads to emergent and possibly critical behavior- are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines. Feature-aware verification uses product-line verification techniques and supports the specification of feature properties along with the features in separate and composable units. It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLverifier for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only feature-local knowledge, and that variability encoding significantly improves the verification performance when proving the absence of interactions.
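An illustrative, hypothetical sketch of variability encoding: feature selection becomes a nondeterministic program variable, so a single verification run covers all feature combinations instead of checking each product separately.

  extern int __VERIFIER_nondet_int(void);
  extern void reach_error(void);

  int FEATURE_ENCRYPT;   /* 1 if the Encrypt feature is selected in a product */
  int FEATURE_FORWARD;   /* 1 if the Forward feature is selected */

  void send_mail(int encrypted) {
    /* feature-local specification: a product with Encrypt must never forward plain text */
    if (FEATURE_FORWARD && FEATURE_ENCRYPT && !encrypted) {
      reach_error();
    }
  }

  int main(void) {
    FEATURE_ENCRYPT = __VERIFIER_nondet_int() ? 1 : 0;   /* all products in one run */
    FEATURE_FORWARD = __VERIFIER_nondet_int() ? 1 : 0;
    int encrypted = FEATURE_ENCRYPT ? 1 : 0;
    send_mail(encrypted);
    return 0;
  }

BibTeX Entry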
@techreport{TR1105-PA11, author = {Sven Apel and Hendrik Speidel and Philipp Wendler and Alexander von Rhein and Dirk Beyer}, title = {Feature-Aware Verification}, number = {MIP-1105}, year = {2011}, doi = {10.48550/arXiv.1110.0021}, url = {http://fosd.net/FAV/}, abstract = {A software product line is a set of software products that are distinguished in terms of features (i.e., end-user--visible units of behavior). Feature interactions ---situations in which the combination of features leads to emergent and possibly critical behavior--- are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines. Feature-aware verification uses product-line verification techniques and supports the specification of feature properties along with the features in separate and composable units. It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLverifier for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only feature-local knowledge, and that variability encoding significantly improves the verification performance when proving the absence of interactions.}, keyword = {Software Model Checking}, annote = {An <a href="https://www.sosy-lab.org/research/bib/Year/2011.complete.html#ASE11">abbreviated version</a> of this article appeared in Proc. ASE 2011.}, institution = {Department of Computer Science and Mathematics (FIM), University of Passau (PA)}, month = {September}, }Additional Infos
An abbreviated version of this article appeared in Proc. ASE 2011.
-
Conditional Model Checking.
Technical report MIP-1107, Department of Computer Science and Mathematics (FIM),
University of Passau (PA),
September
2011.
doi:10.48550/arXiv.1109.6926
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Software model checking, as an undecidable problem, has three possible outcomes: (1) the program satisfies the specification, (2) the program does not satisfy the specification, and (3) the model checker fails. The third outcome usually manifests itself in a space-out, time-out, or one component of the verification tool giving up; in all of these failing cases, significant computation is performed by the verification tool before the failure, but no result is reported. We propose to reformulate the model-checking problem as follows, in order to have the verification tool report a summary of the performed work even in case of failure: given a program and a specification, the model checker returns a condition P -usually a state predicate- such that the program satisfies the specification under the condition P -that is, as long as the program does not leave states in which P is satisfied. We are of course interested in model checkers that return conditions P that are as weak as possible. Instead of outcome (1), the model checker will return P = true; instead of (2), the condition P will return the part of the state space that satisfies the specification; and in case (3), the condition P can summarize the work that has been performed by the model checker before space-out, time-out, or giving up. If complete verification is necessary, then a different verification method or tool may be used to focus on the states that violate the condition. We give such conditions as input to a conditional model checker, such that the verification problem is restricted to the part of the state space that satisfies the condition. Our experiments show that repeated application of conditional model checkers, using different conditions, can significantly improve the verification results, state-space coverage, and performance.
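A hypothetical example of what a returned condition can look like: the verifier handles the simple branch but exhausts its resources on the branch with nonlinear arithmetic, and instead of reporting nothing it returns the condition P: n < 0 under which the program is verified.

  extern int __VERIFIER_nondet_int(void);
  extern void reach_error(void);

  int main(void) {
    int n = __VERIFIER_nondet_int();
    if (n < 0) {
      if (n > 0) { reach_error(); }       /* easy: trivially unreachable */
    } else {
      long long p = (long long) n * n;    /* nonlinear arithmetic, expensive to reason about */
      if (p < 0) { reach_error(); }
    }
    return 0;
  }
  /* A conditional model checker that gives up on the second branch can still
     report P: (n < 0); another tool may then focus on the states violating P. */

BibTeX Entry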
@techreport{TR1107-PA11, author = {Dirk Beyer and Thomas A. Henzinger and M. Erkan Keremoglu and Philipp Wendler}, title = {Conditional Model Checking}, number = {MIP-1107}, year = {2011}, doi = {10.48550/arXiv.1109.6926}, url = {https://www.sosy-lab.org/~dbeyer/cpa-cmc/}, abstract = {Software model checking, as an undecidable problem, has three possible outcomes: (1) the program satisfies the specification, (2) the program does not satisfy the specification, and (3) the model checker fails. The third outcome usually manifests itself in a space-out, time-out, or one component of the verification tool giving up; in all of these failing cases, significant computation is performed by the verification tool before the failure, but no result is reported. We propose to reformulate the model-checking problem as follows, in order to have the verification tool report a summary of the performed work even in case of failure: given a program and a specification, the model checker returns a condition P ---usually a state predicate--- such that the program satisfies the specification under the condition P ---that is, as long as the program does not leave states in which P is satisfied. We are of course interested in model checkers that return conditions P that are as weak as possible. Instead of outcome (1), the model checker will return P = true; instead of (2), the condition P will return the part of the state space that satisfies the specification; and in case (3), the condition P can summarize the work that has been performed by the model checker before space-out, time-out, or giving up. If complete verification is necessary, then a different verification method or tool may be used to focus on the states that violate the condition. We give such conditions as input to a conditional model checker, such that the verification problem is restricted to the part of the state space that satisfies the condition. Our experiments show that repeated application of conditional model checkers, using different conditions, can significantly improve the verification results, state-space coverage, and performance.}, keyword = {CPAchecker,Software Model Checking}, annote = {An <a href="https://www.sosy-lab.org/research/bib/Year/2012.complete.html#FSE12">abbreviated version</a> of this article appeared in Proc. FSE 2012.}, institution = {Department of Computer Science and Mathematics (FIM), University of Passau (PA)}, month = {September}, }Additional Infos
An abbreviated version of this article appeared in Proc. FSE 2012.
2009
-
Software Model Checking via Large-Block Encoding.
Technical report SFU-CS-2009-09, School of Computing Science (CMPT),
Simon Fraser University (SFU),
April
2009.
doi:10.48550/arXiv.0904.4709
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Abstract
The construction and analysis of an abstract reachability tree (ART) are the basis for a successful method for software verification. The ART represents unwindings of the control-flow graph of the program. Traditionally, a transition of the ART represents a single block of the program, and therefore, we call this approach single-block encoding (SBE). SBE may result in a huge number of program paths to be explored, which constitutes a fundamental source of inefficiency. We propose a generalization of the approach, in which transitions of the ART represent larger portions of the program; we call this approach large-block encoding (LBE). LBE may reduce the number of paths to be explored up to exponentially. Within this framework, we also investigate symbolic representations: for representing abstract states, in addition to conjunctions as used in SBE, we investigate the use of arbitrary Boolean formulas; for computing abstract-successor states, in addition to Cartesian predicate abstraction as used in SBE, we investigate the use of Boolean predicate abstraction. The new encoding leverages the efficiency of state-of-the-art SMT solvers, which can symbolically compute abstract large-block successors. Our experiments on benchmark C programs show that the large-block encoding outperforms the single-block encoding.
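A hypothetical fragment that makes the difference concrete: with single-block encoding every if-then-else diamond doubles the number of explored paths, while large-block encoding summarizes the whole loop-free region in a single formula that is handed to the SMT solver.

  extern int __VERIFIER_nondet_int(void);
  extern void reach_error(void);

  int main(void) {
    int a = __VERIFIER_nondet_int();
    int b = __VERIFIER_nondet_int();
    int x = 0, y = 0;

    if (a) { x = 1; } else { x = 2; }     /* diamond 1 */
    if (b) { y = 1; } else { y = 2; }     /* diamond 2 */
    /* SBE explores 4 paths through the two diamonds; LBE encodes them as one
       block formula, roughly:
       ((a != 0 && x' = 1) || (a == 0 && x' = 2)) && ((b != 0 && y' = 1) || (b == 0 && y' = 2)) */
    if (x + y > 4) {
      reach_error();                      /* unreachable: x + y is at most 4 */
    }
    return 0;
  }

BibTeX Entry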
@techreport{TR009-SFU09, author = {Dirk Beyer and Alessandro Cimatti and Alberto Griggio and M. Erkan Keremoglu and Roberto Sebastiani}, title = {Software Model Checking via Large-Block Encoding}, number = {SFU-CS-2009-09}, year = {2009}, doi = {10.48550/arXiv.0904.4709}, url = {}, abstract = {The construction and analysis of an abstract reachability tree (ART) are the basis for a successful method for software verification. The ART represents unwindings of the control-flow graph of the program. Traditionally, a transition of the ART represents a single block of the program, and therefore, we call this approach single-block encoding (SBE). SBE may result in a huge number of program paths to be explored, which constitutes a fundamental source of inefficiency. We propose a generalization of the approach, in which transitions of the ART represent larger portions of the program; we call this approach large-block encoding (LBE). LBE may reduce the number of paths to be explored up to exponentially. Within this framework, we also investigate symbolic representations: for representing abstract states, in addition to conjunctions as used in SBE, we investigate the use of arbitrary Boolean formulas; for computing abstract-successor states, in addition to Cartesian predicate abstraction as used in SBE, we investigate the use of Boolean predicate abstraction. The new encoding leverages the efficiency of state-of-the-art SMT solvers, which can symbolically compute abstract large-block successors. Our experiments on benchmark C programs show that the large-block encoding outperforms the single-block encoding.}, keyword = {CPAchecker,Software Model Checking}, annote = {}, institution = {School of Computing Science (CMPT), Simon Fraser University (SFU)}, month = {April}, } -
CPAchecker: A Tool for Configurable Software Verification.
Technical report SFU-CS-2009-02, School of Computing Science (CMPT),
Simon Fraser University (SFU),
January
2009.
doi:10.48550/arXiv.0902.0019
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Configurable software verification is a recent concept for expressing different program analysis and model checking approaches in one single formalism. This paper presents CPAchecker, a tool and framework that aims at easy integration of new verification components. Every abstract domain, together with the corresponding operations, is required to implement the interface of configurable program analysis (CPA). The main algorithm is configurable to perform a reachability analysis on arbitrary combinations of existing CPAs. The major design goal during the development was to provide a framework for developers that is flexible and easy to extend. We hope that researchers find it convenient and productive to implement new verification ideas and algorithms using this platform and that it advances the field by making it easier to perform practical experiments. The tool is implemented in Java and runs as command-line tool or as Eclipse plug-in. We evaluate the efficiency of our tool on benchmarks from the software model checker BLAST. The first released version of CPAchecker implements CPAs for predicate abstraction, octagon, and explicit-value domains. Binaries and the source code of CPAchecker are publicly available as free software.
BibTeX Entry
@techreport{TR002-SFU09, author = {Dirk Beyer and M. Erkan Keremoglu}, title = {{CPAchecker}: A Tool for Configurable Software Verification}, number = {SFU-CS-2009-02}, year = {2009}, doi = {10.48550/arXiv.0902.0019}, url = {http://www.sosy-lab.org/~dbeyer/CPAchecker/}, abstract = {Configurable software verification is a recent concept for expressing different program analysis and model checking approaches in one single formalism. This paper presents CPAchecker, a tool and framework that aims at easy integration of new verification components. Every abstract domain, together with the corresponding operations, is required to implement the interface of configurable program analysis (CPA). The main algorithm is configurable to perform a reachability analysis on arbitrary combinations of existing CPAs. The major design goal during the development was to provide a framework for developers that is flexible and easy to extend. We hope that researchers find it convenient and productive to implement new verification ideas and algorithms using this platform and that it advances the field by making it easier to perform practical experiments. The tool is implemented in Java and runs as command-line tool or as Eclipse plug-in. We evaluate the efficiency of our tool on benchmarks from the software model checker BLAST. The first released version of CPAchecker implements CPAs for predicate abstraction, octagon, and explicit-value domains. Binaries and the source code of CPAchecker are publicly available as free software.}, keyword = {CPAchecker,Software Model Checking}, annote = {}, institution = {School of Computing Science (CMPT), Simon Fraser University (SFU)}, month = {January}, }
2007
-
An Interface Formalism for Web Services.
Technical report MTC-REPORT-2007-002, School of Computer and Communication Sciences (IC),
Ecole Polytechnique Fédérale de Lausanne (EPFL),
December
2007.
Keyword(s):
Interfaces for Component-Based Design
PDF
Supplement
Abstract
Web application development using distributed components and web services presents new software integration challenges, because solutions often cross vendor, administrative, and other boundaries across which neither binary nor source code can be shared. We present a methodology that addresses this problem through a formalism for specifying and manipulating behavioral interfaces of multi-threaded open software components that communicate with each other through method calls. An interface constrains both the implementation and the user of a web service to fulfill certain assumptions that are specified by the interface. Our methodology consists of three increasingly expressive classes of interfaces. Signature interfaces specify the methods that can be invoked by the user, together with parameters. Consistency interfaces add propositional constraints, enhancing signature interfaces with the ability to specify choice and causality. Protocol interfaces specify, in addition, temporal ordering constraints on method invocations. We provide approaches to check if two or more interfaces are compatible; if a web service can be safely substituted for another one; and if a web service satisfies a specification that represents a desired behavioral property.
BibTeX Entry
@techreport{TR002-EPFL07, author = {Dirk Beyer and Arindam Chakrabarti and Thomas A. Henzinger}, title = {An Interface Formalism for Web Services}, number = {MTC-REPORT-2007-002}, year = {2007}, url = {http://infoscience.epfl.ch/search?recid=114605&ln=en}, pdf = {https://www.sosy-lab.org/research/pub/2007-EPFL-TR002.An_Interface_Formalism_for_Web_Services.pdf}, abstract = {Web application development using distributed components and web services presents new software integration challenges, because solutions often cross vendor, administrative, and other boundaries across which neither binary nor source code can be shared. We present a methodology that addresses this problem through a formalism for specifying and manipulating behavioral interfaces of multi-threaded open software components that communicate with each other through method calls. An interface constrains both the implementation and the user of a web service to fulfill certain assumptions that are specified by the interface. Our methodology consists of three increasingly expressive classes of interfaces. Signature interfaces specify the methods that can be invoked by the user, together with parameters. Consistency interfaces add propositional constraints, enhancing signature interfaces with the ability to specify choice and causality. Protocol interfaces specify, in addition, temporal ordering constraints on method invocations. We provide approaches to check if two or more interfaces are compatible; if a web service can be safely substituted for another one; and if a web service satisfies a specification that represents a desired behavioral property.}, keyword = {Interfaces for Component-Based Design}, annote = {A preliminary version of this paper was presented at the First International Workshop on Foundations of Interface Technologies (FIT 2005, San Francisco, CA, August 21). <BR> Online: <a href="http://infoscience.epfl.ch/search?recid=114605&ln=en"> http://infoscience.epfl.ch/search?recid=114605&ln=en</a> <BR>}, institution = {School of Computer and Communication Sciences (IC), Ecole Polytechnique F\'ed\'erale de Lausanne (EPFL)}, month = {December}, }Additional Infos
A preliminary version of this paper was presented at the First International Workshop on Foundations of Interface Technologies (FIT 2005, San Francisco, CA, August 21).
Online: http://infoscience.epfl.ch/search?recid=114605&ln=en
2006
-
Path Invariants.
Technical report MTC-REPORT-2006-003, School of Computer and Communication Sciences (IC),
Ecole Polytechnique Fédérale de Lausanne (EPFL),
December
2006.
Keyword(s):
Software Model Checking
PDF
Supplement
Abstract
The success of software verification depends on the ability to find a suitable abstraction of a program automatically. We propose a new method for automated abstraction refinement, which overcomes the inherent limitations of predicate discovery schemes. In such schemes, the cause of a false positive is identified as an infeasible error path, and the abstraction is refined in order to remove that path. By contrast, we view the cause of a false positive -the "spurious counterexample"- as a full-fledged program, whose control-flow graph may contain loops of the original program and represent unbounded computations. The advantages of using such path programs as counterexamples for abstraction refinement are twofold. First, we can bring the whole machinery of program analysis to bear on path programs: specifically, we use abstract interpretation in the form of constrained-based invariant generation to automatically infer invariants of path programs -so-called path invariants. Second, we use path invariants for abstraction refinement in order to remove not one infeasibility at a time, but to remove at once all infeasible error computations that are represented by a path program. Unlike predicate discovery schemes, our method handles loops without unrolling them; it infers abstractions that involve universal quantification and naturally incorporates disjunctive invariants.
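As a hypothetical illustration of a path program and the invariant it requires: the loop below zeroes an array, and proving the assertion needs a universally quantified loop invariant, which predicate discovery that removes one infeasible path at a time would only approximate by unrolling the loop.

  extern void reach_error(void);
  #define N 1000

  int main(void) {
    int a[N];
    for (int i = 0; i < N; i++) {
      a[i] = 0;
    }
    /* Required path invariant (universally quantified):
       for all j with 0 <= j < i, a[j] == 0 holds. */
    if (a[N - 1] != 0) {
      reach_error();
    }
    return 0;
  }

BibTeX Entry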
@techreport{TR003-EPFL06, author = {Dirk Beyer and Thomas A.~Henzinger and Rupak Majumdar and Andrey Rybalchenko}, title = {Path Invariants}, number = {MTC-REPORT-2006-003}, year = {2006}, url = {http://infoscience.epfl.ch/search.py?recid=98452&ln=en}, pdf = {https://www.sosy-lab.org/research/pub/2006-EPFL-TR003.Path_Invariants.pdf}, abstract = {The success of software verification depends on the ability to find a suitable abstraction of a program automatically. We propose a new method for automated abstraction refinement, which overcomes the inherent limitations of predicate discovery schemes. In such schemes, the cause of a false positive is identified as an infeasible error path, and the abstraction is refined in order to remove that path. By contrast, we view the cause of a false positive ---the ``spurious counterexample''--- as a full-fledged program, whose control-flow graph may contain loops of the original program and represent unbounded computations. The advantages of using such path programs as counterexamples for abstraction refinement are twofold. First, we can bring the whole machinery of program analysis to bear on path programs: specifically, we use abstract interpretation in the form of constrained-based invariant generation to automatically infer invariants of path programs ---so-called path invariants. Second, we use path invariants for abstraction refinement in order to remove not one infeasibility at a time, but to remove at once all infeasible error computations that are represented by a path program. Unlike predicate discovery schemes, our method handles loops without unrolling them; it infers abstractions that involve universal quantification and naturally incorporates disjunctive invariants.}, keyword = {Software Model Checking}, annote = {Online: <a href="http://infoscience.epfl.ch/search.py?recid=98452&ln=en"> http://infoscience.epfl.ch/search.py?recid=98452&ln=en</a> <BR>}, institution = {School of Computer and Communication Sciences (IC), Ecole Polytechnique F\'ed\'erale de Lausanne (EPFL)}, month = {December}, }Additional Infos
-
Three Algorithms for Interface Synthesis: A Comparative Study.
Technical report MTC-REPORT-2006-001, School of Computer and Communication Sciences (IC),
Ecole Polytechnique Fédérale de Lausanne (EPFL),
May
2006.
Keyword(s):
Interfaces for Component-Based Design,
Software Model Checking
PDF
Supplement
Abstract
A temporal interface for a system component is a finite automaton that specifies the legal sequences of input events. We evaluate and compare three different algorithms for automatically extracting the temporal interface from the transition graph of a component: (1) a game algorithm that computes the interface as a representation of the most general environment strategy to avoid a safety violation; (2) a learning algorithm that repeatedly queries the component to construct the minimal interface automaton; and (3) a CEGAR algorithm that iteratively refines an abstract interface hypothesis by adding relevant state information from the component. Since algorithms (2) and (3) have been published in different software contexts, for comparison purposes, we present the three algorithms in a uniform finite-state setting. We furthermore extend the three algorithms to construct maximally permissive interface automata, which accept all legal input sequences. While the three algorithms have similar worst-case complexities, their actual running times differ greatly depending on the component whose interface is computed. On the theoretical side, we provide families of components that exhibit exponential differences in the performance of the three algorithms. On the practical side, we evaluate the three algorithms experimentally on a variety of real world examples. Not surprisingly, the experimental evaluation confirms the theoretical expectation: learning performs best if the minimal interface automaton is small; CEGAR performs best if only few component variables are needed to prove an interface hypothesis safe and permissive; and the direct (game) algorithm outperforms both approaches if neither is the case.
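A hypothetical component in C that makes the notion of a temporal interface concrete: the synthesized interface is the most general automaton over the component's methods whose accepted call sequences never trigger the internal error.

  extern void reach_error(void);

  static int is_open = 0;                                   /* internal component state */

  void open_conn(void)  { if (is_open)  reach_error(); is_open = 1; }
  void send_msg(void)   { if (!is_open) reach_error(); }
  void close_conn(void) { if (!is_open) reach_error(); is_open = 0; }

  /* The temporal interface accepts exactly the legal call sequences,
     e.g. the automaton for ( open_conn ; send_msg* ; close_conn )*. */
  int main(void) {                       /* one usage accepted by the interface */
    open_conn(); send_msg(); send_msg(); close_conn();
    return 0;
  }

BibTeX Entry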
@techreport{TR001-EPFL06, author = {Dirk Beyer and Thomas A.~Henzinger and Vasu Singh}, title = {Three Algorithms for Interface Synthesis: A Comparative Study}, number = {MTC-REPORT-2006-001}, year = {2006}, url = {http://infoscience.epfl.ch/search.py?recid=85675&ln=en}, pdf = {https://www.sosy-lab.org/research/pub/2006-EPFL-TR001.Three_Algorithms_for_Interface_Synthesis_A_Comparative_Study.pdf}, abstract = {A temporal interface for a system component is a finite automaton that specifies the legal sequences of input events. We evaluate and compare three different algorithms for automatically extracting the temporal interface from the transition graph of a component: (1) a game algorithm that computes the interface as a representation of the most general environment strategy to avoid a safety violation; (2) a learning algorithm that repeatedly queries the component to construct the minimal interface automaton; and (3) a CEGAR algorithm that iteratively refines an abstract interface hypothesis by adding relevant state information from the component. Since algorithms (2) and (3) have been published in different software contexts, for comparison purposes, we present the three algorithms in a uniform finite-state setting. We furthermore extend the three algorithms to construct maximally permissive interface automata, which accept all legal input sequences. While the three algorithms have similar worst-case complexities, their actual running times differ greatly depending on the component whose interface is computed. On the theoretical side, we provide families of components that exhibit exponential differences in the performance of the three algorithms. On the practical side, we evaluate the three algorithms experimentally on a variety of real world examples. Not surprisingly, the experimental evaluation confirms the theoretical expectation: learning performs best if the minimal interface automaton is small; CEGAR performs best if only few component variables are needed to prove an interface hypothesis safe and permissive; and the direct (game) algorithm outperforms both approaches if neither is the case.}, keyword = {Interfaces for Component-Based Design,Software Model Checking}, annote = {Online: <a href="http://infoscience.epfl.ch/search.py?recid=85675&ln=en"> http://infoscience.epfl.ch/search.py?recid=85675&ln=en</a> <BR>}, institution = {School of Computer and Communication Sciences (IC), Ecole Polytechnique F\'ed\'erale de Lausanne (EPFL)}, month = {May}, }Additional Infos
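As a hypothetical illustration (not from the report), consider a small C component whose legal input sequences form the temporal interface (open · read* · close)*; the three algorithms differ in how they extract this automaton from the component's transition graph.

/* Hypothetical component: calling read or close while the file is not open,
 * or open twice, is a safety violation, so the interface automaton
 * accepts exactly the prefixes of (open read* close)*. */
#include <stdio.h>

static int is_open = 0;   /* component state */
static int error = 0;     /* set on an illegal input event */

void comp_open(void)  { if (is_open) error = 1; else is_open = 1; }
void comp_read(void)  { if (!is_open) error = 1; }
void comp_close(void) { if (!is_open) error = 1; else is_open = 0; }

int main(void) {
  comp_open(); comp_read(); comp_read(); comp_close();  /* a legal sequence */
  printf("error = %d\n", error);                         /* prints: error = 0 */
  return 0;
}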
2005
-
Lazy Shape Analysis.
Technical report MTC-REPORT-2005-006, School of Computer and Communication Sciences (IC),
Ecole Polytechnique Fédérale de Lausanne (EPFL),
December
2005.
Keyword(s):
Software Model Checking
PDF
Supplement
Abstract
Many software model checkers are based on predicate abstraction. Values of variables in branching conditions are represented abstractly using predicates. The strength of this approach is its path-sensitive nature. However, if the control flow depends heavily on the values of memory cells on the heap, the approach does not work well, because it is difficult to find 'good' predicate abstractions to represent the heap. In contrast, shape analysis can lead to a very compact representation of data structures stored on the heap. In this paper, we combine shape analysis with predicate abstraction, and integrate it into the software model checker BLAST. Because shape analysis is expensive, we do not apply it globally. Instead, we ensure that shapes are computed and stored locally, only where necessary for proving the verification goal. To achieve this, we extend lazy abstraction refinement, which so far has been used only for predicate abstractions, to shapes. This approach does not only increase the precision of model checking and shape analysis taken individually, but also increases the efficiency of shape analysis (we do not compute shapes where not necessary). We implemented the technique by extending BLAST with calls to TVLA, and evaluated it on several C programs manipulating data structures, with the result that the combined tool can now automatically verify programs that are not verifiable using either shape analysis or predicate abstraction on its own.BibTeX Entry
@techreport{TR006-EPFL05, author = {Dirk Beyer and Thomas A.~Henzinger and Gr{\'e}gory Th{\'e}oduloz}, title = {Lazy Shape Analysis}, number = {MTC-REPORT-2005-006}, year = {2005}, url = {http://infoscience.epfl.ch/search.py?recid=63789&ln=en}, pdf = {https://www.sosy-lab.org/research/pub/2005-EPFL-TR006.Lazy_Shape_Analysis.pdf}, abstract = {Many software model checkers are based on predicate abstraction. Values of variables in branching conditions are represented abstractly using predicates. The strength of this approach is its path-sensitive nature. However, if the control flow depends heavily on the values of memory cells on the heap, the approach does not work well, because it is difficult to find `good' predicate abstractions to represent the heap. In contrast, shape analysis can lead to a very compact representation of data structures stored on the heap. In this paper, we combine shape analysis with predicate abstraction, and integrate it into the software model checker BLAST. Because shape analysis is expensive, we do not apply it globally. Instead, we ensure that shapes are computed and stored locally, only where necessary for proving the verification goal. To achieve this, we extend lazy abstraction refinement, which so far has been used only for predicate abstractions, to shapes. This approach does not only increase the precision of model checking and shape analysis taken individually, but also increases the efficiency of shape analysis (we do not compute shapes where not necessary). We implemented the technique by extending BLAST with calls to TVLA, and evaluated it on several C programs manipulating data structures, with the result that the combined tool can now automatically verify programs that are not verifiable using either shape analysis or predicate abstraction on its own.}, keyword = {Software Model Checking}, annote = {Online: <a href="http://infoscience.epfl.ch/search.py?recid=63789&ln=en"> http://infoscience.epfl.ch/search.py?recid=63789&ln=en</a> <BR>}, institution = {School of Computer and Communication Sciences (IC), Ecole Polytechnique F\'ed\'erale de Lausanne (EPFL)}, month = {December}, }Additional Infos
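A hypothetical C fragment (not from the report) illustrates the class of programs addressed: the assertion depends on the contents of a heap-allocated list, so predicates over scalar variables alone do not suffice, while shape information is needed only locally around the list code.

/* Hypothetical example (not from the report): proving the assertion requires
 * heap information, e.g. the shape fact "every element reachable from head
 * has data == 1". */
#include <assert.h>
#include <stdlib.h>

struct node { int data; struct node *next; };

int main(void) {
  struct node *head = NULL;
  for (int i = 0; i < 10; i++) {      /* build a list whose elements are all 1 */
    struct node *n = malloc(sizeof *n);
    if (!n) return 1;
    n->data = 1;
    n->next = head;
    head = n;
  }
  for (struct node *p = head; p != NULL; p = p->next)
    assert(p->data == 1);             /* provable only with heap (shape) information */
  return 0;
}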
-
Mining Co-Change Clusters from Version Repositories.
Technical report IC/2005/003, School of Computer and Communication Sciences (IC),
Ecole Polytechnique Fédérale de Lausanne (EPFL),
January
2005.
Keyword(s):
Structural Analysis and Comprehension
PDF
Supplement
Abstract
Clusters of software artifacts that are frequently changed together are subsystem candidates, because one of the main goals of software design is to make changes local. The contribution of this paper is a visualization-based method that supports the identification of such clusters. First, we define the co-change graph as a simple but powerful model of common changes of software artifacts, and describe how to extract the graph from version control repositories. Second, we introduce an energy model for computing force-directed layouts of co-change graphs. The resulting layouts have a well-defined interpretation in terms of the structure of the visualized graph, and clearly reveal groups of frequently co-changed artifacts. We evaluate our method by comparing the layouts for three example projects with authoritative subsystem decompositions.BibTeX Entry
@techreport{TR003-EPFL05, author = {Dirk Beyer and Andreas Noack}, title = {Mining Co-Change Clusters from Version Repositories}, number = {IC/2005/003}, year = {2005}, url = {http://infoscience.epfl.ch/record/52706}, pdf = {https://www.sosy-lab.org/research/pub/2005-EPFL-TR003.Mining_Co-Change_Clusters_from_Version_Repositories.pdf}, abstract = {Clusters of software artifacts that are frequently changed together are subsystem candidates, because one of the main goals of software design is to make changes local. The contribution of this paper is a visualization-based method that supports the identification of such clusters. First, we define the co-change graph as a simple but powerful model of common changes of software artifacts, and describe how to extract the graph from version control repositories. Second, we introduce an energy model for computing force-directed layouts of co-change graphs. The resulting layouts have a well-defined interpretation in terms of the structure of the visualized graph, and clearly reveal groups of frequently co-changed artifacts. We evaluate our method by comparing the layouts for three example projects with authoritative subsystem decompositions.}, keyword = {Structural Analysis and Comprehension}, annote = {Online: <a href="http://infoscience.epfl.ch/record/52706"> http://infoscience.epfl.ch/record/52706</a> <BR>}, institution = {School of Computer and Communication Sciences (IC), Ecole Polytechnique F\'ed\'erale de Lausanne (EPFL)}, month = {January}, }Additional Infos
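As a hypothetical sketch (in C, not part of the report), a co-change graph can be accumulated by counting, for every pair of artifacts, the number of commits that changed both; the resulting edge weights are what the force-directed layout operates on.

/* Hypothetical sketch: commits are given as lists of changed files (by index,
 * terminated by -1); cochange[a][b] counts commits that changed both a and b. */
#include <stdio.h>

#define FILES 4
#define COMMITS 3

int main(void) {
  const char *name[FILES] = {"parser.c", "parser.h", "lexer.c", "main.c"};
  int commits[COMMITS][FILES + 1] = {
    {0, 1, -1},      /* commit 1: parser.c, parser.h */
    {0, 1, 2, -1},   /* commit 2: parser.c, parser.h, lexer.c */
    {3, -1},         /* commit 3: main.c */
  };
  int cochange[FILES][FILES] = {{0}};

  for (int c = 0; c < COMMITS; c++)
    for (int i = 0; commits[c][i] != -1; i++)
      for (int j = i + 1; commits[c][j] != -1; j++) {
        cochange[commits[c][i]][commits[c][j]]++;
        cochange[commits[c][j]][commits[c][i]]++;
      }

  for (int a = 0; a < FILES; a++)
    for (int b = a + 1; b < FILES; b++)
      if (cochange[a][b] > 0)
        printf("%s -- %s : %d\n", name[a], name[b], cochange[a][b]);
  return 0;
}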
2004
-
CrocoPat 2.1 Introduction and Reference Manual.
Technical report UCB//CSD-04-1338, Computer Science Division (EECS),
University of California, Berkeley,
July
2004.
Also: The Computing Research Repository (CoRR),
cs.PL/0409009,
September 2004
Keyword(s):
Structural Analysis and Comprehension
PDF
Supplement
Abstract
CrocoPat is an efficient, powerful and easy-to-use tool for manipulating relations of arbitrary arity, including directed graphs. This manual provides an introduction to and a reference for CrocoPat and its programming language RML. It includes several application examples, in particular from the analysis of structural models of software systems.BibTeX Entry
@techreport{TR1338-UCB04, author = {Dirk Beyer and Andreas Noack}, title = {{CrocoPat} 2.1 {I}ntroduction and Reference Manual}, number = {UCB//CSD-04-1338}, year = {2004}, url = {../../2004-UCB-TR1338.CrocoPat_2.1_Introduction_and_Reference_Manual/main.html}, pdf = {https://arxiv.org/abs/cs/0409009}, abstract = {CrocoPat is an efficient, powerful and easy-to-use tool for manipulating relations of arbitrary arity, including directed graphs. This manual provides an introduction to and a reference for CrocoPat and its programming language RML. It includes several application examples, in particular from the analysis of structural models of software systems.}, keyword = {Structural Analysis and Comprehension}, annote = {Online: <a href="http://sunsite.berkeley.edu/TechRepPages/CSD-04-1338"> http://sunsite.berkeley.edu/TechRepPages/CSD-04-1338</a> <BR> A tutorial and user's guide for CrocoPat, defines and explains the syntax and semantics of the extended language. <BR> CrocoPat is available at: <a href="http://www.sosy-lab.org/~dbeyer/CrocoPat/"> http://www.sosy-lab.org/~dbeyer/CrocoPat/</a>}, institution = {Computer Science Division (EECS), University of California, Berkeley}, month = {July}, note = {Also: The Computing Research Repository (CoRR), cs.PL/0409009, September 2004}, }Additional Infos
Online: http://sunsite.berkeley.edu/TechRepPages/CSD-04-1338
A tutorial and user's guide for CrocoPat; it defines and explains the syntax and semantics of the extended language.
CrocoPat is available at: http://www.sosy-lab.org/~dbeyer/CrocoPat/
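As a language-neutral illustration (a C sketch, not RML and not taken from the manual), the kind of graph-level query such a relational tool answers can be shown with the transitive closure of a binary relation, computed explicitly here with Warshall's algorithm.

/* Hypothetical sketch (C, not RML): transitive closure of a binary relation
 * over 4 elements, i.e., reachability in a small directed graph. */
#include <stdio.h>

#define N 4

int main(void) {
  int rel[N][N] = {{0}};
  rel[0][1] = rel[1][2] = rel[2][3] = 1;   /* edges 0->1, 1->2, 2->3 */

  for (int k = 0; k < N; k++)              /* Warshall's algorithm */
    for (int i = 0; i < N; i++)
      for (int j = 0; j < N; j++)
        if (rel[i][k] && rel[k][j])
          rel[i][j] = 1;

  for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
      if (rel[i][j])
        printf("%d -> %d\n", i, j);        /* closure: 6 reachable pairs */
  return 0;
}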
2003
-
CrocoPat: A Tool for Efficient Pattern Recognition
in Large Object-Oriented Programs.
Technical report I-04/2003, Institute of Computer Science,
Brandenburgische Technische Universität Cottbus,
January
2003.
Keyword(s):
Structural Analysis and Comprehension
PDF
BibTeX Entry
@techreport{TR04-BTU03, author = {Dirk Beyer and Claus Lewerentz}, title = {{CrocoPat}: A Tool for Efficient Pattern Recognition in Large Object-Oriented Programs}, number = {I-04/2003}, year = {2003}, pdf = {https://www.sosy-lab.org/research/pub/2003-BTU-TR04.CrocoPat_A_Tool_for_Efficient_Pattern_Recognition.in_Large_Object-Oriented_Programs.pdf}, keyword = {Structural Analysis and Comprehension}, annote = {See WCRE03 [27] for proceedings version.}, institution = {Institute of Computer Science, Brandenburgische Technische Universit{\"a}t Cottbus}, month = {January}, }Additional Infos
See WCRE03 [27] for proceedings version.
-
A Comparative Study of Decision Diagrams
for Real-Time Verification.
Technical report I-03/2003, Institute of Computer Science,
Brandenburgische Technische Universität Cottbus,
January
2003.
Keyword(s):
Formal Verification of Real-Time Systems
BibTeX Entry
@techreport{TR03-BTU03, author = {Dirk Beyer and Andreas Noack}, title = {A Comparative Study of Decision Diagrams for Real-Time Verification}, number = {I-03/2003}, year = {2003}, keyword = {Formal Verification of Real-Time Systems}, annote = {See FORTE03 [26] for proceedings version.}, institution = {Institute of Computer Science, Brandenburgische Technische Universit{\"a}t Cottbus}, month = {January}, }Additional Infos
See FORTE03 [26] for proceedings version.
2001
-
Rabbit: Verification of Real-Time Systems.
Technical report I-05/2001, Institute of Computer Science,
Brandenburgische Technische Universität Cottbus,
March
2001.
Keyword(s):
Formal Verification of Real-Time Systems
BibTeX Entry
@techreport{TR05-BTU01, author = {Dirk Beyer}, title = {Rabbit: Verification of Real-Time Systems}, number = {I-05/2001}, year = {2001}, keyword = {Formal Verification of Real-Time Systems}, annote = {See CAV03 [25] for proceedings version.}, institution = {Institute of Computer Science, Brandenburgische Technische Universit{\"a}t Cottbus}, month = {March}, }Additional Infos
See CAV03 [25] for proceedings version.
-
Reachability Analysis and Refinement Checking
for BDD-Based Model Checking of Timed Automata.
Technical report I-04/2001, Institute of Computer Science,
Brandenburgische Technische Universität Cottbus,
February
2001.
Keyword(s):
Formal Verification of Real-Time Systems
BibTeX Entry
@techreport{TR04-BTU01, author = {Dirk Beyer}, title = {Reachability Analysis and Refinement Checking for {BDD}-Based Model Checking of Timed Automata}, number = {I-04/2001}, year = {2001}, keyword = {Formal Verification of Real-Time Systems}, annote = {See CHARME01 [20] for proceedings version.}, institution = {Institute of Computer Science, Brandenburgische Technische Universit{\"a}t Cottbus}, month = {February}, }Additional Infos
See CHARME01 [20] for proceedings version.
2000
-
Efficient Verification of Real-Time Systems using BDDs.
Technical report I-13/2000, Institute of Computer Science,
Brandenburgische Technische Universität Cottbus,
December
2000.
Keyword(s):
Formal Verification of Real-Time Systems
BibTeX Entry
@techreport{TR13-BTU00, author = {Dirk Beyer and Andreas Noack}, title = {Efficient Verification of Real-Time Systems using {BDD}s}, number = {I-13/2000}, year = {2000}, keyword = {Formal Verification of Real-Time Systems}, annote = {See FMICS01 [18] for proceedings version.}, institution = {Institute of Computer Science, Brandenburgische Technische Universit{\"a}t Cottbus}, month = {December}, }Additional Infos
See FMICS01 [18] for proceedings version.
-
Flattening Inheritance Structures - OR -
Getting the Right Picture of Large OO-Systems.
Technical report I-12/2000, Institute of Computer Science,
Brandenburgische Technische Universität Cottbus,
November
2000.
Keyword(s):
Structural Analysis and Comprehension
PDF
Abstract
More and more software systems are developed using the object oriented paradigm. Thus, large systems contain inheritance structures to provide a flexible and re-usable design and to allow for polymorphic method calls. This paper gives a detailed overview about the impact of using inheritance on measuring, understanding and using subclasses in such class systems. Usually, considering classes within an inheritance relation is reduced to the consideration of locally defined members of a class. This view might be incomplete or even misleading in some use cases. To provide an additional view on a given system we define a tool-supported flattening process which transforms an inheritance structure to a representation in which all the inherited members are explicit in each subclass. This representation provides additional insights for measuring, understanding, and developing large software systems.BibTeX Entry
@techreport{TR12-BTU00, author = {Dirk Beyer and Claus Lewerentz and Frank Simon}, title = {Flattening Inheritance Structures -- OR -- {G}etting the Right Picture of Large {OO}-Systems}, number = {I-12/2000}, year = {2000}, pdf = {https://www.sosy-lab.org/research/pub/2000-BTU-TR12.Flattening_Inheritance_Structures.pdf}, abstract = {More and more software systems are developed using the object oriented paradigm. Thus, large systems contain inheritance structures to provide a flexible and re-usable design and to allow for polymorphic method calls. This paper gives a detailed overview about the impact of using inheritance on measuring, understanding and using subclasses in such class systems. Usually, considering classes within an inheritance relation is reduced to the consideration of locally defined members of a class. This view might be incomplete or even misleading in some use cases. To provide an additional view on a given system we define a tool-supported flattening process which transforms an inheritance structure to a representation in which all the inherited members are explicit in each subclass. This representation provides additional insights for measuring, understanding, and developing large software systems.}, keyword = {Structural Analysis and Comprehension}, institution = {Institute of Computer Science, Brandenburgische Technische Universit{\"a}t Cottbus}, month = {November}, } -
Considering Inheritance, Overriding, Overloading and
Polymorphism for Measuring C++ Sources.
Technical report I-04/2000, Institute of Computer Science,
Brandenburgische Technische Universität Cottbus,
May
2000.
Keyword(s):
Structural Analysis and Comprehension
BibTeX Entry
@techreport{TR04-BTU00, author = {Frank Simon and Dirk Beyer}, title = {Considering Inheritance, Overriding, Overloading and Polymorphism for Measuring {C++} Sources}, number = {I-04/2000}, year = {2000}, keyword = {Structural Analysis and Comprehension}, annote = {See IWSM00 [12] for proceedings version.}, institution = {Institute of Computer Science, Brandenburgische Technische Universit{\"a}t Cottbus}, month = {May}, }Additional Infos
See IWSM00 [12] for proceedings version.
1999
-
A Formalism for Modular Modelling of Hybrid Systems.
Technical report I-10/1999, Institute of Computer Science,
Brandenburgische Technische Universität Cottbus,
October
1999.
Keyword(s):
Formal Verification of Real-Time Systems
BibTeX Entry
@techreport{TR10-BTU99, author = {Dirk Beyer and Heinrich Rust}, title = {A Formalism for Modular Modelling of Hybrid Systems}, number = {I-10/1999}, year = {1999}, keyword = {Formal Verification of Real-Time Systems}, annote = {See FSCBS01 [16] for proceedings version.}, institution = {Institute of Computer Science, Brandenburgische Technische Universit{\"a}t Cottbus}, month = {October}, }Additional Infos
See FSCBS01 [16] for proceedings version.
-
A Modular Hybrid Modelling Notation.
Technical report I-03/1999, Institute of Computer Science,
Brandenburgische Technische Universität Cottbus,
February
1999.
Keyword(s):
Formal Verification of Real-Time Systems
BibTeX Entry
@techreport{TR03-BTU99, author = {Dirk Beyer and Heinrich Rust}, title = {A Modular Hybrid Modelling Notation}, number = {I-03/1999}, year = {1999}, keyword = {Formal Verification of Real-Time Systems}, annote = {See TR10-BTU99 [04] for revised version.}, institution = {Institute of Computer Science, Brandenburgische Technische Universit{\"a}t Cottbus}, month = {February}, }Additional Infos
See TR10-BTU99 [04] for revised version.
Disclaimer:
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.