Publications of Stefan Löwe
Articles in journals or book chapters
-
Reliable Benchmarking: Requirements and Solutions.
International Journal on Software Tools for Technology Transfer (STTT), 21(1):1-29,
2019.
doi:10.1007/s10009-017-0469-y
Keyword(s):
Benchmarking
Publisher's Version
PDF
Presentation
Supplement
Abstract
Benchmarking is a widely used method in experimental computer science, in particular, for the comparative evaluation of tools and algorithms. As a consequence, a number of questions need to be answered in order to ensure proper benchmarking, resource measurement, and presentation of results, all of which is essential for researchers, tool developers, and users, as well as for tool competitions. We identify a set of requirements that are indispensable for reliable benchmarking and resource measurement of time and memory usage of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework can (on Linux systems) currently only be done by using the cgroup and namespace features of the kernel. We developed BenchExec, a ready-to-use, tool-independent, and open-source implementation of a benchmarking framework that fulfills all presented requirements, making reliable benchmarking and resource measurement easy. Our framework is able to work with a wide range of different tools, has proven its reliability and usefulness in the International Competition on Software Verification, and is used by several research groups worldwide to ensure reliable benchmarking. Finally, we present guidelines on how to present measurement results in a scientifically valid and comprehensible way.
BibTeX Entry
@article{Benchmarking-STTT,
  author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler},
  title = {Reliable Benchmarking: {R}equirements and Solutions},
  journal = {International Journal on Software Tools for Technology Transfer (STTT)},
  volume = {21},
  number = {1},
  pages = {1--29},
  year = {2019},
  doi = {10.1007/s10009-017-0469-y},
  sha256 = {a50fbc212af394b32166d6354f986e7b1d5bc87220bdc50df899d6a46fedf33c},
  url = {https://www.sosy-lab.org/research/benchmarking/},
  presentation = {https://www.sosy-lab.org/research/prs/Latest_ReliableBenchmarking.pdf},
  abstract = {Benchmarking is a widely used method in experimental computer science, in particular, for the comparative evaluation of tools and algorithms. As a consequence, a number of questions need to be answered in order to ensure proper benchmarking, resource measurement, and presentation of results, all of which is essential for researchers, tool developers, and users, as well as for tool competitions. We identify a set of requirements that are indispensable for reliable benchmarking and resource measurement of time and memory usage of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework can (on Linux systems) currently only be done by using the cgroup and namespace features of the kernel. We developed BenchExec, a ready-to-use, tool-independent, and open-source implementation of a benchmarking framework that fulfills all presented requirements, making reliable benchmarking and resource measurement easy. Our framework is able to work with a wide range of different tools, has proven its reliability and usefulness in the International Competition on Software Verification, and is used by several research groups worldwide to ensure reliable benchmarking. Finally, we present guidelines on how to present measurement results in a scientifically valid and comprehensible way.},
  keyword = {Benchmarking},
  _pdf = {https://www.sosy-lab.org/research/pub/2019-STTT.Reliable_Benchmarking_Requirements_and_Solutions.pdf},
  annote = {Publication appeared first online in November 2017<BR/> BenchExec is available at: <a href="https://github.com/sosy-lab/benchexec"> https://github.com/sosy-lab/benchexec</a>},
}
Additional Infos
Publication appeared first online in November 2017
BenchExec is available at: https://github.com/sosy-lab/benchexec
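The cgroup-based measurement approach described in the article above can be illustrated with a small sketch. The Python code below is not BenchExec and omits everything that makes real benchmarking reliable (limit enforcement, namespace isolation, CPU-time accounting, cleanup); it only shows the core idea of running a tool inside a dedicated control group and reading the kernel's accounting afterwards. It assumes a cgroup-v1 memory controller mounted at /sys/fs/cgroup/memory, sufficient privileges to create a child group, and a placeholder tool invocation ./mytool input.c:

# Minimal sketch of cgroup-based memory measurement (not BenchExec itself).
import os
import subprocess

CGROUP = "/sys/fs/cgroup/memory/measurement-demo"  # hypothetical group name

def run_measured(command):
    os.makedirs(CGROUP, exist_ok=True)

    def enter_cgroup():
        # Executed in the child between fork and exec: register the child's PID
        # so that all memory allocated by the tool (and its subprocesses) is
        # accounted to this cgroup instead of to the measuring script.
        with open(os.path.join(CGROUP, "tasks"), "w") as tasks:
            tasks.write(str(os.getpid()))

    subprocess.run(command, preexec_fn=enter_cgroup)

    # Peak memory usage of the whole process hierarchy, as recorded by the kernel.
    with open(os.path.join(CGROUP, "memory.max_usage_in_bytes")) as f:
        return int(f.read())

if __name__ == "__main__":
    peak = run_measured(["./mytool", "input.c"])  # placeholder command
    print("peak memory:", peak, "bytes")

For actual experiments, the runexec utility that ships with BenchExec should be used instead of such a hand-rolled script; it additionally enforces memory and CPU-time limits, isolates runs, and measures CPU time reliably.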
Articles in conference or workshop proceedings
-
CPA-RefSel: CPAchecker with Refinement Selection (Competition Contribution).
In Marsha Chechik and
Jean-François Raskin, editors,
Proceedings of the 22nd International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2016, Eindhoven, The Netherlands, April 2-8),
LNCS 9636,
pages 916-919,
2016.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-49674-9_59
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{CPAREFSEL-COMP16,
  author = {Stefan L{\"{o}}we},
  title = {{CPA-RefSel}: {{\sc CPAchecker}} with Refinement Selection (Competition Contribution)},
  booktitle = {Proceedings of the 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2016, Eindhoven, The Netherlands, April 2-8)},
  editor = {Marsha Chechik and Jean{-}Fran{\c{c}}ois Raskin},
  pages = {916--919},
  year = {2016},
  series = {LNCS~9636},
  publisher = {Springer-Verlag, Heidelberg},
  isbn = {978-3-662-49673-2},
  doi = {10.1007/978-3-662-49674-9_59},
  sha256 = {},
  url = {https://doi.org/10.1007/978-3-662-49674-9_59},
  keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking},
  annote = {Won category DeviceDriversLinux64 in <span style="white-space: nowrap"><a href="https://sv-comp.sosy-lab.org/2016/">SV-COMP'16</a></span>},
}
Additional Infos
Won category DeviceDriversLinux64 in SV-COMP'16
-
Refinement Selection.
In B. Fischer and
J. Geldenhuys, editors,
Proceedings of the 22nd International Symposium on
Model Checking of Software (SPIN 2015, Stellenbosch, South Africa, August 24-26),
LNCS 9232,
pages 20-38,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-319-23404-5_3
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Counterexample-guided abstraction refinement is a property-directed approach for the automatic construction of an abstract model for a given system. The approach learns information from infeasible error paths in order to refine the abstract model. We address the problem of selecting which information to learn from a given infeasible error path. In previous work, we presented a method that enables refinement selection by extracting a set of sliced prefixes from a given infeasible error path, each of which represents a different reason for infeasibility of the error path and thus, a possible way to refine the abstract model. In this work, we (1) define and investigate several promising heuristics for selecting an appropriate precision for refinement, and (2) propose a new combination of a value analysis and a predicate analysis that does not only find out which information to learn from an infeasible error path, but automatically decides which analysis should be preferred for a refinement. These contributions allow a more systematic refinement strategy for CEGAR-based analyses. We evaluated the idea on software verification. We provide an implementation of the new concepts in the verification framework CPAchecker and make it publicly available. In a thorough experimental study, we show that refinement selection often avoids state-space explosion where existing approaches diverge, and that it can be even more powerful if applied on a higher level, where it decides which analysis of a combination should be favored for a refinement.
BibTeX Entry
@inproceedings{SPIN15b,
  author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler},
  title = {Refinement Selection},
  booktitle = {Proceedings of the 22nd International Symposium on Model Checking of Software (SPIN~2015, Stellenbosch, South Africa, August 24-26)},
  editor = {B.~Fischer and J.~Geldenhuys},
  pages = {20-38},
  year = {2015},
  series = {LNCS~9232},
  publisher = {Springer-Verlag, Heidelberg},
  isbn = {978-3-319-23403-8},
  doi = {10.1007/978-3-319-23404-5_3},
  url = {https://www.sosy-lab.org/research/cpa-ref-sel/},
  pdf = {https://www.sosy-lab.org/research/pub/2015-SPIN.Refinement_Selection.pdf},
  abstract = {Counterexample-guided abstraction refinement is a property-directed approach for the automatic construction of an abstract model for a given system. The approach learns information from infeasible error paths in order to refine the abstract model. We address the problem of selecting which information to learn from a given infeasible error path. In previous work, we presented a method that enables refinement selection by extracting a set of sliced prefixes from a given infeasible error path, each of which represents a different reason for infeasibility of the error path and thus, a possible way to refine the abstract model. In this work, we (1) define and investigate several promising heuristics for selecting an appropriate precision for refinement, and (2) propose a new combination of a value analysis and a predicate analysis that does not only find out which information to learn from an infeasible error path, but automatically decides which analysis should be preferred for a refinement. These contributions allow a more systematic refinement strategy for CEGAR-based analyses. We evaluated the idea on software verification. We provide an implementation of the new concepts in the verification framework CPAchecker and make it publicly available. In a thorough experimental study, we show that refinement selection often avoids state-space explosion where existing approaches diverge, and that it can be even more powerful if applied on a higher level, where it decides which analysis of a combination should be favored for a refinement.},
  keyword = {CPAchecker,Software Model Checking},
}
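To make the idea of refinement selection from the paper above concrete, the following sketch shows one way to choose among candidate precisions. The data structures and the heuristic (avoid loop counters, then prefer smaller precisions) are simplified stand-ins, not CPAchecker's actual implementation:

# Simplified sketch of refinement selection (illustrative, not CPAchecker code).
# Each sliced prefix of an infeasible error path yields one candidate precision,
# modeled here as the set of variables that would have to be tracked.

def select_refinement(candidate_precisions, loop_counters):
    """Pick one candidate by a simple heuristic: avoid tracking loop counters
    (which tend to blow up the state space), and among the remaining candidates
    prefer the one that tracks the fewest variables."""
    def score(precision):
        return (len(precision & loop_counters), len(precision))
    return min(candidate_precisions, key=score)

# Hypothetical candidates extracted from three sliced prefixes of one error path:
candidates = [{"i", "n"}, {"flag"}, {"flag", "status"}]
print(select_refinement(candidates, loop_counters={"i"}))  # -> {'flag'}

The paper evaluates several such heuristics (for example, based on the domain types of the variables in a precision) and also lifts the choice to a higher level, deciding whether the value analysis or the predicate analysis of a combination should be refined.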
-
Benchmarking and Resource Measurement.
In B. Fischer and
J. Geldenhuys, editors,
Proceedings of the 22nd International Symposium on
Model Checking of Software (SPIN 2015, Stellenbosch, South Africa, August 24-26),
LNCS 9232,
pages 160-178,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-319-23404-5_12
Keyword(s):
Benchmarking
Publisher's Version
PDF
Supplement
Abstract
Proper benchmarking and resource measurement is an important topic, because benchmarking is a widely-used method for the comparative evaluation of tools and algorithms in many research areas. It is essential for researchers, tool developers, and users, as well as for competitions. We formulate a set of requirements that are indispensable for reproducible benchmarking and reliable resource measurement of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework is complex and can (on Linux) currently only be done by using the cgroups feature of the kernel. We provide BenchExec, a ready-to-use, tool-independent, and free implementation of a benchmarking framework that fulfills all presented requirements, making reproducible benchmarking and reliable resource measurement easy. Our framework is able to work with a wide range of different tools and has proven its reliability and usefulness in the International Competition on Software Verification.
BibTeX Entry
@inproceedings{SPIN15a,
  author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler},
  title = {Benchmarking and Resource Measurement},
  booktitle = {Proceedings of the 22nd International Symposium on Model Checking of Software (SPIN~2015, Stellenbosch, South Africa, August 24-26)},
  editor = {B.~Fischer and J.~Geldenhuys},
  pages = {160-178},
  year = {2015},
  series = {LNCS~9232},
  publisher = {Springer-Verlag, Heidelberg},
  isbn = {978-3-319-23403-8},
  doi = {10.1007/978-3-319-23404-5_12},
  url = {https://www.sosy-lab.org/research/benchmarking/},
  pdf = {https://www.sosy-lab.org/research/pub/2015-SPIN.Benchmarking_and_Resource_Measurement.pdf},
  abstract = {Proper benchmarking and resource measurement is an important topic, because benchmarking is a widely-used method for the comparative evaluation of tools and algorithms in many research areas. It is essential for researchers, tool developers, and users, as well as for competitions. We formulate a set of requirements that are indispensable for reproducible benchmarking and reliable resource measurement of automatic solvers, verifiers, and similar tools, and discuss limitations of existing methods and benchmarking tools. Fulfilling these requirements in a benchmarking framework is complex and can (on Linux) currently only be done by using the cgroups feature of the kernel. We provide BenchExec, a ready-to-use, tool-independent, and free implementation of a benchmarking framework that fulfills all presented requirements, making reproducible benchmarking and reliable resource measurement easy. Our framework is able to work with a wide range of different tools and has proven its reliability and usefulness in the International Competition on Software Verification.},
  keyword = {Benchmarking},
  annote = {An <a href="https://www.sosy-lab.org/research/bib/Year/2017.complete.html#Benchmarking-STTT">extended version</a> of this article appeared in STTT.},
}
Additional Infos
An extended version of this article appeared in STTT.
-
Sliced Path Prefixes: An Effective Method to Enable Refinement Selection.
In S. Graf and
M. Viswanathan, editors,
Proceedings of the 35th IFIP WG 6.1 International Conference on
Formal Techniques for Distributed Objects, Components, and Systems (FORTE 2015, Grenoble, France, June 2-4),
LNCS 9039,
pages 228-243,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-319-19195-9_15
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Automatic software verification relies on constructing, for a given program, an abstract model that is (1) abstract enough to avoid state-space explosion and (2) precise enough to reason about the specification. Counterexample-guided abstraction refinement is a standard technique that suggests to extract information from infeasible error paths, in order to refine the abstract model if it is too imprecise. Existing approaches -including our previous work- do not choose the refinement for a given path systematically. We present a method that generates alternative refinements and allows to systematically choose a suited one. The method takes as input one given infeasible error path and applies a slicing technique to obtain a set of new error paths that are more abstract than the original error path but still infeasible, each for a different reason. The (more abstract) constraints of the new paths can be passed to a standard refinement procedure, in order to obtain a set of possible refinements, one for each new path. Our technique is completely independent from the abstract domain that is used in the program analysis, and does not rely on a certain proof technique, such as SMT solving. We implemented the new algorithm in the verification framework CPAchecker and made our extension publicly available. The experimental evaluation of our technique indicates that there is a wide range of possibilities on how to refine the abstract model for a given error path, and we demonstrate that the choice of which refinement to apply to the abstract model has a significant impact on the verification effectiveness and efficiency.
BibTeX Entry
@inproceedings{FORTE15,
  author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler},
  title = {Sliced Path Prefixes: An Effective Method to Enable Refinement Selection},
  booktitle = {Proceedings of the 35th IFIP WG 6.1 International Conference on Formal Techniques for Distributed Objects, Components, and Systems (FORTE~2015, Grenoble, France, June 2-4)},
  editor = {S.~Graf and M.~Viswanathan},
  pages = {228-243},
  year = {2015},
  series = {LNCS~9039},
  publisher = {Springer-Verlag, Heidelberg},
  isbn = {978-3-319-19194-2},
  doi = {10.1007/978-3-319-19195-9_15},
  sha256 = {96e16841eb13a602455334a71a516f509ad1b1e2328edade3d5954062b387e7d},
  url = {https://www.sosy-lab.org/research/cpa-ref-sel/#FORTE15},
  abstract = {Automatic software verification relies on constructing, for a given program, an abstract model that is (1) abstract enough to avoid state-space explosion and (2) precise enough to reason about the specification. Counterexample-guided abstraction refinement is a standard technique that suggests to extract information from infeasible error paths, in order to refine the abstract model if it is too imprecise. Existing approaches ---including our previous work--- do not choose the refinement for a given path systematically. We present a method that generates alternative refinements and allows to systematically choose a suited one. The method takes as input one given infeasible error path and applies a slicing technique to obtain a set of new error paths that are more abstract than the original error path but still infeasible, each for a different reason. The (more abstract) constraints of the new paths can be passed to a standard refinement procedure, in order to obtain a set of possible refinements, one for each new path. Our technique is completely independent from the abstract domain that is used in the program analysis, and does not rely on a certain proof technique, such as SMT solving. We implemented the new algorithm in the verification framework CPAchecker and made our extension publicly available. The experimental evaluation of our technique indicates that there is a wide range of possibilities on how to refine the abstract model for a given error path, and we demonstrate that the choice of which refinement to apply to the abstract model has a significant impact on the verification effectiveness and efficiency.},
  keyword = {CPAchecker,Software Model Checking},
}
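The slicing step introduced by the paper above can be sketched at a high level as follows. The code is only illustrative: an error path is modeled as a list of operations, and is_infeasible stands in for a real feasibility check (for example, an SMT query); neither corresponds directly to CPAchecker's data structures:

# Simplified sketch of extracting sliced path prefixes (illustrative only).
from collections import namedtuple

Op = namedtuple("Op", ["text", "is_assume"])  # one operation of an error path

def sliced_prefixes(path, is_infeasible):
    prefixes = []
    current = []  # feasible prefix of operations collected so far
    for op in path:
        if op.is_assume and is_infeasible(current + [op]):
            # One independent reason for infeasibility: emit a sliced prefix
            # ending in the contradicting assume, then drop that assume and
            # continue, so that later contradictions are found as well.
            prefixes.append(current + [op])
        else:
            current.append(op)
    return prefixes

Each returned prefix can be handed to the usual refinement procedure, yielding one candidate precision per prefix; a selection heuristic such as the one sketched after the Refinement Selection entry above then picks which candidate is used to refine the abstract model.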
-
Interpolation for Value Analysis.
In U. Aßmann,
B. Demuth,
T. Spitta,
G. Püschel, and
R. Kaiser, editors,
Tagungsband Software Engineering 2015, Fachtagung des GI-Fachbereichs Softwaretechnik
(17. März - 20. März 2015, Dresden, Deutschland),
LNI 239,
pages 73-74,
2015.
Gesellschaft für Informatik (GI).
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
BibTeX Entry
@inproceedings{SE15-ExplicitCEGAR,
  author = {Dirk Beyer and Stefan L{\"{o}}we},
  title = {Interpolation for Value Analysis},
  booktitle = {Tagungsband Software Engineering 2015, Fachtagung des GI-Fachbereichs Softwaretechnik (17. M{\"{a}}rz - 20. M{\"{a}}rz 2015, Dresden, Deutschland)},
  editor = {U.~A{\ss}mann and B.~Demuth and T.~Spitta and G.~P{\"{u}}schel and R.~Kaiser},
  pages = {73-74},
  year = {2015},
  series = {{LNI}~239},
  publisher = {Gesellschaft f{\"{u}}r Informatik ({GI})},
  url = {},
  keyword = {CPAchecker,Software Model Checking},
  annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2013.html#FASE13">full article on this topic</a> that appeared in Proc. FASE 2013.},
  doinone = {DOI not available},
  urlpub = {https://dl.gi.de/handle/20.500.12116/2495},
}
Additional Infos
This is a summary of a full article on this topic that appeared in Proc. FASE 2013.
-
CPAchecker with Support for Recursive Programs and Floating-Point Arithmetic (Competition Contribution).
In C. Baier and
C. Tinelli, editors,
Proceedings of the 21st International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2015, London, UK, April 13-17),
LNCS 9035,
pages 423-425,
2015.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-662-46681-0_34
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
We submit to SV-COMP'15 the software-verification framework CPAchecker. The submitted configuration is a combination of seven different analyses, based on explicit-value analysis, k-induction, predicate analysis, and concrete memory graphs. These analyses use concepts such as CEGAR, lazy abstraction, interpolation, adjustable-block encoding, bounded model checking, invariant generation, and block-abstraction memoization. Found counterexamples are cross-checked by a bit-precise analysis. The combination of several different analyses copes well with the diversity of the verification tasks in SV-COMP.
BibTeX Entry
@inproceedings{CPACHECKER-COMP15,
  author = {Matthias Dangl and Stefan L{\"{o}}we and Philipp Wendler},
  title = {{{\sc CPAchecker}} with Support for Recursive Programs and Floating-Point Arithmetic (Competition Contribution)},
  booktitle = {Proceedings of the 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2015, London, UK, April 13-17)},
  editor = {C.~Baier and C.~Tinelli},
  pages = {423--425},
  year = {2015},
  series = {LNCS~9035},
  publisher = {Springer-Verlag, Heidelberg},
  isbn = {978-3-662-46680-3},
  doi = {10.1007/978-3-662-46681-0_34},
  sha256 = {},
  url = {https://doi.org/10.1007/978-3-662-46681-0_34},
  pdf = {https://www.sosy-lab.org/research/pub/2015-TACAS.CPAchecker_with_Support_for_Recursive_Programs_and_Floating-Point_Arithmetic.pdf},
  abstract = {We submit to SV-COMP'15 the software-verification framework CPAchecker. The submitted configuration is a combination of seven different analyses, based on explicit-value analysis, k-induction, predicate analysis, and concrete memory graphs. These analyses use concepts such as CEGAR, lazy abstraction, interpolation, adjustable-block encoding, bounded model checking, invariant generation, and block-abstraction memoization. Found counterexamples are cross-checked by a bit-precise analysis. The combination of several different analyses copes well with the diversity of the verification tasks in SV-COMP.},
  keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking},
  annote = {Won categories ControlFlow, MemorySafety, and Overall, and received three silver and two bronze medals in <span style="white-space: nowrap"><a href="https://sv-comp.sosy-lab.org/2015/">SV-COMP'15</a></span>},
}
Additional Infos
Won categories ControlFlow, MemorySafety, and Overall, and received three silver and two bronze medals in SV-COMP'15
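The sequential combination with counterexample cross-checking described in the abstract above can be summarized by a small driver sketch. The verdict strings, the callable interface, and precise_check are assumptions made for illustration and do not mirror CPAchecker's internal API:

# Schematic sketch of a sequential analysis combination (illustrative only).
# Each analysis is a callable returning ("true", None), ("false", counterexample),
# or ("unknown", None); precise_check re-validates a counterexample bit-precisely.

def sequential_verify(analyses, precise_check, task):
    for analysis in analyses:
        verdict, counterexample = analysis(task)
        if verdict == "true":
            return "true"
        if verdict == "false" and precise_check(task, counterexample):
            return "false"
        # On "unknown" or a spurious counterexample, fall through to the next,
        # differently configured analysis.
    return "unknown"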
-
Precision Reuse in CPAchecker.
In W. Hasselbring and
N. C. Ehmke, editors,
Tagungsband Software Engineering 2014, Fachtagung des GI-Fachbereichs Softwaretechnik
(25. Februar - 28. Februar 2014, Kiel, Deutschland),
LNI 227,
pages 41-42,
2014.
Gesellschaft für Informatik (GI).
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Abstract
Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are reasonably small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems created from 62 Linux kernel device drivers with 1119 revisions.
BibTeX Entry
@inproceedings{SE14-Reuse,
  author = {Dirk Beyer and Stefan L{\"{o}}we and Evgeny Novikov and Andreas Stahlbauer and Philipp Wendler},
  title = {Precision Reuse in CPAchecker},
  booktitle = {Tagungsband Software Engineering 2014, Fachtagung des GI-Fachbereichs Softwaretechnik (25. Februar - 28. Februar 2014, Kiel, Deutschland)},
  editor = {W.~Hasselbring and N.~C.~Ehmke},
  pages = {41--42},
  year = {2014},
  series = {{LNI}~227},
  publisher = {Gesellschaft f{\"{u}}r Informatik ({GI})},
  pdf = {https://dl.gi.de/bitstream/handle/20.500.12116/30949/041.pdf?sequence=1&isAllowed=y},
  abstract = {Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are reasonably small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems created from 62 Linux kernel device drivers with 1119 revisions.},
  keyword = {CPAchecker,Software Model Checking},
  annote = {This is a summary of a <a href="https://www.sosy-lab.org/research/bib/Year/2013.html#FSE13">full article on this topic</a> that appeared in Proc. ESEC/FSE 2013.},
  doinone = {DOI not available},
  urlpub = {https://dl.gi.de/handle/20.500.12116/30949},
}
Additional Infos
This is a summary of a full article on this topic that appeared in Proc. ESEC/FSE 2013.
-
CPAchecker with Sequential Combination of
Explicit-Value Analyses and Predicate Analyses (Competition Contribution).
In E. Abraham and
K. Havelund, editors,
Proceedings of the 20th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2014, Grenoble, France, April 5-13),
LNCS 8413,
pages 392-394,
2014.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-54862-8_27
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
CPAchecker is a framework for software verification, built on the foundations of configurable program analysis (CPA). For the SV-COMP'14, we file a CPAchecker configuration that runs up to five analyses in sequence. The first two analyses of our approach utilize the explicit-value domain for modeling the state space, while the remaining analyses are based on predicate abstraction. In addition to that, a bit-precise counterexample checker comes into action whenever an analysis finds a counterexample. The combination of conceptually different analyses is key to the success of our verification approach, as the diversity of verification tasks is taken into account.
BibTeX Entry
@inproceedings{CPACHECKER-COMP14,
  author = {Stefan~L{\"{o}}we and Mikhail~U.~Mandrykin and Philipp~Wendler},
  title = {{{\sc CPAchecker}} with Sequential Combination of Explicit-Value Analyses and Predicate Analyses (Competition Contribution)},
  booktitle = {Proceedings of the 20th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2014, Grenoble, France, April 5-13)},
  editor = {E.~Abraham and K. Havelund},
  pages = {392-394},
  year = {2014},
  series = {LNCS~8413},
  publisher = {Springer-Verlag, Heidelberg},
  isbn = {978-3-642-54861-1},
  doi = {10.1007/978-3-642-54862-8_27},
  sha256 = {},
  url = {https://doi.org/10.1007/978-3-642-54862-8_27},
  pdf = {https://www.sosy-lab.org/research/pub/2014-TACAS.CPAchecker_with_Sequential_Combination_of_Explicit-Value_Analyses_and_Predicate_Analyses.pdf},
  abstract = {CPAchecker is a framework for software verification, built on the foundations of configurable program analysis (CPA). For the SV-COMP'14, we file a CPAchecker configuration that runs up to five analyses in sequence. The first two analyses of our approach utilize the explicit-value domain for modeling the state space, while the remaining analyses are based on predicate abstraction. In addition to that, a bit-precise counterexample checker comes into action whenever an analysis finds a counterexample. The combination of conceptually different analyses is key to the success of our verification approach, as the diversity of verification tasks is taken into account.},
  keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking},
  annote = {Won categories ControlFlow, MemorySafety, and Simple, and received one silver and one bronze medal in <span style="white-space: nowrap"><a href="https://sv-comp.sosy-lab.org/2014/">SV-COMP'14</a></span>},
}
Additional Infos
Won categories ControlFlow, MemorySafety, and Simple, and received one silver and one bronze medal in SV-COMP'14
-
Precision Reuse for Efficient Regression Verification.
In B. Meyer,
L. Baresi, and
M. Mezini, editors,
Proceedings of the 9th Joint Meeting of the European Software Engineering Conference and
the ACM SIGSOFT Symposium on Foundations of Software Engineering (ESEC/FSE 2013, St. Petersburg, Russia, August 18-26),
pages 389-399,
2013.
ACM.
doi:10.1145/2491411.2491429
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are reasonably small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems created from 62 Linux kernel device drivers with 1119 revisions.
BibTeX Entry
@inproceedings{FSE13,
  author = {Dirk Beyer and Stefan L{\"o}we and Evgeny Novikov and Andreas Stahlbauer and Philipp Wendler},
  title = {Precision Reuse for Efficient Regression Verification},
  booktitle = {Proceedings of the 9th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on Foundations of Software Engineering (ESEC/FSE 2013, St. Petersburg, Russia, August 18-26)},
  editor = {B.~Meyer and L.~Baresi and M.~Mezini},
  pages = {389-399},
  year = {2013},
  publisher = {ACM},
  isbn = {},
  doi = {10.1145/2491411.2491429},
  url = {https://www.sosy-lab.org/research/cpa-reuse/},
  pdf = {https://www.sosy-lab.org/research/pub/2013-FSE.Precision_Reuse_for_Efficient_Regression_Verification.pdf},
  abstract = {Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are reasonably small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems created from 62 Linux kernel device drivers with 1119 revisions.},
  keyword = {CPAchecker,Software Model Checking},
}
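The reuse of abstraction precisions across revisions, as described above, boils down to storing the final precision of one verification run and seeding the next run with it. The sketch below illustrates this; the JSON file format, the precision representation (a map from program locations to lists of tracked facts), and the verify callback are assumptions for illustration, not the format used by CPAchecker:

# Sketch of precision reuse for regression verification (illustrative only).
import json

def verify_revision(source_file, precision_file, verify):
    try:
        with open(precision_file) as f:
            initial_precision = json.load(f)  # reuse precision of the previous revision
    except FileNotFoundError:
        initial_precision = {}                # first revision: start with empty precision
    # `verify` stands in for a full verification run that returns its verdict
    # and the precision it ended up with after all refinements.
    verdict, final_precision = verify(source_file, initial_precision)
    with open(precision_file, "w") as f:
        json.dump(final_precision, f)         # store for the next revision
    return verdict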
-
Explicit-State Software Model Checking Based on CEGAR and Interpolation.
In V. Cortellessa and
D. Varro, editors,
Proceedings of the 16th International Conference on
Fundamental Approaches to Software Engineering (FASE 2013, Rome, Italy, March 20-22),
LNCS 7793,
pages 146-162,
2013.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-37057-1_11
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Abstract
Abstraction, counterexample-guided refinement, and interpolation are techniques that are essential to the success of predicate-based program analysis. These techniques have not yet been applied together to explicit-value program analysis. We present an approach that integrates abstraction and interpolation-based refinement into an explicit-value analysis, i.e., a program analysis that tracks explicit values for a specified set of variables (the precision). The algorithm uses an abstract reachability graph as central data structure and a path-sensitive dynamic approach for precision adjustment. We evaluate our algorithm on the benchmark set of the Competition on Software Verification 2012 (SV-COMP'12) to show that our new approach is highly competitive. We also show that combining our new approach with an auxiliary predicate analysis scores significantly higher than the SV-COMP'12 winner.
BibTeX Entry
@inproceedings{FASE13,
  author = {Dirk Beyer and Stefan L{\"o}we},
  title = {Explicit-State Software Model Checking Based on {CEGAR} and Interpolation},
  booktitle = {Proceedings of the 16th International Conference on Fundamental Approaches to Software Engineering (FASE~2013, Rome, Italy, March 20-22)},
  editor = {V.~Cortellessa and D.~Varro},
  pages = {146-162},
  year = {2013},
  series = {LNCS~7793},
  publisher = {Springer-Verlag, Heidelberg},
  isbn = {978-3-642-37056-4},
  doi = {10.1007/978-3-642-37057-1_11},
  sha256 = {3e2ba52da100fd835736b3673fa47a3429522dd3f4c0834b13f29d6a68a8bd45},
  url = {},
  abstract = {Abstraction, counterexample-guided refinement, and interpolation are techniques that are essential to the success of predicate-based program analysis. These techniques have not yet been applied together to explicit-value program analysis. We present an approach that integrates abstraction and interpolation-based refinement into an explicit-value analysis, i.e., a program analysis that tracks explicit values for a specified set of variables (the precision). The algorithm uses an abstract reachability graph as central data structure and a path-sensitive dynamic approach for precision adjustment. We evaluate our algorithm on the benchmark set of the Competition on Software Verification 2012 (SV-COMP'12) to show that our new approach is highly competitive. We also show that combining our new approach with an auxiliary predicate analysis scores significantly higher than the SV-COMP'12 winner.},
  keyword = {CPAchecker,Software Model Checking},
}
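The integration of CEGAR and interpolation into an explicit-value analysis, as described in the abstract above, follows the classic abstract-check-refine loop, with the precision being simply the set of variables whose values are tracked. The sketch below shows that loop; explore, is_feasible, and interpolate_variables are placeholders for the real reachability analysis, a concrete feasibility check, and interpolation-based extraction of relevant variables, so this is a schematic view rather than the paper's algorithm in full detail:

# High-level sketch of CEGAR for an explicit-value analysis (illustrative only).

def cegar_value_analysis(program, explore, is_feasible, interpolate_variables):
    precision = set()                   # variables to track; initially none
    while True:
        error_path = explore(program, precision)
        if error_path is None:
            return ("safe", None)       # abstract model contains no error path
        if is_feasible(error_path):
            return ("bug", error_path)  # genuine counterexample
        # Spurious counterexample: learn which variables must be tracked to
        # rule out this path, then re-explore with the refined precision.
        precision |= interpolate_variables(error_path)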
-
CPAchecker with Explicit-Value Analysis
Based on CEGAR and Interpolation (Competition Contribution).
In N. Piterman and
S. Smolka, editors,
Proceedings of the 19th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2013, Rome, Italy, March 16-24),
LNCS 7795,
pages 610-612,
2013.
Springer.
doi:10.1007/978-3-642-36742-7_44
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
BibTeX Entry
@inproceedings{CPACHECKEREXPLICIT-COMP13,
  author = {Stefan L{\"{o}}we},
  title = {{{\sc CPAchecker}} with Explicit-Value Analysis Based on {CEGAR} and Interpolation (Competition Contribution)},
  booktitle = {Proceedings of the 19th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2013, Rome, Italy, March 16-24)},
  editor = {N.~Piterman and S.~Smolka},
  pages = {610-612},
  year = {2013},
  series = {LNCS~7795},
  publisher = {Springer},
  isbn = {978-3-642-36741-0},
  doi = {10.1007/978-3-642-36742-7_44},
  sha256 = {},
  url = {https://doi.org/10.1007/978-3-642-36742-7_44},
  keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking},
  annote = {Received four silver medals in <span style="white-space: nowrap"><a href="https://sv-comp.sosy-lab.org/2013/">SV-COMP'13</a></span>},
}
Additional Infos
Received four silver medals in SV-COMP'13
-
CPAchecker with Adjustable Predicate Analysis (Competition Contribution).
In C. Flanagan and
B. König, editors,
Proceedings of the 18th International Conference on
Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2012, Tallinn, Estonia, March 27-30),
LNCS 7214,
pages 528-530,
2012.
Springer-Verlag, Heidelberg.
doi:10.1007/978-3-642-28756-5_40
Keyword(s):
CPAchecker,
Competition on Software Verification (SV-COMP),
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
CPAchecker is a freely available software-verification framework, built on the concepts of configurable program analysis (CPA). CPAchecker integrates most of the state-of-the-art technologies for software model checking, such as counterexample-guided abstraction refinement (CEGAR), lazy predicate abstraction, interpolation-based refinement, and large-block encoding. The CPA for predicate analysis with adjustable-block encoding (ABE) is very promising in many categories, and thus, we submit a CPAchecker configuration that uses this analysis approach to the competition.
BibTeX Entry
@inproceedings{CPACHECKERABE-COMP12,
  author = {Stefan L{\"{o}}we and Philipp Wendler},
  title = {{{\sc CPAchecker}} with Adjustable Predicate Analysis (Competition Contribution)},
  booktitle = {Proceedings of the 18th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS~2012, Tallinn, Estonia, March 27-30)},
  editor = {C.~Flanagan and B.~K{\"o}nig},
  pages = {528--530},
  year = {2012},
  series = {LNCS~7214},
  publisher = {Springer-Verlag, Heidelberg},
  doi = {10.1007/978-3-642-28756-5_40},
  sha256 = {},
  url = {https://doi.org/10.1007/978-3-642-28756-5_40},
  pdf = {https://www.sosy-lab.org/research/pub/2012-TACAS.CPAchecker_with_Adjustable_Predicate_Analysis.pdf},
  abstract = {CPAchecker is a freely available software-verification framework, built on the concepts of configurable program analysis (CPA). CPAchecker integrates most of the state-of-the-art technologies for software model checking, such as counterexample-guided abstraction refinement (CEGAR), lazy predicate abstraction, interpolation-based refinement, and large-block encoding. The CPA for predicate analysis with adjustable-block encoding (ABE) is very promising in many categories, and thus, we submit a CPAchecker configuration that uses this analysis approach to the competition.},
  keyword = {CPAchecker,Competition on Software Verification (SV-COMP),Software Model Checking},
  annote = {Won category ControlFlowInteger and received one silver and two bronze medals in <span style="white-space: nowrap"><a href="https://sv-comp.sosy-lab.org/2012/">SV-COMP'12</a></span>},
}
Additional Infos
Won category ControlFlowInteger and received one silver and two bronze medals in SV-COMP'12
Internal reports
-
Domain-Type-Guided Refinement Selection Based on Sliced Path Prefixes.
Technical report MIP-1501, Department of Computer Science and Mathematics (FIM),
University of Passau (PA),
January
2015.
doi:10.48550/arXiv.1502.00045
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
BibTeX Entry
@techreport{TR1501-PA15,
  author = {Dirk Beyer and Stefan L{\"o}we and Philipp Wendler},
  title = {Domain-Type-Guided Refinement Selection Based on Sliced Path Prefixes},
  number = {MIP-1501},
  year = {2015},
  doi = {10.48550/arXiv.1502.00045},
  url = {https://www.sosy-lab.org/research/cpa-ref-sel/},
  keyword = {CPAchecker,Software Model Checking},
  annote = {Extended publications based on this article appeared in <a href="https://www.sosy-lab.org/research/bib/Year/2015.complete.html#FORTE15">Proc. FORTE 2015</a> and <a href="https://www.sosy-lab.org/research/bib/Year/2015.complete.html#SPIN15b">Proc. SPIN 2015</a>.},
  institution = {Department of Computer Science and Mathematics (FIM), University of Passau (PA)},
  month = {January},
}
Additional Infos
Extended publications based on this article appeared in Proc. FORTE 2015 and Proc. SPIN 2015.
-
Reusing Precisions for Efficient Regression Verification.
Technical report MIP-1302, Department of Computer Science and Mathematics (FIM),
University of Passau (PA),
May
2013.
doi:10.48550/arXiv.1305.6915
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
Abstract
Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems, namely, 59 device drivers with 1119 revisions from the Linux kernel.
BibTeX Entry
@techreport{TR1302-PA13,
  author = {Dirk Beyer and Stefan L{\"o}we and Evgeny Novikov and Andreas Stahlbauer and Philipp Wendler},
  title = {Reusing Precisions for Efficient Regression Verification},
  number = {MIP-1302},
  year = {2013},
  doi = {10.48550/arXiv.1305.6915},
  url = {https://www.sosy-lab.org/research/cpa-reuse/},
  abstract = {Continuous testing during development is a well-established technique for software-quality assurance. Continuous model checking from revision to revision is not yet established as a standard practice, because the enormous resource consumption makes its application impractical. Model checkers compute a large number of verification facts that are necessary for verifying if a given specification holds. We have identified a category of such intermediate results that are easy to store and efficient to reuse: abstraction precisions. The precision of an abstract domain specifies the level of abstraction that the analysis works on. Precisions are thus a precious result of the verification effort and it is a waste of resources to throw them away after each verification run. In particular, precisions are small and thus easy to store; they are easy to process and have a large impact on resource consumption. We experimentally show the impact of precision reuse on industrial verification problems, namely, 59 device drivers with 1119 revisions from the Linux kernel.},
  keyword = {CPAchecker,Software Model Checking},
  annote = {An <a href="https://www.sosy-lab.org/research/bib/Year/2013.complete.html#FSE13">abbreviated version</a> of this article appeared in Proc. ESEC/FSE 2013.},
  institution = {Department of Computer Science and Mathematics (FIM), University of Passau (PA)},
  month = {May},
}
Additional Infos
An abbreviated version of this article appeared in Proc. ESEC/FSE 2013.
-
Explicit-Value Analysis Based on CEGAR and Interpolation.
Technical report MIP-1205, Department of Computer Science and Mathematics (FIM),
University of Passau (PA),
December
2012.
doi:10.48550/arXiv.1212.6542
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
BibTeX Entry
@techreport{TR1205-PA12,
  author = {Dirk Beyer and Stefan L{\"o}we},
  title = {Explicit-Value Analysis Based on {CEGAR} and Interpolation},
  number = {MIP-1205},
  year = {2012},
  doi = {10.48550/arXiv.1212.6542},
  url = {},
  abstract = {},
  keyword = {CPAchecker,Software Model Checking},
  annote = {},
  institution = {Department of Computer Science and Mathematics (FIM), University of Passau (PA)},
  month = {December},
}
Theses and projects (PhD, MSc, BSc, Project)
-
Effective Approaches to Abstraction Refinement for Automatic Software Verification.
PhD Thesis, University of Passau, Software Systems Lab,
2017.
Keyword(s):
CPAchecker,
Software Model Checking
Publisher's Version
PDF
Supplement
BibTeX Entry
@misc{StefanValueDomain,
  author = {Stefan L{\"{o}}we},
  title = {Effective Approaches to Abstraction Refinement for Automatic Software Verification},
  year = {2017},
  url = {https://www.sosy-lab.org/research/phd/loewe/},
  pdf = {https://www.sosy-lab.org/research/phd/2017.Loewe.Effective_Approaches_to_Abstraction_Refinement_for_Automatic_Software_Verification.pdf},
  keyword = {CPAchecker,Software Model Checking},
  howpublished = {PhD Thesis, University of Passau, Software Systems Lab},
  urn = {urn:nbn:de:bvb:739-opus4-4815},
}
Disclaimer:
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.