Dirk Beyer, Thomas A. Henzinger, M. Erkan Keremoglu, and Philipp Wendler
Software model checking, as an undecidable problem, has three possible outcomes: (1) the program satisfies the specification, (2) the program does not satisfy the specification, and (3) the model checker fails. The third outcome usually manifests itself as a space-out, a time-out, or one component of the verification tool giving up; in all of these failing cases, the verification tool performs significant computation before the failure, but reports no result. We propose to reformulate the model-checking problem as follows, in order to have the verification tool report a summary of the performed work even in case of failure: given a program and a specification, the model checker returns a condition Ψ (usually a state predicate) such that the program satisfies the specification under the condition Ψ, that is, as long as the program does not leave the set of states in which Ψ is satisfied.
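The idea can be illustrated with a toy example (our own sketch, not taken from the paper or its benchmarks): a program that violates its specification for exactly one input, and a condition Ψ under which it is provably safe.

```python
# Illustrative sketch (not from the paper): a toy "program" whose
# specification is "no runtime error". It fails for exactly one input,
# and is safe under the condition Psi. A conditional model checker that
# cannot prove safety for all states could report this Psi instead of
# reporting nothing.

def program(x):
    # Crashes with ZeroDivisionError when x == -1.
    return 100 // (x + 1)

def psi(x):
    # Output condition: a state predicate; the program satisfies the
    # specification for all states in which Psi holds.
    return x >= 0

# "Verification" by exhaustive checking of a small state space:
# every state that satisfies Psi executes without error.
for x in range(-10, 11):
    if psi(x):
        program(x)  # never raises under Psi
```

Outside of Ψ, the guarantee does not hold: `program(-1)` raises an error, which is exactly the part of the state space a later verification run would still have to examine.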
In our experiments, we investigated, as one major application of conditional model checking, the sequential combination of model checkers with information passing. We give the condition produced by one model checker as input to a second conditional model checker, such that the verification problem for the second is restricted to the part of the state space that is not covered by the condition; that is, the second model checker works on the problems that the first model checker could not solve. Our experiments demonstrate that the repeated application of conditional model checkers, passing information from one model checker to the next, can significantly improve the verification results and performance, i.e., we can now verify programs that we could not verify before.
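The sequential combination can be sketched abstractly as follows (a hypothetical model of our own, with sets of states standing in for real analyses; it is not CPAchecker code):

```python
# Hypothetical sketch (not CPAchecker code): sequential combination of two
# verifiers with information passing. The first verifier runs under a
# resource budget; when it gives up, it emits the condition Psi describing
# the part of the state space it has already verified. The second verifier
# then only has to cover the remainder.

def first_verifier(states, budget):
    """Verify as many states as the budget allows; return the condition
    Psi (here simply the set of states proven safe)."""
    psi = set()
    for state in states:
        if len(psi) >= budget:
            break  # time-out / space-out: give up, but still report Psi
        psi.add(state)
    return psi

def second_verifier(states, psi):
    """Work only on the states not covered by the input condition Psi."""
    return [s for s in states if s not in psi]

states = list(range(8))
psi = first_verifier(states, budget=5)
remaining = second_verifier(states, psi)
# Together, the two verifiers cover the whole state space.
```

The design point the sketch illustrates: even a failed first run is not wasted, because its partial result (Ψ) shrinks the problem handed to the next verifier.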
The supplementary archive contains all data and tables, the benchmark programs and configurations that we used, as well as a ready-to-run version of CPAchecker. The archive contains the following files:
config/
    The CPAchecker configuration files used: explicitAnalysis.properties, predicateAnalysis-cbmc.properties, and cmc-explicit-100s+predicate.properties
programs/
    The benchmark programs.
our-results/
    Our raw results, including log files and tables (see also below).
cmc-*.xml
    The input files for our benchmarking script.
table-*.xml
    The files defining the tables produced by our table-generator script.
The rest of the files are from CPAchecker in revision 5871.
We have made available an example output condition (encoded as an automaton). This output condition was produced by the explicit-value analysis and then used as the input condition by the predicate analysis when analyzing the verification task “token_ring.07.BUG.c” from our benchmark set, as reported in the last column of the results table.
The raw result tables are available here:
CSV files can be found in the “our-results” directory. The columns “Expl.+Pred.” and “Expl.(100s)+Pred.” in the paper were simulated by calculating what the results would have been if we had run the two analyses sequentially; thus, no raw data is available for them.
You can click on the cells in the status columns in these tables to see the CPAchecker log file. Clicking on the column headings (e.g., “walltime” or “cputime”) will produce a graph (this requires JavaScript to be enabled).
In order to reproduce the results, download the archive with everything necessary and follow these steps. As the SMT solver used by CPAchecker (MathSAT4) is only available for Linux, reproducing the results is not possible on other platforms.
CPAchecker requires the Java heap size to be set according to the amount of memory available on your machine. If this parameter is higher than the available memory, the JVM will not start. A good starting value is “2000m” if you have at least 3-4 GB of RAM. For the results in the paper, a value of “12000m” was used.
scripts/cpa.sh -heap HEAP_SIZE -config CONFIG_FILE PROGRAM.c

Example:

scripts/cpa.sh -heap 2000m -config config/predicateAnalysis-cbmc.properties programs/systemc/kundu.c

Output files of CPAchecker are written to the “output” directory.
If your machine has less memory, decrease the heap size specified in the benchmark definition files (cmc-*.xml) in the following line:

<option name="-heap">12000m</option>
To run a benchmark, execute scripts/benchmark.py for the chosen set. Example:

scripts/benchmark.py cmc-systemc.xml

The results (log files and data) will be placed in the “results” directory.
scripts/table-generator.py -x table-systemc.xml

The tables will be placed as HTML files in the “results” directory.