Strategy Selection for Software Verification
Based on Boolean Features
An Empirical Evaluation of the State of the Art
Abstract
Software verification is the problem of determining, given an input program and a specification, whether the program satisfies the specification. There are different strategies for approaching software verification, but, according to comparative evaluations, none of the known strategies is superior to the others. Therefore, many tools for software verification leave the choice of strategy to the user, which is problematic because the user might not be an expert in strategy selection. In the past, several learning-based approaches have been proposed to perform strategy selection automatically. This automatic choice can be formalized by a strategy selector, a function that takes as input a model of the given program and assigns a verification strategy. The goal of this paper is to identify a small set of program features that (1) can be determined statically for each input program in an efficient way and (2) distinguish the input programs sufficiently well that a strategy selector can be defined which outperforms every constant strategy selector. Our results can serve as a baseline for future comparisons, because our strategy selector is simple and easy to understand, yet still powerful enough to outperform the individual strategies. We evaluate our feature set and strategy selector on a large set of 5 687 verification tasks and provide a replication package for comparative evaluation.
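To illustrate the notion of a strategy selector over boolean features, the following is a minimal sketch. The feature names (`has_loops`, `uses_arrays`, `uses_floats`) and the selection rules are illustrative assumptions, not the feature set or selector from the paper:

```python
from typing import Dict

# A program model as a vector of boolean features.
# These feature names are hypothetical examples, not the paper's feature set.
Features = Dict[str, bool]

def strategy_selector(features: Features) -> str:
    """Map the boolean feature vector of an input program to a strategy.

    The rules below are purely illustrative; a real selector would be
    derived from empirical data, as done in the paper.
    """
    if features["has_loops"] and not features["uses_arrays"]:
        return "k-induction"
    if features["uses_floats"]:
        return "predicate-abstraction"
    return "value-analysis"

# Example: a program with loops but no arrays or floats
program_features = {"has_loops": True, "uses_arrays": False, "uses_floats": False}
print(strategy_selector(program_features))  # -> k-induction
```

A constant strategy selector, by contrast, ignores the features and always returns the same strategy; the paper's goal is a feature-based selector that outperforms every such constant selector.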
Contents
The supplementary archive contains CPAchecker and all data from our experiments. The following files are in this archive:
- CPAchecker in revision 28 268 from the trunk
- The additional test-set definitions and configurations used for the experiments
- The results presented in our paper
- A README file explaining how to reproduce the results
Results
The result tables underlying the figures in our article are also available:
- Table with the results of all basic strategies on the SV-COMP tasks (Table I in the article)
- Table with the results of all sequential strategies and of the strategy selectors (Random, Model-Based, and Oracle) that use these sequential strategies on the SV-COMP tasks (Table II in the article)
You can click on a cell in the status column of an algorithm in these tables to see its log output.