A loop invariant inference tool that combines LLMs with traditional Bounded Model Checking tools.

The tool currently runs only on Windows.
### Installation

```
cd /LM2C/
```

- If you want to run experiments on GPT models, prepare a Python environment with Python 3.7.10, then run:

  ```
  pip install -r requirements.txt
  ```

  Enter your own API key in `GPT_chat/GPT.py`, and disable every `from GPT_chat import Llama3chat` import in `RunAllLinear.py` and `GPT_chat/GPT.py`.
- If you want to run experiments on Llama3-8B, prepare a Python environment with Python >= 3.8, then run:

  ```
  pip install -r requirementsllama.txt
  ```

  The recommended configuration is a GPU more powerful than an RTX 4090 and more than 48 GB of RAM.
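As an illustration of the two GPT-setup edits above (the variable and import names below are assumptions; check `GPT_chat/GPT.py` and `RunAllLinear.py` for the actual code):

```python
# GPT_chat/GPT.py (sketch): put your own API key here. The variable name
# `api_key` is an assumption; use whatever name the file actually defines.
api_key = "sk-your-key-here"

# RunAllLinear.py / GPT_chat/GPT.py (sketch): when running GPT-only
# experiments, disable the Llama3 import by commenting it out, so the
# Llama dependencies are never loaded:
# from GPT_chat import Llama3chat
```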
### Run

```
cd /LM2C/
```

- All parameter configurations are in the `Config.py` file; modify them as needed.
- Before you start experimenting, go to the `Result` folder and create the output folder and `.txt` file according to the parameters in `Config.py`.
- If you want to run all linear benchmarks, run:

  ```
  python RunAllLinear.py
  ```

- The results can be found in the `Result` directory.
- If you want to run one specific benchmark, change the range of `i` in `RunAllLinear.py`.
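The "create the output folder and `.txt` file" step above can be sketched as follows; the parameter names `output_dir` and `output_file` are assumptions, so match them to whatever your `Config.py` actually defines:

```python
import os

def prepare_result_files(output_dir, output_file):
    """Create Result/<output_dir>/ and an empty <output_file> inside it.

    Hypothetical helper: the real parameter names live in Config.py.
    """
    folder = os.path.join("Result", output_dir)
    os.makedirs(folder, exist_ok=True)
    log_path = os.path.join(folder, output_file)
    # Create the .txt file if it does not exist yet (leave it alone otherwise).
    open(log_path, "a").close()
    return log_path
```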
### Analyze

- All results are stored in `Result`.
- To compute the average time spent and the number of proposals across the experiment results, run:

  ```
  python averageTimeAndProposal.py
  ```
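A minimal sketch of the kind of aggregation `averageTimeAndProposal.py` performs (the record format here is an assumption; the real script parses the `.txt` logs under `Result`):

```python
def average_time_and_proposals(records):
    """Average solving time and proposal count over (seconds, proposals) pairs."""
    times = [t for t, _ in records]
    proposals = [p for _, p in records]
    return sum(times) / len(times), sum(proposals) / len(proposals)
```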
### Benchmarks

- All benchmarks are in `Benchmarks/`; each instance has three files: a C source file, a CFG JSON file, and an SMT file.
- If you want to add a new instance, you only need to prepare these three files.
- For how to prepare the CFG JSON file and the SMT file, please refer to Code2Inv, which uses Clang to generate them automatically.
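When adding a new instance, a quick sanity check like the following can confirm all three files are present (the exact extensions `.c`, `.json`, and `.smt` are assumptions; match them to the existing benchmarks):

```python
import os

def instance_complete(benchmarks_dir, name):
    """Return True if the C source, CFG JSON, and SMT files all exist."""
    required = [name + ext for ext in (".c", ".json", ".smt")]
    return all(os.path.isfile(os.path.join(benchmarks_dir, f)) for f in required)
```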