Impact • News • Quick Start • Remote Evaluation • LLM-generated Code • Advanced Usage • Result Submission • Citation
BigCodeBench has been used by many LLM teams including:
- Zhipu AI
- Alibaba Qwen
- DeepSeek
- Amazon AWS AI
- Snowflake AI Research
- ServiceNow Research
- Meta AI
- Cohere AI
- Sakana AI
- [2024-10-06] We are releasing `bigcodebench==v0.2.0`!
- [2024-10-05] We create a public code execution API on the Hugging Face space.
- [2024-10-01] We have evaluated 139 models on BigCodeBench-Hard so far. Take a look at the leaderboard!
- [2024-08-19] To make the evaluation fully reproducible, we add a real-time code execution session to the leaderboard. It can be viewed here.
- [2024-08-02] We release `bigcodebench==v0.1.9`.
More News
- [2024-07-18] We announce a subset of BigCodeBench, BigCodeBench-Hard, which includes 148 tasks that are more aligned with real-world programming tasks. The details are available in this blog post. The dataset is available here. The new release is `bigcodebench==v0.1.8`.
- [2024-06-28] We release `bigcodebench==v0.1.7`.
- [2024-06-27] We release `bigcodebench==v0.1.6`.
- [2024-06-19] We start the Hugging Face BigCodeBench Leaderboard! The leaderboard is available here.
- [2024-06-18] We release BigCodeBench, a new benchmark for code generation with 1140 software-engineering-oriented programming tasks. Preprint is available here. PyPI package is available here with the version `0.1.5`.
BigCodeBench is an easy-to-use benchmark for solving practical and challenging tasks via code. It aims to evaluate the true programming capabilities of large language models (LLMs) in a more realistic setting. The benchmark is designed for HumanEval-like function-level code generation tasks, but with much more complex instructions and diverse function calls.
There are two splits in BigCodeBench:
- `Complete`: This split is designed for code completion based on comprehensive docstrings.
- `Instruct`: This split works for instruction-tuned and chat models only, where the models are asked to generate a code snippet based on natural language instructions. The instructions contain only the necessary information and require more complex reasoning.
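To see what the two prompt styles look like in practice, the tasks can be browsed directly from the Hugging Face Hub. Below is a minimal sketch, assuming the dataset is published as `bigcode/bigcodebench` and that each task exposes `complete_prompt` and `instruct_prompt` fields; check the dataset card for the exact split names and schema.

```python
# A minimal sketch for inspecting the two prompt styles locally.
# Assumptions: the dataset id `bigcode/bigcodebench` and the field names
# `complete_prompt` / `instruct_prompt` -- verify them on the dataset card.
from datasets import load_dataset

ds = load_dataset("bigcode/bigcodebench")
print(ds)                              # shows which splits/configs are available

split_name = next(iter(ds))            # pick whichever split is listed first
task = ds[split_name][0]               # first task in that split
print(task["complete_prompt"][:400])   # docstring-style prompt (Complete split)
print(task["instruct_prompt"][:400])   # natural-language prompt (Instruct split)
```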
BigCodeBench focuses on task automation via code generation with diverse function calls and complex instructions, with:
- ✨ Precise evaluation & ranking: See our leaderboard for the latest LLM rankings before & after rigorous evaluation.
- ✨ Pre-generated samples: BigCodeBench accelerates code intelligence research by open-sourcing LLM-generated samples for various models -- no need to re-run the expensive benchmarks!
To get started, please first set up the environment:
# By default, you will use the remote evaluation API to execute the output samples.
pip install bigcodebench --upgrade
# We recommend using `flash-attn` for generating code samples.
pip install packaging ninja
pip install flash-attn --no-build-isolation
# Note: if you have installation problems, consider using the pre-built
# wheels from https://github.com/Dao-AILab/flash-attention/releases
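If you installed `flash-attn`, a quick import check confirms the wheel works in your environment. This is a sketch only (it assumes the package exposes `flash_attn.__version__`) and is not required for remote evaluation.

```python
# Sanity check: confirm flash-attn imports cleanly after installation.
try:
    import flash_attn
    print("flash-attn version:", flash_attn.__version__)
except ImportError as err:
    print("flash-attn is not available:", err)
```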
Install nightly version
# Install to use bigcodebench.generate
pip install "git+https://github.com/bigcode-project/bigcodebench.git" --upgrade
We use greedy decoding as an example to show how to evaluate the generated code samples via the remote API.
Warning
To ease the generation, we use batch inference by default. However, batch inference results can vary across batch sizes and versions, at least for the vLLM backend. If you want more deterministic results for greedy decoding, please set `--bs` to `1`.
Note
Remotely executing on `BigCodeBench-Full` typically takes 6-7 minutes, and on `BigCodeBench-Hard` typically takes 4-5 minutes.
bigcodebench.evaluate \
--model meta-llama/Meta-Llama-3.1-8B-Instruct \
--split [complete|instruct] \
--subset [full|hard] \
--backend [vllm|openai|anthropic|google|mistral|hf]
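For reference, the same evaluation can also be launched from a Python script via `subprocess`. The sketch below uses only the flags documented here; the concrete values (split, subset, batch size) are illustrative.

```python
# A sketch of driving the documented `bigcodebench.evaluate` console script
# from Python. All flags come from the command shown above; `--bs 1` follows
# the earlier warning about more deterministic greedy decoding.
import subprocess

cmd = [
    "bigcodebench.evaluate",
    "--model", "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "--split", "complete",    # or "instruct"
    "--subset", "hard",       # or "full"
    "--backend", "vllm",
    "--bs", "1",              # more deterministic greedy decoding
]
subprocess.run(cmd, check=True)
```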
- All the resulting files will be stored in a folder named `bcb_results`.
- The generated code samples will be stored in a file named `[model_name]--bigcodebench-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated.jsonl`.
- The evaluation results will be stored in a file named `[model_name]--bigcodebench-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated_eval_results.json`.
- The pass@k results will be stored in a file named `[model_name]--bigcodebench-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated_pass_at_k.json`.
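Once a run finishes, the JSON outputs can be inspected directly. The sketch below only relies on the file name pattern above and makes no assumptions about the exact schema; the file name shown is hypothetical.

```python
# A sketch for peeking at the evaluation outputs in `bcb_results`.
# The file name below is hypothetical -- substitute the one produced by your run.
import json
from pathlib import Path

result_file = Path("bcb_results") / (
    "meta-llama--Meta-Llama-3.1-8B-Instruct--bigcodebench-complete"
    "--vllm-0-1-sanitized_calibrated_pass_at_k.json"
)
with result_file.open() as f:
    results = json.load(f)

# Print whatever top-level structure the file contains.
print(json.dumps(results, indent=2)[:1000])
```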
Note
BigCodeBench uses different prompts for base and chat models. By default, the mode is detected via `tokenizer.chat_template` when using `hf`/`vllm` as the backend. For other backends, only chat mode is allowed. Therefore, if your base model comes with a `tokenizer.chat_template`, please add `--direct_completion` to avoid being evaluated in chat mode.
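To check whether your base model ships a chat template (and therefore needs `--direct_completion`), you can inspect the tokenizer with `transformers`. This is a small sketch; the model id is illustrative.

```python
# A sketch: detect whether a tokenizer carries a chat template.
# The model id is illustrative -- substitute your own base model.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
if getattr(tok, "chat_template", None):
    print("chat template found -> consider --direct_completion for base models")
else:
    print("no chat template -> direct completion prompts will be used")
```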
Access OpenAI APIs from OpenAI Console
export OPENAI_API_KEY=<your_openai_api_key>
Access Anthropic APIs from Anthropic Console
export ANTHROPIC_API_KEY=<your_anthropic_api_key>
Access Mistral APIs from Mistral Console
export MISTRAL_API_KEY=<your_mistral_api_key>
Access Gemini APIs from Google AI Studio
export GOOGLE_API_KEY=<your_google_api_key>
We share pre-generated code samples from LLMs we have evaluated on the full set:
- See the attachment of our v0.2.1.post7 release. We include `sanitized_samples_calibrated.zip` for your convenience.
Please refer to the ADVANCED USAGE for more details.
Please email both the generated code samples and the execution results to [email protected] if you would like to contribute your model to the leaderboard. Note that the file names should be in the format of `[model_name]--[revision]--[bigcodebench|bigcodebench-hard]-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated.jsonl` and `[model_name]--[revision]--[bigcodebench|bigcodebench-hard]-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated_eval_results.json`. You can file an issue to remind us if we do not respond to your email within 3 days.
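Before emailing, a quick pattern check can catch misnamed files. The sketch below encodes the naming scheme above as a regular expression; it is illustrative only, not an official validator.

```python
# A sketch that checks submission file names against the documented pattern:
# [model_name]--[revision]--[bigcodebench|bigcodebench-hard]-[instruct|complete]--
# [backend]-[temp]-[n_samples]-sanitized_calibrated(.jsonl|_eval_results.json)
import re

PATTERN = re.compile(
    r"^.+--.+--(bigcodebench|bigcodebench-hard)-(instruct|complete)--"
    r".+-[\d.]+-\d+-sanitized_calibrated(\.jsonl|_eval_results\.json)$"
)

for name in [
    "my-model--main--bigcodebench-hard-instruct--vllm-0-1-sanitized_calibrated.jsonl",
    "my-model--main--bigcodebench-hard-instruct--vllm-0-1-sanitized_calibrated_eval_results.json",
]:
    print(name, "->", "OK" if PATTERN.match(name) else "does not match the expected format")
```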
@article{zhuo2024bigcodebench,
title={BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions},
author={Zhuo, Terry Yue and Vu, Minh Chien and Chim, Jenny and Hu, Han and Yu, Wenhao and Widyasari, Ratnadira and Yusuf, Imam Nur Bani and Zhan, Haolan and He, Junda and Paul, Indraneil and others},
journal={arXiv preprint arXiv:2406.15877},
year={2024}
}