This repo contains the evaluation code for the paper "SciCode: A Research Coding Benchmark Curated by Scientists"
[2024-11-04]: The leaderboard is live! Check it here. We have also added Claude Sonnet 3.5 (new) results.
[2024-10-01]: We have added OpenAI o1-mini and o1-preview results.
[2024-09-26]: SciCode is accepted at NeurIPS D&B Track 2024.
[2024-08-22]: The SciCode benchmark has been successfully integrated into OpenCompass.
[2024-07-24]: We have added the scientist-annotated background information and now support the with-background evaluation setup.
SciCode is a challenging benchmark designed to evaluate the capabilities of language models (LMs) in generating code for solving realistic scientific research problems. It has a diverse coverage of 16 subdomains from 6 domains: Physics, Math, Material Science, Biology, and Chemistry. Unlike previous benchmarks that consist of exam-like question-answer pairs, SciCode is converted from real research problems. SciCode problems naturally factorize into multiple subproblems, each involving knowledge recall, reasoning, and code synthesis. In total, SciCode contains 338 subproblems decomposed from 80 challenging main problems, and it offers optional descriptions specifying useful scientific background information as well as scientist-annotated gold-standard solutions and test cases for evaluation. OpenAI o1-preview, the best-performing model among those tested, can solve only 7.7% of the main problems in the most realistic setting. Broadly, SciCode mirrors scientists' everyday workflow of identifying critical scientific concepts and facts and then transforming them into computation and simulation code. We believe SciCode not only demonstrates contemporary LLMs' progress toward becoming helpful assistants for scientists but also sheds light on the future building and evaluation of scientific AI.
SciCode sources challenging and realistic research-level coding problems across 6 natural science disciplines, covering a total of 16 subfields. SciCode mainly focuses on:

1. Numerical methods
2. Simulation of systems
3. Scientific calculation

These are the tasks we believe require intense scientific knowledge and reasoning to optimally test LMs' science capabilities.
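To give a concrete flavor of the kind of code these tasks call for, here is a small, self-contained numerical-method example. It is a hypothetical illustration written for this readme, not a problem taken from the benchmark: a classical fourth-order Runge-Kutta step for integrating an ODE, in the style of a single subproblem.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """Advance the ODE y' = f(t, y) by one step of size dt
    using the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy usage: integrate y' = -y from y(0) = 1 to t = 1 (exact answer: exp(-1)).
y, t, dt = np.array([1.0]), 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, dt)
    t += dt
print(y, np.exp(-1.0))
```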
| Models | Main Problem Resolve Rate (%) | Subproblem Resolve Rate (%) |
|---|---|---|
| 🥇 OpenAI o1-preview | 7.7 | 28.5 |
| 🥈 Claude3.5-Sonnet | 4.6 | 26.0 |
| 🥉 Claude3.5-Sonnet (new) | 4.6 | 25.3 |
| Deepseek-Coder-v2 | 3.1 | 21.2 |
| GPT-4o | 1.5 | 25.0 |
| GPT-4-Turbo | 1.5 | 22.9 |
| OpenAI o1-mini | 1.5 | 22.2 |
| Gemini 1.5 Pro | 1.5 | 21.9 |
| Claude3-Opus | 1.5 | 21.5 |
| Llama-3.1-405B-Chat | 1.5 | 19.8 |
| Claude3-Sonnet | 1.5 | 17.0 |
| Qwen2-72B-Instruct | 1.5 | 17.0 |
| Llama-3.1-70B-Chat | 0.0 | 17.0 |
| Mixtral-8x22B-Instruct | 0.0 | 16.3 |
| Llama-3-70B-Chat | 0.0 | 14.6 |
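To relate the two columns: the subproblem rate counts individual subproblems whose generated code passes its tests, while a main problem is counted as resolved only when all of its subproblems pass. The snippet below is a minimal sketch of tallying such rates from per-subproblem results; the input format is hypothetical and this is not the benchmark's evaluation script.

```python
from collections import defaultdict

def resolve_rates(results):
    """results: list of (main_problem_id, subproblem_id, passed) tuples,
    where `passed` is True if the generated code passed that subproblem's tests.
    Returns (main_problem_rate, subproblem_rate) as percentages."""
    sub_total = len(results)
    sub_passed = sum(passed for _, _, passed in results)

    by_main = defaultdict(list)
    for main_id, _, passed in results:
        by_main[main_id].append(passed)
    # A main problem counts as resolved only if every one of its subproblems passed.
    main_resolved = sum(all(flags) for flags in by_main.values())

    return 100.0 * main_resolved / len(by_main), 100.0 * sub_passed / sub_total

# Hypothetical toy example: two main problems, three subproblems total.
print(resolve_rates([("p1", "p1.1", True), ("p1", "p1.2", False), ("p2", "p2.1", True)]))
# -> (50.0, 66.66...)
```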
- Clone this repository with `git clone [email protected]:scicode-bench/SciCode.git`
- Install the `scicode` package with `pip install -e .`
- Download the numeric test results and save them as `./eval/data/test_data.h5` (a quick sanity check is sketched after this list)
- Run `eval/scripts/gencode_json.py` to generate new model outputs (see the `eval/scripts` readme for more information)
- Run `eval/scripts/test_generated_code.py` to evaluate the generated code against the unit tests
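Before running the evaluation scripts, it can help to confirm that the downloaded HDF5 file is readable. The snippet below is only a convenience sketch and is not part of the official setup; it assumes `h5py` is installed, and it does not assume anything about the file's internal layout beyond listing its top-level keys.

```python
import h5py

# Hypothetical sanity check: open the downloaded test data and list its top-level groups.
path = "./eval/data/test_data.h5"
with h5py.File(path, "r") as f:
    keys = list(f.keys())
    print(f"{path}: {len(keys)} top-level entries, e.g. {keys[:5]}")
```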
More information, including an FAQ section, is provided on our website. If you have trouble reaching the website, you can find the markdown source in its GitHub repository.
- Minyang Tian: [email protected]
- Eliu Huerta: [email protected]
- Hao Peng: [email protected]
@misc{tian2024scicode,
title={SciCode: A Research Coding Benchmark Curated by Scientists},
author={Minyang Tian and Luyu Gao and Shizhuo Dylan Zhang and Xinan Chen and Cunwei Fan and Xuefei Guo and Roland Haas and Pan Ji and Kittithat Krongchon and Yao Li and Shengyan Liu and Di Luo and Yutao Ma and Hao Tong and Kha Trinh and Chenyu Tian and Zihan Wang and Bohao Wu and Yanyu Xiong and Shengzhu Yin and Minhui Zhu and Kilian Lieret and Yanxin Lu and Genglin Liu and Yufeng Du and Tianhua Tao and Ofir Press and Jamie Callan and Eliu Huerta and Hao Peng},
year={2024},
eprint={2407.13168},
archivePrefix={arXiv},
primaryClass={cs.AI}
}