
LMFlow Benchmark


This project provides a unified framework to evaluate generative large language models on different evaluation tasks.

Large Model for All. See our vision.


Supported Models

All decoder models in 🤗 Hugging Face are seamlessly supported. LLaMA, GPT2, GPT-Neo, and Galactica have been fully tested. We will support encoder models soon.

1. Setup

Our package has been fully tested on Linux (Ubuntu 20.04). Other platforms (macOS, Windows) are not fully tested and you may encounter unexpected errors, so we recommend trying it on a Linux machine first or using Google Colab.

git clone https://github.com/research4pan/lmflow-benchmark.git
cd lmflow-benchmark
conda create -n lmflow-benchmark python=3.9 -y
conda activate lmflow-benchmark
pip install -e .

2. Prepare Dataset

Please refer to our doc.

3. Running Scripts

3.1 LMFlow Benchmark

LMFlow Benchmark is an automatic evaluation framework for open-source large language models. We use negative log-likelihood (NLL) as the metric to evaluate three aspects of a language model: chitchat, commonsense reasoning, and instruction-following abilities.
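For reference, the quantity reported is the standard per-token negative log-likelihood of the reference responses under the model, where lower is better. A sketch in our own notation (the exact averaging or normalization used by the scripts may differ):

\mathrm{NLL} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T_i} \log p_\theta\left(y_{i,t} \mid y_{i,<t},\, x_i\right)

Here x_i is the i-th prompt, y_i is its reference response of length T_i tokens, N is the number of evaluation examples, and p_\theta is the model under evaluation.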

You can directly run the LMFlow Benchmark evaluation to obtain results and participate in the LLM comparison. For example, to evaluate GPT2 XL, one may execute

./scripts/run_benchmark.sh --model_name_or_path gpt2-xl

--model_name_or_path is required; you may fill in a Hugging Face model name or a local model path here.
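For instance, a locally stored checkpoint can be evaluated in the same way (the path below is only a placeholder):

./scripts/run_benchmark.sh --model_name_or_path /path/to/your/local/model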

To check the evaluation results, see benchmark.log in ./output_dir/gpt2-xl_lmflow_chat_nll_eval, ./output_dir/gpt2-xl_all_nll_eval, and ./output_dir/gpt2-xl_commonsense_qa_eval.
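For example, the chitchat NLL results of the GPT2 XL run above can be viewed with the following command (assuming the default output location shown above):

cat ./output_dir/gpt2-xl_lmflow_chat_nll_eval/benchmark.log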

Vision

Make the evaluation of LLMs automatic. Hello there! We are excited to announce the upcoming release of our code repository, which includes a complete LLM training process, enabling users to quickly build their own language models and train them effectively.

Our code repository is not just a simple model; it includes the complete training workflow, model optimization, and testing tools. You can use it to build various types of language models, including conversation models, question-answering models, and text generation models, among others.

Moreover, we aim to create an open and democratic LLM sharing platform where people can share their checkpoints and experiences to collectively improve the skills of the community. We welcome anyone who is interested in LLM to participate and join us in building an open and friendly community!

Whether you are a beginner or an expert, we believe that you can benefit from this platform. Let's work together to build a vibrant and innovative LLM community!


Contributors

Citation

If you find this repository useful, please consider giving ⭐ and citing:

@misc{lmflow,
  author = {Shizhe Diao and Rui Pan and Hanze Dong and KaShun Shum and Jipeng Zhang and Wei Xiong and Tong Zhang},
  title = {LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://optimalscale.github.io/LMFlow/}},
}
