[SALMON logo, generated by DALL·E 3]

SALMON: Self-Alignment with Principle-Following Reward Models


SALMON is a new RLAIF paradigm for self-aligning language models from scratch, using only a small set of human-defined principles as guidance. Central to our approach is a principle-following reward model. Trained on synthetic preference data, this model can generate reward scores based on arbitrary human-defined principles. For comprehensive details and insights, we kindly direct you to our paper.

[Figure: SALMON comparison]

Dromedary-2

We release the Dromedary-2 model, which is trained with the SALMON paradigm on the LLaMA-2-70b base language model, with Principle-Driven Self-Alignment as the Supervised Fine-Tuning (SFT) strategy to initialize the policy model.

This codebase focuses on the Reinforcement Learning (RL) fine-tuning stage of the SALMON paradigm, while the Self-Align SFT training pipeline is released in the original Dromedary repo.

[Figure: Dromedary-2 pipeline]

Model Weights

We release Dromedary-2 weights as delta weights to comply with the LLaMA model license. You can load our QLoRA weights on top of the LLaMA-2 base model to obtain Dromedary-2. Instructions:

  1. Get the original LLaMA-2 weights in the Hugging Face format by following the instructions here.
  2. Download the QLoRA delta weights from our Hugging Face model hub.
  3. Load the model with Hugging Face's PEFT-LoRA and QLoRA's bitsandbytes.

NOTE: Dromedary-2 is trained with QLoRA and the bfloat16 data type. While it is possible to merge the QLoRA weights into the quantized model and thus enable inference with libraries such as TGI and vLLM, we found that the merged weights can lead to degraded performance. Therefore, we recommend directly loading the QLoRA weights with the PEFT-LoRA framework.

```python
# Please check the inference section for the complete inference code.
system_prompt = (
    "# Dromedary\n\n## System Overview\n\n"
    "Consider an AI assistant whose codename is Dromedary, developed by the Self-Align team. "
    "Dromedary is trained on data up until Sept-2022, and it endeavors to be a helpful, ethical and reliable assistant.\n\n"
    "## User Conversation\n\n"
)
user_prompt = "### User\n"
assistant_prompt = "### Dromedary\n"
separator = "\n\n"

# USAGE: system_prompt + user_prompt + `user_message` + separator + assistant_prompt + `assistant_message` + separator + user_prompt ...
```
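Concretely, the USAGE pattern above can be sketched as a small helper that assembles a multi-turn conversation string. Note that `build_prompt` is a hypothetical illustration, not part of the SALMON codebase:

```python
# Hypothetical helper assembling the Dromedary-2 conversation prompt
# following the USAGE pattern above. Not part of the SALMON codebase.
system_prompt = (
    "# Dromedary\n\n## System Overview\n\n"
    "Consider an AI assistant whose codename is Dromedary, developed by the Self-Align team. "
    "Dromedary is trained on data up until Sept-2022, and it endeavors to be a helpful, ethical and reliable assistant.\n\n"
    "## User Conversation\n\n"
)
user_prompt = "### User\n"
assistant_prompt = "### Dromedary\n"
separator = "\n\n"


def build_prompt(turns):
    """Build the model input from a list of (user_message, assistant_message)
    pairs; pass None as the final assistant_message to leave the prompt open
    for generation."""
    prompt = system_prompt
    for user_message, assistant_message in turns:
        prompt += user_prompt + user_message + separator + assistant_prompt
        if assistant_message is not None:
            prompt += assistant_message + separator
    return prompt


print(build_prompt([("What is RLAIF?", None)]))
```

The resulting string ends with the `### Dromedary` header, so the model's continuation is the assistant's reply.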

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

dtype = torch.bfloat16

model_path = "path/to/llama-2-70b-hf"
qlora_path = "path/to/dromedary-2-70b-qlora-delta-v0"

# 4-bit NF4 quantization with bfloat16 compute, matching the training setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=dtype,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

# Load the quantized base model, then apply the QLoRA weights on top.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": "cuda:0"},
    quantization_config=bnb_config,
    torch_dtype=dtype,
)

model = PeftModel.from_pretrained(
    model,
    qlora_path,
    is_trainable=False,
)
```

Setup

1. Clone this repository and navigate to the SALMON folder

```shell
git clone https://github.com/IBM/SALMON
cd SALMON
```

2. Install the packages

```shell
conda create -n salmon python=3.9 -y
conda activate salmon
pip install -r requirements.txt
```

Inference

We provide a chatbot demo for Dromedary-2.

Training

We provide the full training pipeline of Dromedary-2 for reproduction.

Prompts

All the human supervision used in this project can be found here.

Citation

Please consider citing the following papers if you use the data or code in this repo.

```bibtex
@misc{sun2023principledriven,
      title={Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision},
      author={Zhiqing Sun and Yikang Shen and Qinhong Zhou and Hongxin Zhang and Zhenfang Chen and David Cox and Yiming Yang and Chuang Gan},
      year={2023},
      eprint={2305.03047},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

@misc{sun2023salmon,
      title={SALMON: Self-Alignment with Principle-Following Reward Models},
      author={Zhiqing Sun and Yikang Shen and Hongxin Zhang and Qinhong Zhou and Zhenfang Chen and David Cox and Yiming Yang and Chuang Gan},
      year={2023},
      eprint={2310.05910},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```

Acknowledgements

We thank the Meta LLaMA team, the Stanford Alpaca team, the Vicuna team, and the Alpaca-LoRA, QLoRA, Hugging Face PEFT, and AlpacaFarm teams for their open-source efforts in democratizing large language models.
