Notebooks related to RWKV, a Parallelizable RNN
A demo of ChatRWKV
A demo of using the LangChain wrapper with RWKV
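
As a rough illustration of the LangChain wrapper, here is a minimal loading sketch. It assumes LangChain's `RWKV` LLM class; the checkpoint and tokenizer paths are placeholders, and the import location may differ between LangChain versions.

```python
# Minimal sketch of LangChain's RWKV wrapper (paths are placeholders).
from langchain.llms import RWKV  # import path may vary by LangChain version

model = RWKV(
    model="./models/RWKV-4-Pile-430M.pth",   # any RWKV .pth checkpoint
    tokens_path="./20B_tokenizer.json",      # tokenizer file shipped with RWKV
    strategy="cpu fp32",                     # e.g. "cuda fp16" on a GPU
)

print(model("Q: What is an RNN?\n\nA:"))
```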
The goal of these notebooks is to simplify fine-tuning the RWKV models trained by BlinkDL on the Pile
With small changes, you should be able to train or fine-tune any RWKV model
Only necessary for fine-tuning legacy models
Edits the training files in an attempt to stay up to date with changes in the RWKV repo; this may cause unexpected instability
The most stable option; uses the latest command-line interface for training
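
For context on what the command-line interface looks like, the sketch below launches RWKV-LM's `train.py` from Python. The flag names follow BlinkDL's RWKV-v4neo trainer, but every path and hyperparameter value here is a placeholder you would adjust for your own model and data.

```python
# Illustrative sketch of invoking the RWKV-v4neo trainer.
# Flag names follow RWKV-LM's train.py; all paths/values are placeholders.
import subprocess

subprocess.run([
    "python", "train.py",
    "--load_model", "./RWKV-4-Pile-430M.pth",   # base checkpoint to fine-tune
    "--proj_dir", "./out",                      # where new checkpoints are written
    "--data_file", "./train.npy",               # tokenized training data
    "--data_type", "numpy",
    "--n_layer", "24", "--n_embd", "1024",      # must match the base model
    "--ctx_len", "1024", "--micro_bsz", "1",
    "--epoch_steps", "1000", "--epoch_count", "10",
    "--lr_init", "1e-5", "--lr_final", "1e-5",
    "--accelerator", "gpu", "--devices", "1",
    "--precision", "bf16", "--strategy", "deepspeed_stage_2",
], check=True)
```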
Uses the LoRA fork of RWKV by Blealtan
Requires less VRAM and is recommended for most fine-tuning tasks
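
As a reminder of why LoRA reduces VRAM use, here is a generic conceptual sketch (plain PyTorch, not the Blealtan fork's actual code): the pretrained weight is frozen, and only two small low-rank matrices A and B are trained, with the effective weight becoming W + (alpha/r) * B @ A.

```python
# Conceptual LoRA sketch (generic PyTorch, not the RWKV-LM-LoRA fork's implementation).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                      # frozen pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))    # starts at zero: no initial change
        self.scale = alpha / r

    def forward(self, x):
        # Base projection plus the low-rank update; only A and B receive gradients.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale
```

Because only A and B are updated, the optimizer state for the full weight matrices is never allocated, which is where most of the VRAM savings come from.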