Generate and edit audio with a simple sentence.
This repo currently supports:
- text-guided audio editing: edit the content of given audio clip(s) conditioned on text input
- text-guided audio generation: create an audio clip given text input
- audio scriptwriting: prompt WavCraft with a scene setting and let the model write the script and create the sound for you
- watermark detection: check whether an audio file was synthesized by WavCraft
- 2024-05-06: Added support for open LLMs (the Mistral AI family).
- 2024-03-20: Added watermarking to the system.
Set up the environment, export your API keys, and start the backend services:

```bash
source scripts/setup_envs.sh
export OPENAI_KEY=YOUR_OPENAI_KEY
export HF_KEY=YOUR_HF_KEY
source scripts/start_services.sh
```
Run WavCraft on an audio file with a text instruction:

```bash
python3 WavCraft.py basic -f \
--input-wav assets/duck_quacking_in_water.wav \
--input-text "Add dog barking."
```
Alternatively, chat with WavCraft interactively:

```bash
python3 WavCraft-chat.py basic -f -c
```

```
[New session is created]
Add audio file(s) (each file starts with '+'): +assets/duck_quacking_in_water.wav
Enter your instruction (input `EXIT` to exit the process): "Add dog barking"
```
Check whether an audio file was synthesized by WavCraft:

```bash
python3 check_watermark.py --wav-path /path/to/audio/file
```
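For programmatic checks, a minimal sketch using the WavMark library (which the watermarking builds on) might look like the following; the exact logic inside `check_watermark.py` may differ, and the file path is a placeholder.

```python
# Hedged sketch: decode a WavMark payload from an audio file.
# check_watermark.py may implement this differently.
import soundfile as sf
import wavmark

model = wavmark.load_model()                 # pretrained WavMark model
signal, sr = sf.read("/path/to/audio/file")  # WavMark operates on 16 kHz mono audio
payload, info = wavmark.decode_watermark(model, signal)

# A successfully decoded 16-bit payload indicates the file carries a watermark;
# how WavCraft maps payloads to "synthesized by WavCraft" is up to the tool.
print("decoded payload:", payload)
```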
To use an open LLM instead of the default OpenAI model, pass its name via `--model`:

```bash
python3 WavCraft.py basic -f \
--input-wav assets/duck_quacking_in_water.wav \
--input-text "Add dog barking." \
--model 'mistralai/Mistral-7B-Instruct-v0.2'
```
WavCraft is an LLM-driven agent for audio content creation and editing. It uses an LLM to connect various expert audio models and DSP functions. An overview of the WavCraft architecture is shown below.
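To make that orchestration concrete, here is a deliberately simplified sketch of the pattern; the tool names, prompt format, and the `llm.complete`/`execute` helpers are hypothetical illustrations, not WavCraft's actual internals.

```python
# Illustrative only: the LLM-as-orchestrator pattern behind WavCraft.
# Tool names and the prompt format below are hypothetical.
TOOLS = """
TTA(text) -> wav          # text-to-audio generation (e.g. an AudioCraft model)
SEP(wav, text) -> wav     # language-queried source separation (e.g. AudioSep)
SR(wav) -> wav            # audio super-resolution (e.g. AudioSR)
MIX(wav_a, wav_b) -> wav  # DSP mixing
"""

def run_instruction(llm, instruction: str, input_wav: str) -> str:
    # 1. The LLM decomposes the user's instruction into a small program
    #    over the expert tools above.
    prompt = (
        f"Available audio tools:\n{TOOLS}\n"
        f"Input audio: {input_wav}\n"
        f"Instruction: {instruction}\n"
        "Respond with Python code that uses only these tools."
    )
    code = llm.complete(prompt)  # hypothetical LLM client
    # 2. The agent executes the generated program; each tool call invokes
    #    an expert model or DSP function and yields a new audio file.
    return execute(code)         # hypothetical sandboxed executor
```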
This repository is for research purposes only. We are not responsible for audio generated or edited using semantics created by this model. In addition, anyone using WavCraft must NOT disable the watermarking techniques in any way.
We are grateful to WavJourney, AudioCraft, AudioSep, AudioSR, AudioLDM, and WavMark for their excellent open-source code.
If you find our work helpful, please cite it:
```bibtex
@misc{liang2024wavcraft,
      title={WavCraft: Audio Editing and Generation with Large Language Models},
      author={Jinhua Liang and Huan Zhang and Haohe Liu and Yin Cao and Qiuqiang Kong and Xubo Liu and Wenwu Wang and Mark D. Plumbley and Huy Phan and Emmanouil Benetos},
      year={2024},
      eprint={2403.09527},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```