Yi Coder 1.5b is potentially a great model for fine-tuning on one's own codebase. It'd be great if you could release some SAMPLE portions of the dataset used for base training & instruction tuning (and perhaps recipes), so that fine-tuning could be done in a compatible format.
I have an app that includes built-in Python scripting with an app-specific API, and would love to add a chatbot tuned on that API, so that users could get small scripts written from natural language prompts, or even written and executed, without the user ever needing to see the code.
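For illustration, here is roughly what I imagine a single instruction-tuning pair would look like for my use case. The chat schema is only a guess on my part (hence this request for a compatible format), and `my_app` is a stand-in for my app's actual scripting module:

```python
# Hypothetical SFT pair for my use case. "my_app" is a placeholder for my
# app's real scripting module, and the messages schema is a guess at what
# a format compatible with your instruction tuning might look like.
example_pair = {
    "messages": [
        {
            "role": "user",
            "content": "Rename every layer whose name starts with 'tmp_' "
                       "so it starts with 'draft_'.",
        },
        {
            "role": "assistant",
            "content": (
                "import my_app\n"
                "for layer in my_app.document.layers:\n"
                "    if layer.name.startswith('tmp_'):\n"
                "        layer.name = 'draft_' + layer.name[len('tmp_'):]"
            ),
        },
    ]
}
```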
Hi Twardoch,
Unfortunately we will not be releasing our data, but I'd be happy to offer some pointers for your use case.
For pretraining, check https://huggingface.co/datasets/bigcode/starcoderdata and https://huggingface.co/datasets/bigcode/the-stack-dedup.
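For example, a minimal sketch of streaming a sample of the Python subset with the Hugging Face `datasets` library (the `data_dir` layout and the `content` field follow the dataset cards as I understand them; verify against the cards before relying on this):

```python
from datasets import load_dataset

# Stream the Python subset of starcoderdata instead of downloading the
# full dataset; data_dir selects one language subdirectory on the Hub.
ds = load_dataset(
    "bigcode/starcoderdata",
    data_dir="python",
    split="train",
    streaming=True,
)

# Peek at a few records; each record's "content" field holds raw source code.
for sample in ds.take(3):
    print(sample["content"][:200])
```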
For code SFT, check Code Feedback / CodeAlpaca / Evol-Instruct-code.
For a library-specific SDK, start with library-specific SFT pairs mixed with general-purpose SFT data. Continued pretraining is very difficult to do correctly without performance loss, and Python code is not especially hard for the model to generalize to, so SFT alone should get you far. Just remember to keep your SFT pairs as diverse (both semantically and task-wise) and as high-quality as possible.
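To make the mixing step concrete, here is a minimal sketch with the Hugging Face `datasets` library. The CodeAlpaca dataset ID, the local file name, and the 70/30 ratio are illustrative assumptions, not a tuned recipe:

```python
from datasets import load_dataset, interleave_datasets

# General-purpose code SFT data; the dataset ID is an illustrative choice.
general = load_dataset("sahil2801/CodeAlpaca-20k", split="train")

# Your library-specific pairs, e.g. a local JSONL file of
# {"prompt": ..., "response": ...} records (file name is a placeholder).
library = load_dataset("json", data_files="my_api_sft.jsonl", split="train")

def to_pair(ex):
    # Normalize CodeAlpaca's instruction/input/output fields to the shared
    # prompt/response schema so the two datasets can be interleaved.
    prompt = ex["instruction"] + ("\n" + ex["input"] if ex["input"] else "")
    return {"prompt": prompt, "response": ex["output"]}

general = general.map(to_pair, remove_columns=general.column_names)

# Interleave so library-specific pairs appear alongside general-purpose ones;
# the 70/30 split is a starting point to tune, not a recommendation.
mixed = interleave_datasets([general, library], probabilities=[0.7, 0.3], seed=42)
print(mixed[0])
```

The diversity advice above applies to the library-specific side especially: vary both the phrasing of the prompts and the kinds of tasks they ask for, rather than generating many near-duplicates of one template.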