
will you release the code making the eval data? #5

Open
menghonghan opened this issue Jun 26, 2024 · 1 comment
Comments

@menghonghan
Awesome work! Will you release the code that creates your eval dataset? It seems quite complicated from the description in the paper. Using this method to generate more data for further instruction-following finetuning could be promising.

@YJiangcm (Owner)

YJiangcm commented Aug 6, 2024

Thanks for your interest in our work! We constructed FollowBench mostly through human annotation. Only the data in the "example constraint" category is generated by code, which we have released at create_example_constraint.py.
