Instruct mode or system prompt sorely needed! #155
#156 added context/user/bot prompt prefix and postfix options to the config files. These can be configured as a prompt template. (At least, I think so.)
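As a rough sketch of what prefix/postfix settings like these could do when assembling the final prompt (the key names and function below are illustrative assumptions, not the project's actual config schema or code):

```python
# Hypothetical sketch: building a prompt from context/user/bot
# prefix and postfix settings. Key names are illustrative only.
def build_prompt(context, history, user_msg, cfg):
    # Wrap the character context first.
    parts = [cfg["context_prefix"] + context + cfg["context_postfix"]]
    # Wrap each past turn with the matching prefix/postfix pair.
    for speaker, text in history:
        if speaker == "user":
            parts.append(cfg["user_prefix"] + text + cfg["user_postfix"])
        else:
            parts.append(cfg["bot_prefix"] + text + cfg["bot_postfix"])
    # Append the new user message, then leave the bot prefix open
    # for the model to complete.
    parts.append(cfg["user_prefix"] + user_msg + cfg["user_postfix"])
    parts.append(cfg["bot_prefix"])
    return "".join(parts)
```

With an Alpaca-style config this would emit `### Instruction:`/`### Response:` blocks; with a plain chat config it would emit `User: …`/`Char: …` lines.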
Thanks, I will try it out.
Added examples for different templates:
I tested it in the new repo. It doesn't really work as an instruct mode, since the example dialog isn't wrapped in the template, but it does work for a system prompt. I suppose one could manually edit the character card to fit an instruct template too.
Can you give an example? Share a character, a config, and what you want to get. P.S. Manual character editing may be needed; I need to investigate real cases. P.P.S. I tested a few common prompt templates and they work fine for me. If you know other prompt templates, please share them and I'll try them.
I just used Alpaca. The ### Instruction/### Response tags don't wrap the sample dialog, so the model got confused and started outputting nothing. I used EvilGPT off of chub.ai to mess with. I think if I manually wrapped the examples in the instruct template it would work fine. It's not seamless, but at least now it's possible. I gave up and went with the system prompt only, which does 80% of what it needs to. Getting the model out of the "assistant" personality is what I'm after, since a lot of models are at their most censored when using that persona. If you load up good models like Euryale or Airoboros in textgen and compare chat vs chat-instruct with the default character, you can see what I'm talking about. It might be a bigger problem for small stuff like 7B/13B, as they are more template-bound than 70B; the latter just goes with whatever.
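The manual wrapping described above could look something like this (a sketch only; the function and default speaker names are assumptions, and the tag strings follow the common Alpaca convention):

```python
# Sketch: rewrapping a character card's "Char: ..." / "User: ..."
# example dialog in Alpaca-style instruct tags. Illustrative, not
# the project's actual code.
INSTR = "### Instruction:\n"
RESP = "### Response:\n"

def wrap_examples(example_dialog, user_name="User", char_name="Char"):
    blocks = []
    for line in example_dialog.splitlines():
        if line.startswith(user_name + ":"):
            # User turns become Instruction blocks.
            blocks.append(INSTR + line.split(":", 1)[1].strip())
        elif line.startswith(char_name + ":"):
            # Character turns become Response blocks.
            blocks.append(RESP + line.split(":", 1)[1].strip())
    return "\n\n".join(blocks)
```

Running this over the card's example dialog before it is sent would keep the examples consistent with the rest of an Alpaca-formatted prompt.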
Got it. As a workaround, I can advise moving the examples into the context. But I really should think about fixing this... There are too many template variations -_-
I watched the terminal in textgen; the example dialog is sent with the character prompt. It is formatted like the card, i.e. Char: message\n User: message\n. I haven't tried it standalone with exllama or llama.cpp yet; I mostly run this to access the AI when away from home. You are also missing a few of the new repetition penalty params, but it was trivial to add them.
I think I will change the template implementation in the next few updates... Currently the ": " between the character name and the message is hardcoded.
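Making that separator configurable could be as small as the following sketch (the function name and parameters are illustrative assumptions, not the project's actual API):

```python
# Sketch: replacing a hardcoded ": " between speaker name and
# message with configurable separator/postfix strings.
def format_turn(name, message, separator=": ", postfix="\n"):
    return f"{name}{separator}{message}{postfix}"
```

Templates that don't use a name-colon style at all (e.g. tag-based instruct formats) would still need the fuller prefix/postfix approach rather than just a separator swap.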
I am trying to spin this up for fun and to have my AI on Telegram. Some things I notice: I have tried to use both api and normal mode, and I edited it like the user did here: #94 (comment) as well. Perhaps I can also try duplicating the settings in generator_params.json.
Thoughts?
edit: I find that the variable names in the config differ from those in the code. I got preset and character loading to work by renaming them to match how they are written in the code.