feat: add stopword checker + iterable generate function #106

Open

Nintorac wants to merge 1 commit into main from stopword-checker
Conversation

@Nintorac commented Apr 29, 2023

This PR adds two sets of functionality.

  1. It adds an iterable generate function that is useful for streaming use cases where the upstream caller may want to stop execution before the model has finished generating.
  2. It adds a stop_words argument to the pipeline args. This mimics the OpenAI completions endpoint and is useful for halting execution when certain strings are produced, e.g. bob: could be used to yield control when the model outputs bob:, which would indicate it is the user's turn to speak.

Let me know what you think :)

e.g. usage:

args = PIPELINE_ARGS(temperature=1e-1, top_p=1e-1,
                     stop_words=['bob:'])
instr = """
bob: hello, how are you today?

alice: I'm fine, thanks.

bob: that's good. Write me a function in python to calculate pi please

alice:"""

for i in pipeline.igenerate(instr, args=args):
    print(i, end='', flush=True)
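
For context on the mechanics, the stop-word check inside a streaming loop can be sketched roughly as below. This is illustrative only and not the code in this PR; next_chunk is a placeholder for one decoding step of the model, not a real ChatRWKV call.

from typing import Callable, Iterable, List

def igenerate_sketch(prompt: str,
                     stop_words: List[str],
                     next_chunk: Callable[[str], str],
                     max_chunks: int = 256) -> Iterable[str]:
    # Illustrative sketch: yield text piece by piece and hand control back
    # to the caller as soon as any stop word shows up in the output.
    generated = ""   # everything produced so far
    emitted = 0      # number of characters already yielded to the caller
    for _ in range(max_chunks):
        chunk = next_chunk(prompt + generated)
        generated += chunk
        hits = [generated.find(w) for w in stop_words if w in generated]
        if hits:
            cut = min(hits)             # earliest stop word in the output
            if cut > emitted:           # flush the text before it
                yield generated[emitted:cut]
            return                      # caller regains control here
        # A real implementation would also hold back text that could be the
        # start of a stop word instead of yielding it immediately.
        yield chunk
        emitted = len(generated)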

@BlinkDL (Owner) commented May 3, 2023

Nice :) Actually a better method is to "recover" the state when you see Bob: / Alice:, as in #87

I use \n\n for now, because I replace all \n\n in ChatGPT generations by \n, so whenever you see \n\n it must be endoftext
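
(For anyone reading along, the state-recovery idea could be sketched roughly as below. This is not the approach in #87, just an illustration; step(state) -> (token, new_state) and decode(token) -> str are stand-ins, not the actual ChatRWKV API.)

import copy

def generate_with_rollback(step, decode, first_state,
                           stop_strs=("Bob:", "Alice:"), max_tokens=200):
    # Remember the state held before each token; when a stop string shows up,
    # return the state from just before it started, so the stop string never
    # enters the model state. (Deep-copying every state is wasteful and is
    # done here only for clarity.)
    states = [copy.deepcopy(first_state)]   # states[i] = state before token i
    pieces = []                             # decoded text of each token
    for _ in range(max_tokens):
        tok, new_state = step(states[-1])
        pieces.append(decode(tok))
        states.append(copy.deepcopy(new_state))
        text = "".join(pieces)
        for s in stop_strs:
            cut = text.find(s)
            if cut == -1:
                continue
            # Find the token whose text contains the start of the stop string
            # and roll back to the state saved just before that token.
            seen = 0
            for j, piece in enumerate(pieces):
                if seen + len(piece) > cut:
                    return text[:cut], states[j]
                seen += len(piece)
    return "".join(pieces), states[-1]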

@Nintorac (Author) commented May 9, 2023

Hmm, after reviewing I can't really grasp what that PR is doing; the out state doesn't seem to get used (e.g. here), and I don't think it's solving the same problem. It also looks like it uses some global state via load_all_stat, which is better for me not to use as I would like to run this in a production (in a very loose sense) environment.

The target use case here supports arbitrary custom stop words; this can be used in langchain to stop the LLM when it needs to, e.g.

Task: Find a list of cheeses
Action: search google for cheese
Observation:

in such a chain you might put Observation: as a stop word in order to stop there and inject your own observation based on the search results.
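
With the API proposed in this PR, that would look something like the sketch below (the prompt text is made up for illustration; only PIPELINE_ARGS and pipeline.igenerate come from this PR).

args = PIPELINE_ARGS(temperature=1e-1, top_p=1e-1,
                     stop_words=['Observation:'])

prompt = """Task: Find a list of cheeses
Action: search google for cheese
"""

partial = ""
for chunk in pipeline.igenerate(prompt, args=args):
    partial += chunk

# Generation has halted at the stop word; the caller can now run the search,
# append the real observation to the prompt, and call igenerate again.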

I replace all \n\n in ChatGPT generations by \n

Yes! Are you thinking of including an <end of turn> token or something? Is it even possible to fine-tune in new tokens?

@Nintorac force-pushed the stopword-checker branch from 6a58e66 to f905496 on May 12, 2023 03:12
@cahya-wirawan commented

I use \n\n for now, because I replace all \n\n in ChatGPT generations by \n, so whenever you see \n\n it must be endoftext

Sometimes the text generation stops even though it hasn't finished the text, because it produces \n\n. For example, if I ask the bot to write code, it starts with the sentence “here is the python code to write X algorithm” and then “\n\n”. It would actually write the code, but ChatRWKV stops the generation because it sees “\n\n”.

@Nintorac (Author) commented

Oh, actually I did notice something, based on this section of the HF article; do you use str.replace('\n\n', '\n') or re.sub(r'\n+', '\n', ...)? I don't think the former actually removes all double newlines. That might explain the behaviour @cahya-wirawan is seeing, since you might not expect that if the model was not trained on it.
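
A quick illustration of the difference (plain Python, nothing ChatRWKV-specific):

import re

s = "a\n\n\nb"                           # a run of three newlines

# str.replace makes a single left-to-right pass, so three newlines collapse
# to two and the result still contains "\n\n".
print(repr(s.replace('\n\n', '\n')))     # 'a\n\nb'

# re.sub collapses any run of newlines down to a single one.
print(repr(re.sub(r'\n+', '\n', s)))     # 'a\nb'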

Big GZ on the HF release by the way!
