
Provide the OpenAI prompt parameters to emulate the experience of ChatGPT app #1

Open
nqngo opened this issue Feb 4, 2024 · 1 comment
nqngo commented Feb 4, 2024

ChatGPT is very verbose and provides a fair bit of contextual information in response to the prompt provided.

When /ask is implemented, what parameters (token limits, penalty tuning, etc.) do we need to pass to OpenAI to emulate that experience?

Please investigate and provide the parameters needed.

@nqngo nqngo changed the title from "Provide a the exact GPT3/GPT4 prompt format" to "Provide the exact GPT3/GPT4 prompt format" Feb 8, 2024
@hungdtrn hungdtrn self-assigned this Feb 9, 2024
@keduong33 keduong33 self-assigned this Feb 18, 2024
@AndrewsTrinh AndrewsTrinh self-assigned this Feb 18, 2024
@nqngo nqngo changed the title from "Provide the exact GPT3/GPT4 prompt format" to "Provide the OpenAI prompt parameters to emulate the experience of ChatGPT app" Feb 27, 2024
dacphuc1993 commented Feb 27, 2024

Here is my suggestion for default parameters for the OpenAI API:

frequency_penalty = 0   # penalizes tokens by how often they have already appeared; higher values lower the likelihood of repeated words
presence_penalty = 0    # penalizes tokens that have appeared at all; keeping the default 0 for balance
logprobs = False        # whether to return log probabilities of the output tokens
max_tokens = 1000       # maximum number of tokens the model may generate
n = 1                   # number of responses generated per request
seed = 1000             # for reproducible output; must stay consistent across the codebase
stream = False          # whether to stream the output -> display text as it is generated
temperature = 0.1       # controls creativity; lower values give more deterministic, factual text
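To make the suggestion concrete, here is a minimal sketch of how these defaults could be bundled into a request payload for the Chat Completions endpoint. The model name "gpt-4" and the helper name `build_request` are assumptions for illustration, not part of the proposal above:

```python
# Hypothetical defaults for the OpenAI Chat Completions API,
# taken from the parameter values suggested above.
DEFAULT_PARAMS = {
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "logprobs": False,
    "max_tokens": 1000,
    "n": 1,
    "seed": 1000,
    "stream": False,
    "temperature": 0.1,
}

def build_request(user_prompt: str, model: str = "gpt-4") -> dict:
    """Assemble a payload suitable for client.chat.completions.create(**payload)."""
    return {
        "model": model,  # assumed model name; swap in whatever the bot targets
        "messages": [{"role": "user", "content": user_prompt}],
        **DEFAULT_PARAMS,
    }
```

Keeping the defaults in one dict means the /ask handler only has to supply the prompt, and any later tuning (e.g. raising `temperature`) happens in a single place.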
