Danny search and rescue #14
Conversation
… one class for both, ActionLoc
…t_poignancy run_gpt_prompt_summarize_conversation
Takes in "Answer: {name}" and reduces to just name. | ||
Also hanldes an input of {name} | ||
''' | ||
name: str |
Maybe we can call this "area" rather than "name" to give the LLM a bit more of a hint?
I agree, that could work (not sure if I should change that or if you will)
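For reference, a minimal sketch of that rename (assuming ActionLoc stays a single-field Pydantic model; the prompt text would need the matching key):

class ActionLoc(BaseModel):
    '''
    Takes in "Answer: {area}" and reduces to just the area.
    Also handles an input of {area}
    '''
    area: str  # renamed from "name" to hint the LLM toward a location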
output = safe_generate_response(prompt, gpt_param, 5, fail_safe,
                                __func_validate, __func_clean_up, verbose=False)
#output = safe_generate_response(prompt, gpt_param, 5, fail_safe, __func_validate, __func_clean_up, verbose=False)
output = generate_structured_response(prompt, gpt_param, ActionLoc, 5, fail_safe, __func_validate, __func_clean_up, verbose=False)
Good class reuse! 👍
@@ -991,7 +1012,16 @@ def get_fail_safe(persona):

return output, [output, prompt, gpt_param, prompt_input, fail_safe]

class ObjDesc(BaseModel):
    desc: str
Maybe expand this to "description"?
I agree
@field_validator("desc") | ||
def max_token_limit(cls, value): | ||
# Split text by whitespace to count words (tokens) | ||
tokens = value.split() | ||
if len(tokens) > 100: | ||
raise ValueError("Text exceeds the maximum limit of 100 tokens.") | ||
return value |
I don't think we need a validator here for token limit. We already tell the LLM to give us a max of 100 down below, so we shouldn't be getting more than 100 anyway
Yeah we can take that out
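A sketch of how the model might look with both suggestions applied (expanded field name, no validator; the 100-token cap is left to the prompt):

class ObjDesc(BaseModel):
    # The prompt already asks the LLM for at most 100 tokens,
    # so no field_validator is needed here.
    description: str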
output = ChatGPT_safe_generate_response(prompt, example_output, special_instruction, 3, fail_safe,
                                        __chat_func_validate, __chat_func_clean_up, True)
#output = ChatGPT_safe_generate_response(prompt, example_output, special_instruction, 3, fail_safe, __chat_func_validate, __chat_func_clean_up, True)
output = generate_structured_response(prompt, gpt_param, ObjDesc, 5, fail_safe, __chat_func_validate, __chat_func_clean_up, verbose=False)
We shouldn't replace ChatGPT_safe_generate_response with generate_structured_response, as they have slightly different arguments. Instead we should use the Structured Outputs ChatGPT_safe_generate_response conversion that @LemmyTron is working on, so this can wait for that PR to be merged
Ok
class DecideToReact(BaseModel):
    '''
    Should be a decision 1, 2, or 3
    '''
    decision: int
When there's only a small set of specific values that are valid, you can use an Enum (short for enumerated) to constrain the model field: https://docs.pydantic.dev/2.0/usage/types/enums/
Something like this might work:
from enum import IntEnum
from pydantic import BaseModel

class DecideToReactEnum(IntEnum):
    one = 1
    two = 2
    three = 3

class DecideToReact(BaseModel):
    decision: DecideToReactEnum
Ahh got it
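Presumably the call site would then mirror the ActionLoc change above (a sketch, assuming the same generate_structured_response signature and helper names):

output = generate_structured_response(prompt, gpt_param, DecideToReact, 5, fail_safe,
                                      __func_validate, __func_clean_up, verbose=False)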
def run_gpt_generate_safety_score(persona, comment, test_input=None, verbose=False):
    class SafetyScore(BaseModel):
        # safety score should range 1-10
        output: int
Could call this "score" or "safety_score"?
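A sketch of that rename; Pydantic's Field constraints could also encode the 1-10 range directly (assuming the clean-up function is updated to read the new attribute):

from pydantic import BaseModel, Field

class SafetyScore(BaseModel):
    # Renamed from "output"; ge/le enforce the 1-10 range at parse time
    safety_score: int = Field(ge=1, le=10)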
if isinstance(gpt_response.output, int) and 1 <= gpt_response.output <= 10:
    return gpt_response.output
raise ValueError("Output is not a valid integer between 1 and 10")
Good validation and error logic!
output = ChatGPT_safe_generate_response_OLD(prompt, 3, fail_safe,
                                            __chat_func_validate, __chat_func_clean_up, verbose)
#output = ChatGPT_safe_generate_response_OLD(prompt, 3, fail_safe, __chat_func_validate, __chat_func_clean_up, verbose)
output = generate_structured_response(
    prompt,
    gpt_param,
    SafetyScore,
    3,
    fail_safe,
    __chat_func_validate,
    __chat_func_clean_up
)
Let's also wait on this one, since it was using ChatGPT_safe_generate_response_OLD.
Just look at the differences for my functions in run_gpt_prompt as I implement structured output