
Danny search and rescue #14

Merged (25 commits) Nov 25, 2024

Conversation

dfinch8 (Collaborator) commented Oct 27, 2024

Just look at the differences for my functions in run_gpt_prompt as I implement structured output.

'''
Takes in "Answer: {name}" and reduces to just name.
Also handles an input of {name}.
'''
name: str
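As a hedged illustration of the clean-up behavior described above (the function name and regex here are hypothetical, not the PR's actual helper), a minimal sketch that accepts either an "Answer: {name}" response or a bare name:

```python
import re

def clean_up_answer(raw: str) -> str:
    """Hypothetical sketch: reduce 'Answer: {name}' or a bare '{name}' to just the name."""
    raw = raw.strip()
    # Strip an optional 'Answer:' prefix, case-insensitively
    match = re.match(r"(?i)^answer:\s*(.*)$", raw)
    return match.group(1).strip() if match else raw

print(clean_up_answer("Answer: Hobbs Cafe"))  # Hobbs Cafe
print(clean_up_answer("Hobbs Cafe"))          # Hobbs Cafe
```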
Collaborator:

Maybe we can call this "area" rather than "name" to give the LLM a bit more of a hint?

Collaborator Author:

I agree, that could work (not sure if I should change that or you).

output = safe_generate_response(prompt, gpt_param, 5, fail_safe,
                                __func_validate, __func_clean_up, verbose=False)
#output = safe_generate_response(prompt, gpt_param, 5, fail_safe, __func_validate, __func_clean_up, verbose=False)
output = generate_structured_response(prompt, gpt_param, ActionLoc, 5, fail_safe, __func_validate, __func_clean_up, verbose=False)
Collaborator:

Good class reuse! 👍

@@ -991,7 +1012,16 @@ def get_fail_safe(persona):

return output, [output, prompt, gpt_param, prompt_input, fail_safe]

class ObjDesc(BaseModel):
    desc: str
Collaborator:

Maybe expand this to "description"?

Collaborator Author:

I agree

Comment on lines 1018 to 1024
@field_validator("desc")
def max_token_limit(cls, value):
    # Split text by whitespace to count words (tokens)
    tokens = value.split()
    if len(tokens) > 100:
        raise ValueError("Text exceeds the maximum limit of 100 tokens.")
    return value
Collaborator:

I don't think we need a validator here for the token limit. We already tell the LLM to give us a max of 100 below, so we shouldn't be getting more than 100 anyway.

Collaborator Author:

Yeah we can take that out

Comment on lines 1042 to 1075
output = ChatGPT_safe_generate_response(prompt, example_output, special_instruction, 3, fail_safe,
                                        __chat_func_validate, __chat_func_clean_up, True)
#output = ChatGPT_safe_generate_response(prompt, example_output, special_instruction, 3, fail_safe, __chat_func_validate, __chat_func_clean_up, True)
output = generate_structured_response(prompt, gpt_param, ObjDesc, 5, fail_safe, __chat_func_validate, __chat_func_clean_up, verbose=False)
Collaborator:

We shouldn't replace ChatGPT_safe_generate_response with generate_structured_response, as they have slightly different arguments. Instead we should use the Structured Outputs ChatGPT_safe_generate_response conversion that @LemmyTron is working on, so this can wait for that PR to be merged

Collaborator Author:

Ok

Comment on lines 1406 to 1411
class DecideToReact(BaseModel):
    '''
    Should be a decision: 1, 2, or 3
    '''
    decision: int

Collaborator:

When there's only a small set of specific values that are valid, you can use an Enum (short for enumerated) to constrain the model field: https://docs.pydantic.dev/2.0/usage/types/enums/

Something like this might work:

from enum import IntEnum

class DecideToReactEnum(IntEnum):
    one = 1
    two = 2
    three = 3

class DecideToReact(BaseModel):
    decision: DecideToReactEnum
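To see the constraint mechanism in isolation (a stdlib-only sketch, no Pydantic required), an IntEnum rejects out-of-range values at construction time, which is the same guarantee Pydantic enforces when a field is typed with the enum:

```python
from enum import IntEnum

class DecideToReactEnum(IntEnum):
    one = 1
    two = 2
    three = 3

# Valid values map to enum members and still compare equal to plain ints,
# so downstream code that expects 1/2/3 keeps working.
assert DecideToReactEnum(2) == 2
assert DecideToReactEnum(2) is DecideToReactEnum.two

# Anything outside {1, 2, 3} is rejected with a ValueError
try:
    DecideToReactEnum(4)
    raise AssertionError("expected a ValueError")
except ValueError:
    pass
```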

Collaborator Author:

Ahh got it

def run_gpt_generate_safety_score(persona, comment, test_input=None, verbose=False):
    class SafetyScore(BaseModel):
        # safety score should range 1-10
        output: int
Collaborator:

Could call this "score" or "safety_score"?

Comment on lines 2752 to 2754
if isinstance(gpt_response.output, int) and 1 <= gpt_response.output <= 10:
    return gpt_response.output
raise ValueError("Output is not a valid integer between 1 and 10")
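As an alternative sketch (assuming Pydantic is available, as elsewhere in this PR, and reusing the model/field names from the diff for illustration), the same 1-10 range could be declared on the model itself with `Field` bounds, making the manual `isinstance`/range check redundant:

```python
from pydantic import BaseModel, Field, ValidationError

class SafetyScore(BaseModel):
    # ge/le bounds make the 1-10 range part of the schema itself,
    # so out-of-range scores never reach the clean-up function.
    output: int = Field(ge=1, le=10)

assert SafetyScore(output=7).output == 7

# Values outside 1-10 raise a ValidationError at parse time
try:
    SafetyScore(output=11)
    raise AssertionError("expected a ValidationError")
except ValidationError:
    pass
```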
Collaborator:

Good validation and error logic!

Comment on lines 2714 to 2794
output = ChatGPT_safe_generate_response_OLD(prompt, 3, fail_safe,
                                            __chat_func_validate, __chat_func_clean_up, verbose)
#output = ChatGPT_safe_generate_response_OLD(prompt, 3, fail_safe, __chat_func_validate, __chat_func_clean_up, verbose)
output = generate_structured_response(
    prompt,
    gpt_param,
    SafetyScore,
    3,
    fail_safe,
    __chat_func_validate,
    __chat_func_clean_up
)
Collaborator:

Let's also wait on this one since it was using ChatGPT_safe_generate_response_OLD

@chowington chowington self-requested a review November 25, 2024 21:16
@chowington chowington merged commit 0dcf80a into dev Nov 25, 2024