Jonathan dev rebase #15
Conversation
… and rescue scenario
…to jonathan-dev-rebase Merging Connor H's updates on adding structured output and removing duplicate requests when getting a task for each hour of the day.
… and firehouse location. Merge remote-tracking branch 'origin/chowington-search-and-rescue' into jonathan-dev-rebase
… run_gpt_prompt.py using List[str] throughout the file, installed the new requirements.txt, added new gpt-key
created script to test structured output function changes and converted 7 functions in run_gpt_prompt.py to structured output. The functions are the following: run_gpt_prompt_focal_pt, run_gpt_generate_iterative_chat_utt, run_gpt_prompt_insight_and_guidance, run_gpt_prompt_agent_chat, run_gpt_prompt_act_obj_event_triple, run_gpt_prompt_agent_chat_summarize_relationship, run_gpt_prompt_chat_poignancy
…s that need to be handled
convo_parser.py (Outdated)
Hey Jonathan, what's the difference between this file and print_conversations.py? Do we need both of them?
The way we define Structured Output formats is using Pydantic models in run_gpt_prompt.py, as I see you've done below. We don't need these changes anymore, so you can revert them.
```python
def __func_clean_up(gpt_response: IntPoignancy, prompt=""):
    gpt_response = int(gpt_response.strip())
    return gpt_response
```
Since gpt_response is an object of the type IntPoignancy, to get the poignancy int itself, you need to reference it like this: gpt_response.number. It should be guaranteed to be an int, but for sanity checking, you can do a type check on it to make sure. If it's not an int, you can throw an error. There's no need to call strip() or int() on it, since it's not a string.
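A minimal sketch of that fix, assuming IntPoignancy is a Pydantic model whose number field holds the poignancy score (the field name is taken from the comment above):

```python
def __func_clean_up(gpt_response: IntPoignancy, prompt=""):
    # gpt_response is a parsed Pydantic object, not a string, so read
    # the value off the model instead of calling strip()/int()
    if not isinstance(gpt_response.number, int):
        raise TypeError(f"Expected an int poignancy, got {type(gpt_response.number)}")
    return gpt_response.number
```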
```diff
-output = ChatGPT_safe_generate_response(prompt, example_output, special_instruction, 3, fail_safe,
-                                        __chat_func_validate, __chat_func_clean_up, True)
+output = generate_structured_response(
+    prompt,
+    gpt_param,
+    IntPoignancy,
+    3,
+    fail_safe,
+    __func_validate,
+    __func_clean_up
+)
```
I'd rather not change from using ChatGPT_safe_generate_response to generate_structured_response, since the latter function is based on safe_generate_response and has different arguments than the former function. Instead, this can wait until @LemmyTron's PR is merged so we have a Structured Outputs version of ChatGPT_safe_generate_response as well.
```diff
 def run_gpt_prompt_focal_pt(persona, statements, n, test_input=None, verbose=False):
     def create_prompt_input(persona, statements, n, test_input=None):
         prompt_input = [statements, str(n)]
         return prompt_input

-    def __func_clean_up(gpt_response, prompt=""):
+    def __func_clean_up(gpt_response: FocalPt, prompt=""):
         gpt_response = "1) " + gpt_response.strip()
         ret = []
         for i in gpt_response.split("\n"):
```
Same here; this cleanup function should change, since it still calls strip() and does numbered-string parsing on the FocalPt object.
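A minimal sketch of what the updated cleanup might look like, assuming (hypothetically) that FocalPt exposes the focal points as a focal_points: list[str] field:

```python
def __func_clean_up(gpt_response: FocalPt, prompt=""):
    # with Structured Outputs there is no "1) ..." numbered string to
    # parse; the list comes straight off the parsed model
    return gpt_response.focal_points
```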
```python
class PromptAgentChat(BaseModel):
    convo: list[list[str]]
```
I think this model could be more explicit/specific in what it's asking for, although I realize this function is not currently being used in the program flow. This one generates a full conversation in one go, whereas the current method generates one "utterance" from each agent at a time. So it's up to you whether to update this function; one possible shape is sketched below.
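For example, a hypothetical way to make the schema more explicit about what each element means (AgentChatTurn and its field names are illustrative, not from this PR):

```python
from pydantic import BaseModel

class AgentChatTurn(BaseModel):
    speaker: str    # name of the agent speaking this turn
    utterance: str  # what that agent says

class PromptAgentChat(BaseModel):
    convo: list[AgentChatTurn]
```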
```python
class ConversationOutput(BaseModel):
    utterance: str
    did_end: bool
```
This class looks good; I would use "did_conversation_end" just for a bit more clarity for the LLM.
Also, I think it would be good to edit the prompt template for this function, as it suggests a particular JSON format to the LLM itself. We should remove or alter that JSON format in the prompt template to better match this model above. Here's the JSON suggestion in the prompt template:

```
Output format: Output a json of the following format:
{
    "!<INPUT 11>!": "<!<INPUT 12>!'s utterance>",
    "Did the conversation end with !<INPUT 13>!'s utterance?": "<json Boolean>"
}
```
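One possible revision that mirrors the model fields, using the did_conversation_end rename suggested above (the exact wording is a suggestion, not settled in this PR):

```
Output format: Output a json of the following format:
{
    "utterance": "<!<INPUT 12>!'s utterance>",
    "did_conversation_end": <json Boolean>
}
```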
```python
class IterativeChat(BaseModel):
    INPUT0: str
    INPUT1: str
    INPUT2: str
    INPUT3: str
    INPUT4: str
    INPUT5: str
    INPUT6: str
    INPUT7: str
    INPUT8: str
    INPUT9: str
    INPUT10: str
    INPUT11: str
    INPUT12: str
    INPUT13: str
    output: ConversationOutput

    def format_output(self):
        return {
            self.INPUT11: f"{self.INPUT12}'s utterance",
            f"Did the conversation end with {self.INPUT13}'s utterance?": self.output.did_end
        }
```
I'm not entirely sure about the purpose of this class. It seems like we would only need the ConversationOutput class, if I'm reading this code right.
```diff
@@ -2821,7 +2885,7 @@ def create_prompt_input(
         ]
         return prompt_input

-    def __chat_func_clean_up(gpt_response, prompt=""):
+    def __chat_func_clean_up(gpt_response: IterativeChat, prompt=""):
         gpt_response = extract_first_json_dict(gpt_response)

         cleaned_dict = dict()
```
Be sure to update this cleanup function as well
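A minimal sketch, assuming the suggestion above of keeping only ConversationOutput is adopted (the "utterance"/"end" keys of the returned dict are assumed, since the original body is truncated here):

```python
def __chat_func_clean_up(gpt_response: ConversationOutput, prompt=""):
    # the response is already a parsed model, so there is no JSON
    # string to extract; just repackage the fields for the caller
    return {
        "utterance": gpt_response.utterance,
        "end": gpt_response.did_end,
    }
```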
```diff
-output = ChatGPT_safe_generate_response_OLD(
-    prompt, 3, fail_safe, __chat_func_validate, __chat_func_clean_up, verbose
-)
+output = generate_structured_response(
+    prompt,
+    gpt_param,
+    IterativeChat,
+    3,
+    fail_safe,
+    __chat_func_validate,
+    __chat_func_clean_up
+)
```
This is one of the few functions that uses ChatGPT_safe_generate_response_OLD. We should compare that to ChatGPT_safe_generate_response, see how they differ, and create a Structured Outputs version of the OLD function as well.