Feature/langchain integration for agent #9
base: main
Conversation
Co-authored-by: andresdiverGPT <[email protected]>
Please see inline comments for specific feedback and suggested changes.
functions = [format_tool_to_openai_function(tool) for tool in tools]
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
model = ChatOpenAI(temperature=0, model="gpt-4-1106-preview").bind(functions=functions)
It seems you are initializing two separate, identical instances of ChatOpenAI. Consider creating a single instance and reusing it across your code to save resources and avoid redundancy.
Suggested change:
- model = ChatOpenAI(temperature=0, model="gpt-4-1106-preview").bind(functions=functions)
+ llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
+ model = llm.bind(functions=functions)
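The general pattern behind this suggestion, constructing one expensive client and deriving bound variants from it, can be sketched without LangChain. The `Client` class below is a hypothetical stand-in for `ChatOpenAI`, used only to show that reuse halves the number of constructions:

```python
class Client:
    """Hypothetical stand-in for an expensive-to-construct LLM client."""
    instances = 0

    def __init__(self, model):
        Client.instances += 1
        self.model = model

    def bind(self, **kwargs):
        # Return a lightweight view sharing the underlying client,
        # mirroring the spirit of ChatOpenAI.bind(functions=...).
        bound = object.__new__(Client)  # no __init__, no new construction
        bound.model = self.model
        bound.extra = kwargs
        return bound

# Redundant: two identical clients are constructed.
a = Client("gpt-4-1106-preview")
b = Client("gpt-4-1106-preview").bind(functions=["tool"])
print(Client.instances)  # 2

# Reuse: one client, with the bound variant derived from it.
Client.instances = 0
llm = Client("gpt-4-1106-preview")
model = llm.bind(functions=["tool"])
print(Client.instances)  # 1
```

The counter makes the cost visible: the redundant form constructs twice, the reused form once.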
functions = [format_tool_to_openai_function(tool) for tool in tools]
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
model = ChatOpenAI(temperature=0, model="gpt-4-1106-preview").bind(functions=functions)
prompt = ChatPromptTemplate.from_messages([
The variable system_message is used but not defined in this scope, which would result in a NameError. Make sure that system_message is defined before using it.
Suggested change:
  functions = [format_tool_to_openai_function(tool) for tool in tools]
  llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
  model = ChatOpenAI(temperature=0, model="gpt-4-1106-preview").bind(functions=functions)
+ # Assuming system_message is defined elsewhere and imported correctly
+ system_message = "Your system message here"
  prompt = ChatPromptTemplate.from_messages([
""" | ||
|
||
functions = [format_tool_to_openai_function(tool) for tool in tools] | ||
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview") |
Be cautious with hardcoding model strings like "gpt-4-1106-preview". It is recommended to use a configuration file or environment variables for such parameters to increase the flexibility and maintainability of your code.
Suggested change:
- llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
+ # Example using an environment variable (requires `import os`)
+ llm = ChatOpenAI(temperature=0, model=os.environ.get("OPENAI_MODEL", "gpt-4-1106-preview"))
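Resolving a model name from the environment with a fallback default is plain stdlib; a minimal sketch of the pattern (the OPENAI_MODEL variable name is illustrative, not an established convention):

```python
import os

def get_model_name(default="gpt-4-1106-preview"):
    """Resolve the model name from the environment, falling back to a default.

    OPENAI_MODEL is a hypothetical variable name chosen for this example.
    """
    return os.environ.get("OPENAI_MODEL", default)

# With the variable unset, the default is returned.
os.environ.pop("OPENAI_MODEL", None)
print(get_model_name())  # gpt-4-1106-preview

# With the variable set, it takes precedence over the default.
os.environ["OPENAI_MODEL"] = "gpt-4"
print(get_model_name())  # gpt-4
```

The same lookup can feed the `model=` argument anywhere a model string is currently hardcoded.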
self.chat_agent = AgentExecutor(agent=chain, memory=memory, tools=tools, verbose=True)
In the instantiation of AgentExecutor, the verbose flag is set to True. Ensure that this does not leak sensitive information to the logs in a production environment.
Suggested change:
- self.chat_agent = AgentExecutor(agent=chain, memory=memory, tools=tools, verbose=True)
+ self.chat_agent = AgentExecutor(agent=chain, memory=memory, tools=tools, verbose=False)
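Rather than hardcoding either value, the flag can be derived from the deployment environment so verbose output is only enabled outside production. A sketch of that idea (the APP_ENV variable name is an assumption, not a standard):

```python
import os

def is_verbose():
    """Enable verbose output only outside production.

    APP_ENV is an illustrative variable name; any deployment flag works.
    Defaulting to "production" means verbosity is off unless opted into.
    """
    return os.environ.get("APP_ENV", "production") != "production"

os.environ["APP_ENV"] = "production"
print(is_verbose())  # False

os.environ["APP_ENV"] = "development"
print(is_verbose())  # True
```

The AgentExecutor call could then pass verbose=is_verbose() instead of a literal.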
def create_query_engine(self, agents):
    """
    Create a query engine using the specified agents.

    Args:
        agents (dict): A dictionary of agents to be used in the query engine.
    """
    nodes = [
        IndexNode(text=f"Law Information about {file}", index_id=file)
        for file in self.docs.keys()
    ]

    vector_index = VectorStoreIndex(nodes)
    vector_retriever = vector_index.as_retriever(similarity_top_k=1)
    recursive_retriever = RecursiveRetriever(
        "vector", retriever_dict={"vector": vector_retriever}, query_engine_dict=agents, verbose=True
    )

    response_synthesizer = get_response_synthesizer(response_mode="compact")
    self.query_engine = RetrieverQueryEngine.from_args(
        recursive_retriever, response_synthesizer=response_synthesizer, service_context=self.service_context
    )
This method create_query_engine relies on a required parameter agents that the caller does not pass, leading to a potential TypeError at call time. Ensure that all necessary parameters are being passed and used correctly.
Suggested change:
- def create_query_engine(self, agents):
+ def create_query_engine(self, agents=None):
+     if agents is None:
+         agents = self.agents
      ...
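The agents=None fallback in the suggestion is the standard Python idiom for giving an optional parameter an instance-level default; a minimal standalone sketch with a hypothetical Engine class:

```python
class Engine:
    def __init__(self):
        # Instance-level default used when the caller passes nothing.
        self.agents = {"default": "agent"}

    def create_query_engine(self, agents=None):
        # None marks "use the instance's agents"; a mutable default
        # like agents={} would be shared across calls, so None is safer.
        if agents is None:
            agents = self.agents
        return agents

e = Engine()
print(e.create_query_engine())          # {'default': 'agent'}
print(e.create_query_engine({"a": 1}))  # {'a': 1}
```

Callers that already hold an agents dict keep working unchanged, while bare calls fall back to self.agents.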
    index_path (str): The path to the index directory.
"""
self.llm = OpenAI(temperature=0, model="gpt-4-1106-preview")
self.service_context = ServiceContext.from_defaults(llm=self.llm)
Creating a ServiceContext could be a heavyweight operation. Please verify whether the context needs to be initiated with each Retriever instance, or whether a single instance can be shared amongst them, improving efficiency.
Suggested change:
- self.service_context = ServiceContext.from_defaults(llm=self.llm)
+ # Assuming that ServiceContext can be created once and reused
+ self.service_context = shared_service_context
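One way to share a single heavyweight object across many instances is a lazily created module-level singleton; a sketch of the idea, with HeavyContext as a hypothetical stand-in for ServiceContext:

```python
import functools

class HeavyContext:
    """Stand-in for an expensive-to-build context such as ServiceContext."""
    built = 0

    def __init__(self):
        HeavyContext.built += 1

@functools.lru_cache(maxsize=1)
def shared_context():
    # First call builds the context; subsequent calls return the cached one.
    return HeavyContext()

class Retriever:
    def __init__(self):
        self.service_context = shared_context()

retrievers = [Retriever() for _ in range(5)]
print(HeavyContext.built)  # 1
print(all(r.service_context is retrievers[0].service_context for r in retrievers))  # True
```

Five retrievers, one construction: the cache guarantees every instance sees the same object.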
    )
]

function_llm = OpenAI(model="gpt-4-1106-preview")
The instantiation of OpenAI here seems to be redundant with the one initialized in __init__ as self.llm. Consider reusing self.llm instead of creating a new instance.
Suggested change:
- function_llm = OpenAI(model="gpt-4-1106-preview")
+ self.agents[file] = OpenAIAgent.from_tools(query_engine_tools, llm=self.llm, verbose=True)
@@ -0,0 +1 @@
"""PLACEHOLDER FOR THE SYSTEM MESSAGE"""
The placeholder here must be replaced with actual system message content or configuration.
Ensure this placeholder has been appropriately updated before merging.
"""PLACEHOLDER FOR THE SYSTEM MESSAGE""" | |
SYSTEM_MESSAGE_CONTENT = "Actual system message content goes here." |
No description provided.