Feature/langchain integration for agent #9

Status: Open. Wants to merge 7 commits into main.
Conversation

Inaki-FF

No description provided.

@andresdiverGPT (Collaborator) left a comment


Please see inline comments for specific feedback and suggested changes.


functions = [format_tool_to_openai_function(tool) for tool in tools]
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
model = ChatOpenAI(temperature=0, model="gpt-4-1106-preview").bind(functions=functions)

It looks like you are initializing two identical ChatOpenAI instances.
Consider creating a single instance and reusing it across your code to save resources and avoid redundancy.

Suggested change
model = ChatOpenAI(temperature=0, model="gpt-4-1106-preview").bind(functions=functions)
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
model = llm.bind(functions=functions)

Comment on lines +29 to +32
functions = [format_tool_to_openai_function(tool) for tool in tools]
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
model = ChatOpenAI(temperature=0, model="gpt-4-1106-preview").bind(functions=functions)
prompt = ChatPromptTemplate.from_messages([

The variable system_message is used but not defined in this scope, which would result in a NameError.
Make sure that system_message is defined before using it.

Suggested change
# Assuming system_message is defined elsewhere and imported correctly;
# otherwise define it before building the prompt:
system_message = "Your system message here"
functions = [format_tool_to_openai_function(tool) for tool in tools]
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
model = ChatOpenAI(temperature=0, model="gpt-4-1106-preview").bind(functions=functions)
prompt = ChatPromptTemplate.from_messages([

functions = [format_tool_to_openai_function(tool) for tool in tools]
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")

Be cautious with hardcoding model strings like "gpt-4-1106-preview". It's recommended to use a configuration file or environment variables
for such parameters, to increase the flexibility and maintainability of your code.

Suggested change
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
# Example using an environment variable (requires `import os`)
llm = ChatOpenAI(temperature=0, model=os.environ.get("OPENAI_MODEL", "gpt-4-1106-preview"))
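The environment-variable pattern suggested above can be sketched with the standard library alone; the variable name OPENAI_MODEL and the default value are illustrative assumptions, not part of the PR:

```python
import os

# Hypothetical helper: read the model name from the environment,
# falling back to a known-good default when the variable is unset.
def get_model_name(env_var: str = "OPENAI_MODEL",
                   default: str = "gpt-4-1106-preview") -> str:
    return os.environ.get(env_var, default)

# With the variable unset, the default is returned.
os.environ.pop("OPENAI_MODEL", None)
print(get_model_name())  # gpt-4-1106-preview

# With the variable set, its value takes precedence.
os.environ["OPENAI_MODEL"] = "gpt-4o-mini"
print(get_model_name())  # gpt-4o-mini
```

Centralizing the lookup in one helper also gives a single place to add validation later.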

Comment on lines +49 to +50
self.chat_agent = AgentExecutor(agent=chain, memory=memory, tools=tools, verbose=True)


In the instantiation of AgentExecutor, the verbose flag is set to True. Ensure that this does not leak sensitive
information to the logs in a production environment.

Suggested change
self.chat_agent = AgentExecutor(agent=chain, memory=memory, tools=tools, verbose=True)
self.chat_agent = AgentExecutor(agent=chain, memory=memory, tools=tools, verbose=False)
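Rather than hardcoding the flag either way, one option is to gate verbosity on the deployment environment. A minimal sketch, assuming a hypothetical APP_ENV variable (not part of the PR):

```python
import os

# Hypothetical convention: verbose logging everywhere except production.
def is_verbose(env_var: str = "APP_ENV") -> bool:
    return os.environ.get(env_var, "development") != "production"

os.environ["APP_ENV"] = "production"
print(is_verbose())  # False

os.environ["APP_ENV"] = "development"
print(is_verbose())  # True
```

The result could then be passed as verbose=is_verbose() when constructing the AgentExecutor.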

Comment on lines +70 to +91
def create_query_engine(self, agents):
"""
Create a query engine using the specified agents.

Args:
agents (dict): A dictionary of agents to be used in the query engine.
"""
nodes = [
IndexNode(text=f"Law Information about {file}", index_id=file)
for file in self.docs.keys()
]

vector_index = VectorStoreIndex(nodes)
vector_retriever = vector_index.as_retriever(similarity_top_k=1)
recursive_retriever = RecursiveRetriever(
"vector", retriever_dict={"vector": vector_retriever}, query_engine_dict=agents, verbose=True
)

response_synthesizer = get_response_synthesizer(response_mode="compact")
self.query_engine = RetrieverQueryEngine.from_args(
recursive_retriever, response_synthesizer=response_synthesizer, service_context=self.service_context
)

The method create_query_engine requires an agents argument that is not supplied at the call site,
which would raise a TypeError. Ensure that all necessary parameters are being passed and used correctly, or provide a sensible default.

Suggested change
def create_query_engine(self, agents):
def create_query_engine(self, agents=None):
    if agents is None:
        agents = self.agents
    ...
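The default-argument fallback can be sketched in isolation; the class and attribute names below are illustrative stand-ins, not the PR's actual classes:

```python
class Retriever:
    def __init__(self):
        # Hypothetical attribute holding agents built earlier in the pipeline.
        self.agents = {"doc_a": "agent_a"}

    def create_query_engine(self, agents=None):
        # Fall back to the instance's own agents when none are passed.
        if agents is None:
            agents = self.agents
        return agents

r = Retriever()
print(r.create_query_engine())            # falls back to self.agents
print(r.create_query_engine({"x": "y"}))  # explicit argument wins
```

Note the default is None rather than a mutable dict, so each call resolves the fallback fresh instead of sharing one default object.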

index_path (str): The path to the index directory.
"""
self.llm = OpenAI(temperature=0, model="gpt-4-1106-preview")
self.service_context = ServiceContext.from_defaults(llm=self.llm)

Creating a ServiceContext can be a heavyweight operation. Please verify whether the context really needs to be created with each Retriever instance,
or whether a single instance can be shared amongst them, improving efficiency.

Suggested change
self.service_context = ServiceContext.from_defaults(llm=self.llm)
# Assuming that ServiceContext can be created once and reused
self.service_context = shared_service_context
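The sharing suggested above amounts to a lazily created module-level singleton. A minimal sketch under that assumption, where ExpensiveContext is a stand-in for ServiceContext and not part of the PR:

```python
class ExpensiveContext:
    """Stand-in for a heavyweight context such as ServiceContext."""
    instances_created = 0

    def __init__(self):
        ExpensiveContext.instances_created += 1

_shared_context = None

def get_shared_context():
    # Build the context on first use, then hand back the same object.
    global _shared_context
    if _shared_context is None:
        _shared_context = ExpensiveContext()
    return _shared_context

a = get_shared_context()
b = get_shared_context()
print(a is b)                              # True
print(ExpensiveContext.instances_created)  # 1
```

Each Retriever would then call get_shared_context() instead of constructing its own context.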

)
]

function_llm = OpenAI(model="gpt-4-1106-preview")

The instantiation of OpenAI here seems to be redundant with the one initialized in __init__ as self.llm.
Consider reusing self.llm instead of creating a new instance.

Suggested change
function_llm = OpenAI(model="gpt-4-1106-preview")
self.agents[file] = OpenAIAgent.from_tools(query_engine_tools, llm=self.llm, verbose=True)

@@ -0,0 +1 @@
"""PLACEHOLDER FOR THE SYSTEM MESSAGE"""

The placeholder here must be replaced with actual system message content or configuration.
Ensure this placeholder has been appropriately updated before merging.

Suggested change
"""PLACEHOLDER FOR THE SYSTEM MESSAGE"""
SYSTEM_MESSAGE_CONTENT = "Actual system message content goes here."
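One way to guard against shipping the placeholder is a fail-fast check before the message reaches the agent. This is a hedged sketch, not part of the PR:

```python
SYSTEM_MESSAGE_CONTENT = "Actual system message content goes here."

def validate_system_message(message: str) -> str:
    # Reject empty or obviously unconfigured values early.
    if not message or "PLACEHOLDER" in message.upper():
        raise ValueError("System message has not been configured.")
    return message

print(validate_system_message(SYSTEM_MESSAGE_CONTENT))
```

Running the check at import time turns a silently wrong prompt into an immediate, visible error.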
