Add an initial test for chat_with_ai #3793

Closed
50 changes: 50 additions & 0 deletions in tests/test_chat.py
@@ -0,0 +1,50 @@
from unittest.mock import MagicMock

import pytest

from autogpt.agent import Agent
from autogpt.config.ai_config import AIConfig
from autogpt.llm.base import Message
from autogpt.llm.chat import chat_with_ai


@pytest.fixture
Contributor commented:
Should this fixture live in conftest.py? There might be an agent fixture already defined there; if so, let's just use that other fixture.

def agent():
    ai_name = "Test AI"
    memory = MagicMock()
    next_action_count = 0
    command_registry = MagicMock()
    config = AIConfig()
    system_prompt = "System prompt"
    triggering_prompt = "Triggering prompt"
    workspace_directory = "workspace_directory"

    agent = Agent(
        ai_name,
        memory,
        next_action_count,
        command_registry,
        config,
        system_prompt,
        triggering_prompt,
        workspace_directory,
    )
    return agent
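
Following up on the comment above: if the fixture does move, here is a minimal sketch of what a shared fixture could look like, assuming the project has a tests/conftest.py and no agent fixture is defined there yet (both unverified assumptions; reuse the existing one if it exists):

# tests/conftest.py (hypothetical location)
from unittest.mock import MagicMock

import pytest

from autogpt.agent import Agent
from autogpt.config.ai_config import AIConfig


@pytest.fixture
def agent():
    # Same construction as the local fixture in test_chat.py, shared across test modules.
    return Agent(
        "Test AI",      # ai_name
        MagicMock(),    # memory
        0,              # next_action_count
        MagicMock(),    # command_registry
        AIConfig(),     # config
        "System prompt",
        "Triggering prompt",
        "workspace_directory",
    )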


def test_chat_with_ai_basic_response(mocker, agent):
    prompt = "Welcome to the autogpt!"
    user_input = "Hello, how are you?"
    token_limit = 4000
    mocker.patch(
        "autogpt.llm.chat.create_chat_completion",
        return_value="I'm doing well, thank you for asking.",
    )

@rihp (Contributor) commented on May 16, 2023:
I'm still learning about mocker, so excuse me if I'm incorrect, but by setting this return value and then asserting on it, aren't we always going to get a passing test, since the assertion just checks the value we told the mock to return? Please feel free to correct me if I'm wrong.

Contributor replied:
Since the patched function is create_chat_completion, it might be a correct implementation, so let me know if I'm missing something.

    response = chat_with_ai(agent, prompt, user_input, token_limit)

    assert response == "I'm doing well, thank you for asking."
    assert agent.history.messages == [
        Message("user", "Hello, how are you?", None),
        Message("assistant", "I'm doing well, thank you for asking.", "ai_response"),
    ]
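
Regarding the review thread above: because only create_chat_completion is patched, chat_with_ai itself still executes, so the assertions pass only if it actually calls the patched function and propagates its return value. A minimal sketch of how the test could make that explicit, assuming pytest-mock is available (mocker.patch returns the underlying MagicMock) and reusing the imports and agent fixture from this file; the test name is hypothetical:

def test_chat_with_ai_calls_completion(mocker, agent):
    # Keep a handle on the mock so we can assert chat_with_ai really invoked it.
    mock_completion = mocker.patch(
        "autogpt.llm.chat.create_chat_completion",
        return_value="I'm doing well, thank you for asking.",
    )

    response = chat_with_ai(agent, "Welcome to the autogpt!", "Hello, how are you?", 4000)

    # If chat_with_ai never called create_chat_completion, or ignored its return
    # value, one of these assertions would fail, so the test is not circular.
    mock_completion.assert_called_once()
    assert response == "I'm doing well, thank you for asking."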