Add an initial test for chat_with_ai #3793
Conversation
Codecov Report
Patch coverage has no change and project coverage change: -1.94%

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #3793      +/-   ##
==========================================
- Coverage   69.65%   67.72%    -1.94%
==========================================
  Files          72       72
  Lines        3523     3535      +12
  Branches      562      559       -3
==========================================
- Hits         2454     2394      -60
- Misses        881      956      +75
+ Partials      188      185       -3
This is a mass message from the AutoGPT core team. For more details (and for info on joining our Discord), please refer to:
This is going to be even more important once there's support for dynamic prompting (#3937).
Added some comments
token_limit = 4000
mocker.patch(
    "autogpt.llm.chat.create_chat_completion",
    return_value="I'm doing well, thank you for asking.",
I'm still learning about mocker, so excuse me if I'm incorrect, but by setting this return value and then asserting on it, aren't we always going to get a passing test? The assertion would just be checking the value we told the mock to return. Please feel free to correct me if I'm wrong.
Since the patched function is create_chat_completion (the underlying LLM call) rather than chat_with_ai itself, the test still exercises chat_with_ai's real logic, so I believe the implementation is correct. Let me know if I'm missing something.
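To illustrate the distinction, here is a self-contained toy example of the same pattern (all module and function names are hypothetical, not taken from the AutoGPT codebase). Only the dependency is patched, so the assertions still exercise the caller's own logic rather than just the mock's return value.

# toy_chat.py (hypothetical module, for illustration only)
def create_chat_completion(messages):
    raise RuntimeError("real API call; never reached in tests")

def chat_with_ai(user_input, history):
    # This logic is what the test actually exercises.
    messages = history + [{"role": "user", "content": user_input}]
    reply = create_chat_completion(messages)
    history.append({"role": "assistant", "content": reply})
    return reply


# test_toy_chat.py (uses pytest-mock's `mocker` fixture)
import toy_chat

def test_chat_with_ai(mocker):
    # Patch only the dependency; chat_with_ai itself runs for real.
    mocker.patch("toy_chat.create_chat_completion", return_value="Hi there")
    history = []
    reply = toy_chat.chat_with_ai("Hello", history)
    # The second assertion checks behaviour of chat_with_ai itself,
    # not just the value we told the mock to return.
    assert reply == "Hi there"
    assert history[-1] == {"role": "assistant", "content": "Hi there"}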
from autogpt.llm.chat import chat_with_ai

@pytest.fixture
Should this fixture live in conftest.py? There might be an agent fixture already defined there; if so, let's just reuse that one.
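For context, pytest auto-discovers fixtures defined in tests/conftest.py, so test modules can use them without importing anything. A minimal sketch, assuming a shared agent fixture (the fixture body is a placeholder; how the real agent is built or mocked is an assumption):

# tests/conftest.py -- fixtures defined here are visible to all tests in this directory tree.
from unittest.mock import MagicMock

import pytest

@pytest.fixture
def agent():
    # Placeholder: in the real suite this would construct (or mock) an
    # autogpt Agent instance; the details here are assumptions.
    return MagicMock(name="agent")

Any test under tests/ could then declare agent as a parameter, e.g. def test_chat_with_ai(agent, mocker): ..., without defining or importing the fixture locally.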
tests/test_chat.py (outdated)
token_limit = 4000
mocker.patch(
    "autogpt.llm.chat.create_chat_completion",
    return_value="I'm doing well, thank you for asking.",
Same as above: by patching the function, aren't we just testing that the patch works rather than the actual implementation of the function?
I might be wrong though, since we are patching create_chat_completion rather than chat_with_ai itself.
Added some comments
Background
The core function chat_with_ai is very important to Auto-GPT, but it is missing tests. It would be great to have some test cases to verify basic behaviors and increase code coverage.
Changes
Just to get started: added a basic test for chat.py covering the chat_with_ai function, verifying the output message, full_message_history, and the new summary_memory. More test cases to come later.
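For readers skimming the PR, here is a rough sketch of the shape such a test might take. The exact chat_with_ai parameters, the agent fixture, and how the history is checked are assumptions based on this description and the snippets above, not the actual diff.

# tests/test_chat.py -- sketch only; argument names and order are assumptions.
from autogpt.llm.chat import chat_with_ai

def test_chat_with_ai_basic(agent, mocker):
    # Patch the LLM call so no real API request is made.
    mocker.patch(
        "autogpt.llm.chat.create_chat_completion",
        return_value="I'm doing well, thank you for asking.",
    )
    full_message_history = []
    token_limit = 4000

    reply = chat_with_ai(
        agent,                           # assumed shared agent fixture
        "You are a helpful assistant.",  # system prompt (assumed argument)
        "How are you?",                  # user input
        full_message_history,
        token_limit,
    )

    # Verify the returned message and that the history was updated.
    assert reply == "I'm doing well, thank you for asking."
    assert full_message_history[-1]["content"] == reply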
Test Plan
Ran the test itself.
PR Quality Checklist