
Add an initial test for chat_with_ai #3793

Closed

Conversation

kinance
Contributor

@kinance kinance commented May 4, 2023

Background

The core function, chat_with_ai, is central to Auto-GPT but has no test. It'd be great to have some test cases to verify its basic behavior and increase code coverage.

Changes

Just to get started: added a basic test for chat.py that covers the chat_with_ai function and verifies the output message, the full_message_history, and the new summary memory. More test cases to come later.
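
For orientation, here is a rough sketch of the kind of test described above. It is only an outline: the exact chat_with_ai signature, the agent fixture, and the agent.memory attribute are assumptions and may not match the autogpt version this PR targets.

```python
# Sketch only -- the chat_with_ai signature, the agent fixture, and agent.memory
# are assumptions, not necessarily the API of the autogpt version this PR targets.
from autogpt.llm.chat import chat_with_ai


def test_chat_with_ai_returns_mocked_reply(mocker, agent):
    # Stub out the OpenAI call so the test is deterministic and runs offline.
    mocker.patch(
        "autogpt.llm.chat.create_chat_completion",
        return_value="I'm doing well, thank you for asking.",
    )
    full_message_history = []

    reply = chat_with_ai(
        agent,                           # assumed parameter (older versions had no agent arg)
        "You are a helpful assistant.",  # system prompt
        "How are you?",                  # user input
        full_message_history,
        agent.memory,                    # permanent/summary memory (assumed attribute)
        4000,                            # token_limit
    )

    assert reply == "I'm doing well, thank you for asking."
    # The description also checks that the exchange was appended to
    # full_message_history and that a new summary was written to memory.
    assert len(full_message_history) > 0
```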

Test Plan

Ran the test itself.

PR Quality Checklist

  • [x] My pull request is atomic and focuses on a single change.
  • [x] I have thoroughly tested my changes with multiple different prompts.
  • [x] I have considered potential risks and mitigations for my changes.
  • [x] I have documented my changes clearly and comprehensively.
  • [x] I have not snuck in any "extra" small tweaks.

@vercel

vercel bot commented May 4, 2023

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| ---- | ------ | ------- | -------- | ------------- |
| docs | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | May 31, 2023 11:45am |

@github-actions github-actions bot added the size/l label May 4, 2023
@kinance kinance changed the title Add an initial test for chat_with_ai Adding an initial test for chat_with_ai May 4, 2023
@kinance kinance changed the title Adding an initial test for chat_with_ai Add an initial test for chat_with_ai May 4, 2023
@kinance kinance added the testing label May 4, 2023
@codecov

codecov bot commented May 4, 2023

Codecov Report

Patch coverage has no change and project coverage change: -1.94 ⚠️

Comparison is base (63b79a8) 69.65% compared to head (9618530) 67.72%.

❗ Current head 9618530 differs from pull request most recent head f9388aa. Consider uploading reports for the commit f9388aa to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #3793      +/-   ##
==========================================
- Coverage   69.65%   67.72%   -1.94%     
==========================================
  Files          72       72              
  Lines        3523     3535      +12     
  Branches      562      559       -3     
==========================================
- Hits         2454     2394      -60     
- Misses        881      956      +75     
+ Partials      188      185       -3     

see 20 files with indirect coverage changes

☔ View full report in Codecov by Sentry.

@p-i-
Contributor

p-i- commented May 5, 2023

This is a mass message from the AutoGPT core team.
Our apologies for the ongoing delay in processing PRs.
This is because we are re-architecting the AutoGPT core!

For more details (and for info on joining our Discord), please refer to:
https://github.com/Significant-Gravitas/Auto-GPT/wiki/Architecting

@vercel vercel bot temporarily deployed to Preview May 8, 2023 15:03 Inactive
@Boostrix
Contributor

This is going to be even more important once there's support for dynamic prompting #3937

Contributor

@rihp rihp left a comment


Added some comments

token_limit = 4000
mocker.patch(
    "autogpt.llm.chat.create_chat_completion",
    return_value="I'm doing well, thank you for asking.",
)
Contributor

@rihp rihp May 16, 2023


I'm still learning about mocker, so excuse me if I'm incorrect, but by setting this return value and then asserting on it, aren't we always going to get a passing test, since the assertion just checks the value the mock returns? Please feel free to correct me if I'm wrong.

Contributor


Since the patched function is create_chat_completion, it might be a correct implementation; let me know if I'm missing something.

import pytest

from autogpt.llm.chat import chat_with_ai


@pytest.fixture
Contributor


Should this fixture live in conftest.py? There might be an agent fixture already defined there; if so, let's just use that other fixture.
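
For reference, fixtures defined in tests/conftest.py are discovered automatically by pytest and shared by every test module under tests/, so an existing agent fixture could simply be reused here. The FakeAgent below is a placeholder, not Auto-GPT's real Agent class.

```python
# tests/conftest.py -- sketch; FakeAgent is a placeholder, not Auto-GPT's Agent class.
import pytest


class FakeAgent:
    ai_name = "Test AI"
    memory = None  # stand-in for the agent's memory backend


@pytest.fixture
def agent():
    # Any test module under tests/ can declare `agent` as a parameter
    # and pytest will inject this instance automatically.
    return FakeAgent()
```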

token_limit = 4000
mocker.patch(
    "autogpt.llm.chat.create_chat_completion",
    return_value="I'm doing well, thank you for asking.",
)
Contributor


Same as above: by patching the function, aren't we just testing that the patch works, rather than the implementation of the actual function?

Contributor


I might be wrong though, as we are patching create_chat_completion.
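
To make the concern above concrete, here is a self-contained pytest-mock sketch (not Auto-GPT code; chat and build_prompt are hypothetical stand-ins). Asserting only on the canned return value mostly proves the patch works; asserting on the mock's call arguments and on side effects such as the updated history is what exercises the real logic around the mocked dependency.

```python
# Self-contained illustration of the point above; none of this is Auto-GPT code.
def build_prompt(history):
    # Stand-in for the real prompt-building work inside chat_with_ai.
    return "\n".join(history)


def create_chat_completion(prompt):
    # Stand-in for the patched dependency; the real one would call the OpenAI API.
    raise RuntimeError("should be patched out in tests")


def chat(history, user_input):
    history.append(user_input)
    reply = create_chat_completion(build_prompt(history))
    history.append(reply)
    return reply


def test_chat_updates_history(mocker):
    mock = mocker.patch(
        f"{__name__}.create_chat_completion",
        return_value="I'm doing well, thank you for asking.",
    )
    history = ["system: you are a helpful assistant"]
    reply = chat(history, "user: how are you?")

    # This assertion alone passes almost by construction:
    assert reply == "I'm doing well, thank you for asking."
    # These assertions exercise the behaviour *around* the mocked call:
    mock.assert_called_once_with(
        "system: you are a helpful assistant\nuser: how are you?"
    )
    assert history[-1] == reply
```

In the same spirit, the PR's test stays meaningful as long as it also asserts on full_message_history and the summary memory rather than only on the mocked reply.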


@Boostrix Boostrix added this to the v0.4.0 Release milestone May 16, 2023
@vercel vercel bot temporarily deployed to Preview May 22, 2023 13:21 Inactive
@vercel vercel bot temporarily deployed to Preview May 23, 2023 11:51 Inactive
@vercel

vercel bot commented May 25, 2023

Deployment failed with the following error:

Resource is limited - try again in 4 minutes (more than 100, code: "api-deployments-free-per-day").

@vercel

vercel bot commented May 26, 2023

Deployment failed with the following error:

Resource is limited - try again in 23 seconds (more than 100, code: "api-deployments-free-per-day").

@vercel vercel bot temporarily deployed to Preview May 29, 2023 13:02 Inactive
@vercel vercel bot temporarily deployed to Preview May 31, 2023 11:45 Inactive
@kinance kinance closed this Jun 11, 2023
@kinance kinance deleted the test-add-first-chat-test branch June 12, 2023 14:58