Handle streamed function calls #1118
Conversation
@microsoft-github-policy-service agree
Hi! Could you please include a test for it?
Thanks. I'm pretty sure this PR doesn't warrant a new test. No input or return schemas have been modified. It's just supplying what's already expected via the existing pydantic model, which was already being used for the return data, and should already have coverage. I could be mistaken, but if so, I'm not yet seeing the test case.
Codecov Report
Attention:
Additional details and impacted files
@@ Coverage Diff @@
## main #1118 +/- ##
===========================================
+ Coverage 31.92% 51.19% +19.26%
===========================================
Files 29 29
Lines 4097 4112 +15
Branches 955 1012 +57
===========================================
+ Hits 1308 2105 +797
+ Misses 2695 1806 -889
- Partials 94 201 +107
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Two things:
1. Run:
pre-commit install
pre-commit run --all-files
That should reformat the source using black. Otherwise, the code formatting check will fail (https://github.com/microsoft/autogen/actions/runs/7384108079/job/20093405650?pr=1118).
2. Run:
rm -rf .cache
coverage run -a -m pytest test/oai/test_client_stream.py
You should get the following error:
====================================================== test session starts ======================================================
platform linux -- Python 3.10.12, pytest-7.4.3, pluggy-1.3.0
rootdir: /workspaces/autogen
configfile: pyproject.toml
plugins: asyncio-0.23.2, anyio-4.1.0
asyncio: mode=strict
collected 4 items
test/oai/test_client_stream.py ...F [100%]
=========================================================== FAILURES ============================================================
____________________________________________________ test_completion_stream _____________________________________________________
@pytest.mark.skipif(skip, reason="openai>=1 not installed")
def test_completion_stream():
config_list = config_list_openai_aoai(KEY_LOC)
client = OpenAIWrapper(config_list=config_list)
> response = client.create(prompt="1+1=", model="gpt-3.5-turbo-instruct", stream=True)
test/oai/test_client_stream.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
autogen/oai/client.py:272: in create
response.cost = self.cost(response)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <autogen.oai.client.OpenAIWrapper object at 0x7f8274375120>, response = <openai.Stream object at 0x7f8275026d70>
def cost(self, response: Union[ChatCompletion, Completion]) -> float:
"""Calculate the cost of the response."""
> model = response.model
E AttributeError: 'Stream' object has no attribute 'model'
autogen/oai/client.py:468: AttributeError
==================================================== short test summary info ====================================================
FAILED test/oai/test_client_stream.py::test_completion_stream - AttributeError: 'Stream' object has no attribute 'model'
================================================== 1 failed, 3 passed in 2.93s ==================================================
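For context, the AttributeError above arises because, with stream=True, the openai>=1 client returns a Stream of partial chunks rather than a finished Completion, so attributes like response.model only exist on the individual chunks. A minimal sketch of that behaviour (not autogen's code; it only assumes the standard openai>=1 completions API and an OPENAI_API_KEY in the environment):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# With stream=True this returns an openai Stream of partial Completion chunks,
# not a single Completion, which is why response.model raises AttributeError.
response = client.completions.create(
    model="gpt-3.5-turbo-instruct", prompt="1+1=", stream=True
)

# The stream has to be consumed and reassembled before fields such as the model
# name or the generated text are available, e.g. for cost calculation.
model, text = None, ""
for chunk in response:  # each chunk is a Completion carrying a text fragment
    model = chunk.model
    if chunk.choices:
        text += chunk.choices[0].text or ""

print(model, text)
```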
Please fix the black formatting and the failed test as described in the comment above.
@sonichi run openai tests
@bitnom as @sonichi noticed,
Thanks @davorrunje for pointing this out; I missed it. Let's do it in this PR as it is the most relevant.
This is correct. My initial reaction was to handle it separately, since autogen was founded on the deprecated methods. I know there have already been commits merged that use the tools API, though. I have yet to come to terms with this deprecation myself; I'll read up on it. If someone can get through it ahead of me, please feel free to go ahead with it. I have some tasks I must complete before I can get to it.
Perhaps we can merge this PR first and add support for tool calls in a different PR.
@bitnom I need streaming with function calling too. Is there anything I can do to help?
This is merged, but it supports only the deprecated function calls. I am working on supporting the tool calls that replaced function calls; that should be finished this week.
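For reference, the tool-call API that replaced function calls streams its deltas in a slightly different shape: each chunk carries delta.tool_calls, a list of indexed fragments, rather than a single delta.function_call. Below is a hedged sketch of accumulating those fragments under openai>=1; it is not autogen's implementation, and the get_weather tool is purely illustrative:

```python
from openai import OpenAI


def collect_tool_calls(stream):
    """Accumulate streamed tool-call deltas into complete (id, name, arguments) records."""
    calls = {}  # index -> partially assembled tool call
    for chunk in stream:
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta
        for tc in delta.tool_calls or []:
            entry = calls.setdefault(tc.index, {"id": None, "name": "", "arguments": ""})
            if tc.id:
                entry["id"] = tc.id
            if tc.function is not None:
                entry["name"] += tc.function.name or ""
                entry["arguments"] += tc.function.arguments or ""
    return [calls[i] for i in sorted(calls)]


client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative tool, not part of autogen
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    stream=True,
)
print(collect_tool_calls(stream))
```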
@davorrunje Ok, thank you.
I tried downloading and using this updated repo, and it is still not streaming. |
@tyler-suard-parker I made #1184, which should fix it. You could try it out by installing pyautogen from the branch.
@davorrunje thank you, I really appreciate your help. I will try it now.
* update colab link
* typo
* upload file instruction
* Handle streamed function calls
* apply black formatting
* rm unnecessary stdout print
* bug fix

--------

Co-authored-by: Davor Runje <[email protected]>
Co-authored-by: Eric Zhu <[email protected]>
Why are these changes needed?
Currently, the user's setting of `stream: True` is disregarded (set to `False`) whenever function calling is used. We should honor the user's decision, and pave the way for incremental response processing and chunked callback functionality.
Related issue number
Resolves reviews of #786, making the work done in #597 more complete.
Closes #785.
Let's also ping #831 since it was linked to #786 for some reason.
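To make concrete what honoring stream=True with function calling involves at the API level: the (now deprecated) functions API streams the call name and its JSON arguments incrementally under delta.function_call, and those fragments have to be concatenated across chunks before the call can be executed. A minimal sketch of that accumulation under openai>=1 (not this PR's actual code; the get_time function is purely illustrative):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What time is it in Tokyo?"}],
    functions=[{  # deprecated functions API, the one this PR streams
        "name": "get_time",  # illustrative function, not part of autogen
        "description": "Get the current time for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    stream=True,
)

# The function name and its JSON arguments arrive as fragments spread across
# chunks; concatenate them (this is where per-chunk callbacks could hook in).
content, func_name, func_args = "", "", ""
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.content:
        content += delta.content
    if delta.function_call:
        func_name += delta.function_call.name or ""
        func_args += delta.function_call.arguments or ""

print(func_name, func_args)  # e.g. get_time {"city": "Tokyo"}
```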
Checks