
Quick fix for 765 #809

Merged
merged 1 commit into from
Nov 29, 2023

Conversation

afourney
Member

Why are these changes needed?

For some reason, select_speaker is receiving a 'ChatCompletionMessage' rather than a str, and this wasn't being handled gracefully, causing a crash.

This PR fixes the crash: 'ChatCompletionMessage' responses are now handled like any other response. However, it is unclear why some users were seeing this behavior to begin with; that will require further investigation.
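The defensive handling described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual autogen code: the reply may arrive as a plain str, a dict-style message, or an object carrying a `.content` attribute (such as OpenAI's `ChatCompletionMessage`), so it is normalized before use.

```python
def normalize_reply(reply):
    """Return the reply text regardless of whether the reply is a str,
    a dict-style message, or an object with a .content attribute.
    (Illustrative helper; names do not match the real autogen code.)"""
    if isinstance(reply, str):
        return reply
    if isinstance(reply, dict):
        # dict-style chat messages typically carry text under "content"
        return reply.get("content", "")
    content = getattr(reply, "content", None)
    if content is not None:
        # covers message objects like ChatCompletionMessage
        return content
    raise TypeError(f"Unexpected reply type: {type(reply).__name__}")
```

With a helper like this, a speaker-selection routine can accept any of the message shapes without crashing on the non-str case.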

Related issue number

Mitigates #765

Checks

…red why we are getting a 'ChatCompletionMessage' rather than a str.
@codecov-commenter

codecov-commenter commented Nov 29, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison: base (f96963e) 27.76% vs head (8ee0af1) 37.21%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #809      +/-   ##
==========================================
+ Coverage   27.76%   37.21%   +9.44%     
==========================================
  Files          27       27              
  Lines        3493     3493              
  Branches      791      791              
==========================================
+ Hits          970     1300     +330     
+ Misses       2452     2073     -379     
- Partials       71      120      +49     
Flag        Coverage Δ
unittests   37.16% <100.00%> (+9.44%) ⬆️

Flags with carried forward coverage won't be shown.


@afourney
Member Author

This PR (#809) offers a quick fix to prevent the crash (once reviewed, accepted, and merged). I will continue hunting down the root cause.

@qingyun-wu qingyun-wu added this pull request to the merge queue Nov 29, 2023
Merged via the queue into main with commit 4fde121 Nov 29, 2023
64 of 71 checks passed
@sonichi sonichi deleted the quick_fix_765 branch November 29, 2023 20:47
whiskyboy pushed a commit to whiskyboy/autogen that referenced this pull request Apr 17, 2024
* Refactor into automl subpackage

Moved some of the packages into an automl subpackage to tidy before the
task-based refactor. This is in response to discussions with the group
and a comment on the first task-based PR.

Only changes here are moving subpackages and modules into the new
automl, fixing imports to work with this structure and fixing some
dependencies in setup.py.

* Fix doc building post automl subpackage refactor

* Fix broken links in website post automl subpackage refactor

* Fix broken links in website post automl subpackage refactor

* Remove vw from test deps as this is breaking the build

* Move default back to the top-level

I'd moved this to automl as that's where it's used internally, but had missed that it is actually part of the public interface, so it makes sense for it to live where it was.

* Re-add top level modules with deprecation warnings

flaml.data, flaml.ml and flaml.model are re-added to the top level,
re-exported from flaml.automl for backwards compatibility. A
deprecation warning is added so that removal can be planned later.

* Fix model.py line-endings

* Pin pytorch-lightning to less than 1.8.0

We're seeing strange lightning-related bugs from pytorch-forecasting
since the release of lightning 1.8.0. Going to try constraining the
version to see if that fixes it.

* Fix the lightning version pin

Was optimistic in setting it to the 1.7.x range, but that isn't
compatible with Python 3.6.

* Remove lightning version pin

* Revert dependency version changes

* Minor change to retrigger the build

* Fix line endings in ml.py and model.py

Co-authored-by: Qingyun Wu <[email protected]>
Co-authored-by: EgorKraevTransferwise <[email protected]>
whiskyboy pushed a commit to whiskyboy/autogen that referenced this pull request Apr 17, 2024
…red why we are getting a 'ChatCompletionMessage' rather than a str. (microsoft#809)
Labels: group chat/teams (group-chat-related issues)
3 participants