Thanks a lot for this cool framework! From the docs, I understand that it is possible to set the maximum number of tokens that can be generated in a single response when making API calls to models. I need to limit the tokens used across an entire interaction between the user proxy and the assistant, i.e. across multiple messages and responses. I am using `user_proxy.initiate_chat(assistant, message=message, max_turns=max_turns)`. Once the pre-defined token limit has been reached, the conversation should be terminated. Is this possible? Thanks!
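One way to approximate this (it is not, as far as I can tell, a built-in AutoGen feature) is to keep a running token count in a closure and pass it as `is_termination_msg`, which AutoGen evaluates on every message an agent receives; returning `True` ends the conversation. Below is a minimal sketch under those assumptions, using `tiktoken` for counting; the budget value and `make_budget_check` helper are illustrative, and note that this only counts message content on one side of the exchange, not the full prompt overhead the API actually bills.

```python
import tiktoken
from autogen import AssistantAgent, UserProxyAgent

def make_budget_check(budget: int, model: str = "gpt-4"):
    """Return an is_termination_msg callable that ends the chat once the
    cumulative token count of received messages reaches `budget`.

    This is an approximation: it counts message content with tiktoken and
    ignores system messages and per-request prompt overhead.
    """
    enc = tiktoken.encoding_for_model(model)
    used = 0

    def over_budget(msg: dict) -> bool:
        nonlocal used
        used += len(enc.encode(msg.get("content") or ""))
        return used >= budget

    return over_budget

assistant = AssistantAgent("assistant", llm_config={"model": "gpt-4"})
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    # Checked on every reply the proxy receives; True terminates the chat.
    is_termination_msg=make_budget_check(budget=4000),
)

user_proxy.initiate_chat(assistant, message="Summarize the plan.", max_turns=10)
```

If you need the exact billed usage rather than an estimate, you could inspect the usage summary after the chat (e.g. via `autogen.gather_usage_summary`), though that only reports consumption after the fact and cannot stop the conversation mid-flight.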