
Autogen Assistant app can't work with UserProxy's human_input_mode='ALWAYS' #813

Closed
miiiz opened this issue Nov 30, 2023 · 6 comments
Assignees
Labels
0.2 Issues which are related to the pre 0.4 codebase proj-studio Related to AutoGen Studio.

Comments

@miiiz

miiiz commented Nov 30, 2023

It seems that the whole app only works with UserProxy's human_input_mode='NEVER'. If UserProxy's human_input_mode is set to 'ALWAYS', there is no way to enter human feedback in the frontend (it can be entered in the shell). Only when the conversation is terminated can another input be entered in the frontend. That means there is no back-and-forth between UserProxy and AssistantAgent, only a one-shot conversation? Thank you.

@julianakiseleva
Contributor

@sonichi, can you please clarify this?

@ruifengma
Collaborator

Yes, I have this issue as well. It seems that autogenra does not support human-in-the-loop mode; each time I have to type in the command line rather than the UI, otherwise the process freezes.

@victordibia
Collaborator

Yes, at the moment this requires some streaming infrastructure, which is on the roadmap:

  • stream socket setup on the backend
  • agents can register a human input callback that sends a message on a socket channel
  • the front end connects to the socket and allows the user to provide responses/human input
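To make the roadmap concrete, here is a minimal sketch of the "human input callback over a channel" pattern. This is not AutoGen Studio code; the names (`to_frontend`, `from_frontend`, `socket_get_human_input`) are assumptions, and plain queues stand in for what would be a WebSocket session in a real backend:

```python
import queue

# Stand-ins for a socket channel between backend and UI (assumptions, not real API)
to_frontend = queue.Queue()    # backend -> UI events
from_frontend = queue.Queue()  # UI -> backend replies

def socket_get_human_input(prompt: str, timeout: float = 60.0) -> str:
    """Signal the UI that input is needed, then block until it replies."""
    # Publish an "input requested" event so the frontend can render an input box
    to_frontend.put({"type": "input_request", "prompt": prompt})
    try:
        # Block until the frontend pushes the user's reply on the inbound channel
        return from_frontend.get(timeout=timeout)
    except queue.Empty:
        return "exit"  # end the conversation if the UI never responds
```

An agent's `get_human_input` could then be replaced by `socket_get_human_input`, so the conversation blocks on the UI instead of on a terminal `input()` call.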

@hughlv
Collaborator

hughlv commented Dec 28, 2023

I introduced a hack to solve this issue in my autogenra-like project.

You can refer to the custom_get_human_input in this Jinja2 template: https://github.com/tiwater/flowgen/blob/main/backend/app/templates/import_mapping.j2

The generated code looks like the following. The caller only needs to handle stdout and stdin for output/input; see also https://github.com/tiwater/flowgen/blob/main/backend/app/utils/runner.py

# This file is auto-generated by [FlowGen](https://github.com/tiwater/flowgen)
# Last generated: 2023-12-28 16:26:17
#
# Flow Name: Simple Chat
# Description: 
"""
ChatGPT-alike Simple Chat involves human.
"""
#

"""
Notes:

This flow has the Human Input Mode set to ALWAYS. That means whenever you receive a message from the Assistant, you as the user need to respond by typing a message in the chatbox. Typing 'exit' will quit the conversation.

1. Send a simple message such as `What day is today?`.
2. If you need to quit, send 'exit'.
3. Sometimes the Assistant will send back code and ask you to execute it; you can simply press Enter and the code will be executed.
"""


from dotenv import load_dotenv
load_dotenv()  # This will load all environment variables from .env

import argparse
import os
import time
from termcolor import colored

# Parse command line arguments
parser = argparse.ArgumentParser(description='Start a chat with agents.')
parser.add_argument('message', type=str, help='The message to send to agent.')
args = parser.parse_args()

import autogen

# openai, whisper and moviepy are optional dependencies, currently only used in video transcript example
# However, we believe they are useful for other future examples, so we include them here as part of the standard imports
from openai import OpenAI
import whisper
from moviepy.editor import VideoFileClip

from IPython import get_ipython

from autogen import AssistantAgent
from autogen import UserProxyAgent

# Replace the default get_human_input function for status control
def custom_get_human_input(self, prompt: str) -> str:
    # Set wait_for_human_input to True
    print('__STATUS_WAIT_FOR_HUMAN_INPUT__', prompt, flush=True)
    reply = input(prompt)
    # Restore the status to running
    print('__STATUS_RECEIVED_HUMAN_INPUT__', prompt, flush=True)
    return reply

autogen.ConversableAgent.get_human_input = custom_get_human_input


config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        "model": ["gpt-4-1106-preview", "gpt-4-vision-preview"],
    },
)

llm_config = {
    "config_list": config_list,
    "temperature": 0.5,
    "max_tokens": 1024,
}

node_vbhhpjj8xo = AssistantAgent(
    name="Assistant",
    llm_config=llm_config,
)

user_proxy = UserProxyAgent(
    name="UserProxy",
    system_message="Hello AI",
    human_input_mode="ALWAYS",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    max_consecutive_auto_reply=0,
    code_execution_config={
      "work_dir": "work_dir",
    },
)
# Function template content generator
# register the functions

# Start the conversation
user_proxy.initiate_chat(
    node_vbhhpjj8xo,
    message=args.message,
)

@victordibia victordibia added the proj-studio Related to AutoGen Studio. label Jan 30, 2024
@rysweet rysweet added 0.2 Issues which are related to the pre 0.4 codebase needs-triage labels Oct 2, 2024
@rysweet
Collaborator

rysweet commented Oct 18, 2024

@victordibia - please either update or close this one. Thanks!

@victordibia
Collaborator

There has been a PR that addresses this already. Closing.

7 participants