diff --git a/README.md b/README.md
index 0f0079c..bd1783c 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,12 @@ RoboOS: A Hierarchical Embodied Framework for Cross-Embodiment and Multi-Agent C
 🤖 RoboBrain 1.0: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete.
+💬 If you have any questions, feel free to contact us via WeChat.
+
+<div align="center">
+  <img src="./assets/wechat.png" width="300"/>
+</div>
+
## 🔥 Overview
The rise of embodied intelligence has intensified the need for robust multi-agent collaboration in industrial automation, service robotics, and smart manufacturing. However, current robotic systems struggle with critical limitations, including poor cross-embodiment adaptability, inefficient task scheduling, and inadequate dynamic error correction. While end-to-end vision-language-action (VLA) models (e.g., OpenVLA, RDT, Pi-0) exhibit weak long-horizon planning and task generalization, hierarchical VLA models (e.g., Helix, Gemini-Robotics, GR00T-N1) lack cross-embodiment compatibility and multi-agent coordination capabilities.
To address these challenges, we present **RoboOS**, the first open-source embodied operating system based on a *Brain-Cerebellum* hierarchical architecture, facilitating a paradigm shift from single-agent to swarm intelligence. Specifically, RoboOS comprises three key components: **(1) the Embodied Cloud Model**, a multimodal large language model (MLLM) for global perception and high-level decision-making; **(2) the Cerebellum Skill Library**, a modular, plug-and-play toolkit for seamless multi-skill execution; and **(3) Real-Time Shared Memory**, a spatiotemporal synchronization mechanism for multi-agent state coordination. By integrating hierarchical information flow, RoboOS bridges the Embodied Brain and Cerebellum Skill Library, enabling robust planning, scheduling, and error correction for long-horizon tasks while ensuring efficient multi-agent collaboration by Real-Time Shared Memory. Moreover, we optimize edge-cloud communication and cloud-based distributed inference to support high-frequency interactions and scalable deployment.
@@ -45,10 +51,8 @@ Extensive real-world experiments across diverse scenarios (e.g., restaurant, hou
- pip package manager
### 🎥 Deployment Video Tutorial
-
+> 🎬 [Click here to watch the deployment tutorial](https://flagopen.github.io/RoboOS/assets/Deployment.mp4) *(Recommended before getting started)*
+
### 🚀 2. Deployment Methods
Two deployment options are available based on your needs:
@@ -70,7 +74,7 @@ docker run -itd \
--shm-size=500g \
--name agent \
--hostname flagscale-agent \
- -v {your_local_path}/BAAI/RoboBrain2.0-7B:/path/in/container \
+ -v {your_local_path}/BAAI/RoboBrain2.0-7B:/workspace/RoboBrain2.0-7B \
--network=host \
-p 8888:8888 \
-w /workspace/RoboOS \
@@ -102,7 +106,7 @@ pip install -r requirements.txt
git clone https://github.com/FlagOpen/FlagScale
cd FlagScale
-git checkout a0687db035ba1d9c7b2661d8142ee4e8348b1459
+git checkout 3fc2037f90917227bd4aebabd9d7b330523f437c
# Install in editable mode with PYTHONPATH
PYTHONPATH=./:$PYTHONPATH pip install . --verbose --no-build-isolation
@@ -148,6 +152,116 @@ python skill.py
Visit the web UI at http://127.0.0.1:8888 and follow the on-screen instructions to complete configuration.
Once finished, you can control the robot and trigger skills from the interface.
+
+### ⚡️ 5. Start vLLM Model Service
+
+RoboOS requires a large language model backend to handle reasoning and tool calls.
+We recommend using **vLLM** to serve the [RoboBrain2.0-7B](https://www.modelscope.cn/models/BAAI/RoboBrain2.0-7B/summary) model.
+
+
+#### 5.1 Install vLLM
+
+```bash
+pip install vllm
+```
+
+#### 5.2 Prepare Chat Template
+The `tool_chat_template_hermes.jinja` file must be provided for tool-call parsing.
+Place it in the following directory:
+
+```text
+RoboOS/deploy/templates/tool_chat_template_hermes.jinja
+```
+
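This template wraps every function call the model emits in `<tool_call>` tags, which is what the hermes parser extracts in the next step. As a rough stdlib illustration of that contract (not vLLM's actual parser; the tool name and output text are made up):

```python
import json
import re

# Hypothetical model output in the hermes tool-call format.
raw_output = (
    'I will pick up the apple.\n'
    '<tool_call>\n'
    '{"name": "grasp_object", "arguments": {"object": "apple"}}\n'
    '</tool_call>'
)

def extract_tool_calls(text: str) -> list:
    """Pull every <tool_call>...</tool_call> JSON payload out of a completion."""
    pattern = r"<tool_call>\s*(\{.*?\})\s*</tool_call>"
    return [json.loads(m) for m in re.findall(pattern, text, re.DOTALL)]

calls = extract_tool_calls(raw_output)
```

If the template is missing or wrong, the model will not emit these tags and every tool call silently degrades into plain text.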
+#### 5.3 Launch vLLM
+Run the following command to start the model service:
+
+```bash
+vllm serve RoboBrain2.0-7B \
+ --gpu-memory-utilization=0.9 \
+ --max-model-len=10000 \
+ --max-num-seqs=256 \
+ --port=4567 \
+ --trust-remote-code \
+ --enable-chunked-prefill \
+ --enable-auto-tool-choice \
+ --tool-call-parser hermes \
+ --chat-template RoboOS/deploy/templates/tool_chat_template_hermes.jinja
+```
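Once the server is up, any OpenAI-compatible client can reach it on port 4567. A minimal stdlib sketch of the request it expects (the question text is illustrative; the endpoint and model name follow the command above):

```python
import json
import urllib.request

# Chat-completions request body for the vLLM OpenAI-compatible server above.
payload = {
    "model": "RoboBrain2.0-7B",  # must match the name given to `vllm serve`
    "messages": [{"role": "user", "content": "Which table holds the apple?"}],
    "temperature": 0.0,
}

request = urllib.request.Request(
    "http://localhost:4567/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": "Bearer EMPTY"},
)
# response = urllib.request.urlopen(request)  # uncomment once the server is running
```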
+
+### ⚙️ 6. Master & Slaver Configuration
+Before running the system, you need to configure both the **master** and **slaver** agents.
+Each agent requires a `config.yaml` file that defines the model connection, Redis collaborator, robot, and scene-profile settings.
+
+#### 6.1 Configuration Files
+- `master/config.yaml`
+- `slaver/config.yaml`
+
+A default template is provided below (you may adjust according to your environment):
+
+```yaml
+# Cloud Server (vLLM) Model Parameters
+model:
+ model_select: "/workspace/model/BAAI/RoboBrain2.0-7B"
+ model_retry_planning: 5
+ model_dict:
+ cloud_model: "/workspace/model/BAAI/RoboBrain2.0-7B"
+ cloud_type: "default"
+ cloud_api_key: "EMPTY"
+ cloud_server: "http://localhost:4567/v1/"
+ max_chat_message: 50
+
+# Redis Collaborator
+collaborator:
+ host: "127.0.0.1"
+ port: 6379
+ db: 0
+ clear: true
+ password: ""
+
+# Slaver Robot
+robot:
+  # "local" with a folder name such as "demo_robot"
+  # "remote" with a URL such as "http://127.0.0.1:8000"; run the Python script 'skill.py' on the robot itself.
+ # call_type: local
+ # path: "demo_robot_local"
+ name: demo_robot
+ call_type: remote
+ path: "http://127.0.0.1:8000"
+
+# Master Scene profile
+profile:
+ path: ./scene/profile.yaml
+
+# Slaver
+tool:
+ # Has the model undergone targeted training on tool_calls
+ support_tool_calls: false
+
+```
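Before starting the agents, it helps to fail fast on a malformed config. A minimal, stdlib-only sanity check over the structure above (the `missing_sections` helper is illustrative, not part of RoboOS):

```python
# Parsed form of the config.yaml template above (what yaml.safe_load would return).
config = {
    "model": {"model_dict": {"cloud_server": "http://localhost:4567/v1/"}},
    "collaborator": {"host": "127.0.0.1", "port": 6379},
    "robot": {"name": "demo_robot", "call_type": "remote", "path": "http://127.0.0.1:8000"},
    "profile": {"path": "./scene/profile.yaml"},
    "tool": {"support_tool_calls": False},
}

def missing_sections(cfg: dict) -> list:
    """Report any top-level section RoboOS expects but the config lacks."""
    required = ("model", "collaborator", "robot", "profile", "tool")
    return [key for key in required if key not in cfg]

assert missing_sections(config) == []

# Remote mode must point at a URL, not a local folder name.
if config["robot"]["call_type"] == "remote":
    assert config["robot"]["path"].startswith("http")
```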
+
+
+#### 6.2 Key Parameters
+
+- `model.cloud_server`:
+Must point to your vLLM service (default: `http://localhost:4567/v1/`).
+
+- `collaborator`:
+Redis server configuration (default: `127.0.0.1:6379`).
+
+- `profile`:
+Path to the scene profile YAML file that defines environment and task settings (e.g., `./scene/profile.yaml`).
+
+- `tool`:
+Enables or disables tool-call support. Set `support_tool_calls: true` only if your model has been trained for tool calls.
+
+- `robot`:
+Selects between the two modes of calling robot tools (`local` or `remote`).
+
+⚠️ Make sure these fields are correctly configured; otherwise, RoboOS may fail to connect to vLLM or Redis, or to load the scene and tool profiles.
+
+
## 🔧 Manual Deployment (Advanced)
If you prefer to manually run RoboOS without using the deployment web UI, follow the steps below to start the system components directly from source.
diff --git a/assets/wechat.png b/assets/wechat.png
new file mode 100644
index 0000000..d5f91ad
Binary files /dev/null and b/assets/wechat.png differ
diff --git a/deploy/templates/tool_chat_template_hermes.jinja b/deploy/templates/tool_chat_template_hermes.jinja
new file mode 100644
index 0000000..0b0902c
--- /dev/null
+++ b/deploy/templates/tool_chat_template_hermes.jinja
@@ -0,0 +1,130 @@
+{%- macro json_to_python_type(json_spec) %}
+ {%- set basic_type_map = {
+ "string": "str",
+ "number": "float",
+ "integer": "int",
+ "boolean": "bool"
+} %}
+
+ {%- if basic_type_map[json_spec.type] is defined %}
+ {{- basic_type_map[json_spec.type] }}
+ {%- elif json_spec.type == "array" %}
+ {{- "list[" + json_to_python_type(json_spec|items) + "]" }}
+ {%- elif json_spec.type == "object" %}
+ {%- if json_spec.additionalProperties is defined %}
+ {{- "dict[str, " + json_to_python_type(json_spec.additionalProperties) + ']' }}
+ {%- else %}
+ {{- "dict" }}
+ {%- endif %}
+ {%- elif json_spec.type is iterable %}
+ {{- "Union[" }}
+ {%- for t in json_spec.type %}
+ {{- json_to_python_type({"type": t}) }}
+ {%- if not loop.last %}
+ {{- "," }}
+ {%- endif %}
+ {%- endfor %}
+ {{- "]" }}
+ {%- else %}
+ {{- "Any" }}
+ {%- endif %}
+{%- endmacro %}
+
+
+{{- bos_token }}
+{{- "<|im_start|>system\nYou are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> " }}
+{%- if tools is iterable and tools | length > 0 %}
+ {%- for tool in tools %}
+ {%- if tool.function is defined %}
+ {%- set tool = tool.function %}
+ {%- endif %}
+ {{- '{"type": "function", "function": ' }}
+ {{- '{"name": "' + tool.name + '", ' }}
+ {{- '"description": "' + tool.name + '(' }}
+ {%- for param_name, param_fields in tool.parameters.properties|items %}
+ {{- param_name + ": " + json_to_python_type(param_fields) }}
+ {%- if not loop.last %}
+ {{- ", " }}
+ {%- endif %}
+ {%- endfor %}
+ {{- ")" }}
+ {%- if tool.return is defined %}
+ {{- " -> " + json_to_python_type(tool.return) }}
+ {%- endif %}
+ {{- " - " + tool.description + "\n\n" }}
+ {%- for param_name, param_fields in tool.parameters.properties|items %}
+ {%- if loop.first %}
+ {{- " Args:\n" }}
+ {%- endif %}
+ {{- " " + param_name + "(" + json_to_python_type(param_fields) + "): " + param_fields.description|trim }}
+ {%- endfor %}
+ {%- if tool.return is defined and tool.return.description is defined %}
+ {{- "\n Returns:\n " + tool.return.description }}
+ {%- endif %}
+ {{- '"' }}
+ {{- ', "parameters": ' }}
+ {%- if tool.parameters.properties | length == 0 %}
+ {{- "{}" }}
+ {%- else %}
+ {{- tool.parameters|tojson }}
+ {%- endif %}
+ {{- "}" }}
+ {%- if not loop.last %}
+ {{- "\n" }}
+ {%- endif %}
+ {%- endfor %}
+{%- endif %}
+{{- " </tools>" }}
+{{- 'Use the following pydantic model json schema for each tool call you will make: {"properties": {"name": {"title": "Name", "type": "string"}, "arguments": {"title": "Arguments", "type": "object"}}, "required": ["name", "arguments"], "title": "FunctionCall", "type": "object"}}
+' }}
+{{- "For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
+" }}
+{{- "<tool_call>
+" }}
+{{- '{"name": <function-name>, "arguments": <args-dict>}
+' }}
+{{- '</tool_call><|im_end|>' }}
+{%- for message in messages %}
+ {%- if message.role == "user" or message.role == "system" or (message.role == "assistant" and message.tool_calls is not defined) %}
+ {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
+ {%- elif message.role == "assistant" and message.tool_calls is defined %}
+ {{- '<|im_start|>' + message.role }}
+ {%- for tool_call in message.tool_calls %}
+        {{- '\n<tool_call>\n' }}
+ {%- if tool_call.function is defined %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '{' }}
+ {{- '"name": "' }}
+ {{- tool_call.name }}
+ {{- '"' }}
+ {%- if tool_call.arguments is defined %}
+ {{- ', ' }}
+ {{- '"arguments": ' }}
+ {{- tool_call.arguments|tojson }}
+ {%- endif %}
+ {{- '}' }}
+        {{- '\n</tool_call>' }}
+ {%- endfor %}
+ {{- '<|im_end|>\n' }}
+ {%- elif message.role == "tool" %}
+ {%- if loop.previtem and loop.previtem.role != "tool" %}
+ {{- '<|im_start|>tool\n' }}
+ {%- endif %}
+        {{- '<tool_response>\n' }}
+ {{- message.content }}
+ {%- if not loop.last %}
+            {{- '\n</tool_response>\n' }}
+ {%- else %}
+            {{- '\n</tool_response>' }}
+ {%- endif %}
+ {%- if not loop.last and loop.nextitem.role != "tool" %}
+ {{- '<|im_end|>' }}
+ {%- elif loop.last %}
+ {{- '<|im_end|>' }}
+ {%- endif %}
+ {%- endif %}
+{%- endfor %}
+{%- if add_generation_prompt %}
+ {{- '<|im_start|>assistant\n' }}
+{%- endif %}
diff --git a/master/agents/agent.py b/master/agents/agent.py
index 04ff612..38190d4 100644
--- a/master/agents/agent.py
+++ b/master/agents/agent.py
@@ -22,6 +22,8 @@ def __init__(self, config_path="config.yaml"):
self.logger.info(f"Configuration loaded from {config_path} ...")
self.logger.info(f"Master Configuration:\n{self.config}")
+
+ self._init_scene(self.config["profile"])
self._start_listener()
def _init_logger(self, logger_config):
@@ -54,9 +56,26 @@ def _init_config(self, config_path="config.yaml"):
with open(config_path, "r", encoding="utf-8") as f:
self.config = yaml.safe_load(f)
+ def _init_scene(self, scene_config):
+ """Initialize scene object"""
+ path = scene_config["path"]
+ if not os.path.exists(path):
+ self.logger.error(f"Scene config file {path} does not exist.")
+ raise FileNotFoundError(f"Scene config file {path} not found.")
+ with open(path, "r", encoding="utf-8") as f:
+ self.scene = yaml.safe_load(f)
+
+ scenes = self.scene.get("scene", [])
+ for scene_info in scenes:
+ scene_name = scene_info.pop("name", None)
+ if scene_name:
+ self.collaborator.record_environment(scene_name, json.dumps(scene_info))
+ else:
+                self.logger.warning(f"Missing 'name' in scene_info: {scene_info}")
+
def _handle_register(self, robot_name: Dict) -> None:
"""Listen for robot registrations."""
- robot_info = self.collaborator.retrieve_agent(robot_name)
+ robot_info = self.collaborator.read_agent_info(robot_name)
self.logger.info(
f"AGENT_REGISTRATION: {robot_name} \n {json.dumps(robot_info)}"
)
@@ -170,8 +189,8 @@ def reasoning_and_subtasks_is_right(self, reasoning_and_subtasks: dict) -> bool:
if isinstance(subtask, dict) and "robot_name" in subtask
}
- # Retrieve list of all registered robots from the collaborator
- robots_list = set(self.collaborator.retrieve_all_agents_name())
+ # Read list of all registered robots from the collaborator
+ robots_list = set(self.collaborator.read_all_agents_name())
# Check if all workers are registered
return worker_list.issubset(robots_list)
diff --git a/master/agents/planner.py b/master/agents/planner.py
index 91b47f6..59d6493 100644
--- a/master/agents/planner.py
+++ b/master/agents/planner.py
@@ -67,11 +67,12 @@ def display_profiling_info(self, description: str, message: any):
def forward(self, task: str) -> str:
"""Get the sub-tasks from the task."""
- all_robots_name = self.collaborator.retrieve_all_agents_name()
- all_robots_info = self.collaborator.retrieve_all_agents()
+ all_robots_name = self.collaborator.read_all_agents_name()
+ all_robots_info = self.collaborator.read_all_agents_info()
+ all_environments_info = self.collaborator.read_environment()
content = MASTER_PLANNING_PLANNING.format(
- robot_name_list=all_robots_name, robot_tools_info=all_robots_info, task=task
+ robot_name_list=all_robots_name, robot_tools_info=all_robots_info, task=task, scene_info=all_environments_info
)
messages = [
diff --git a/master/agents/prompts.py b/master/agents/prompts.py
index 3e89d9d..1431fb3 100644
--- a/master/agents/prompts.py
+++ b/master/agents/prompts.py
@@ -1,24 +1,27 @@
-MASTER_PLANNING_PLANNING = """
-
-Please only use {robot_name_list} with skills {robot_tools_info}.
-Please break down the given task into sub-tasks, each of which cannot be too complex, make sure that a single robot can do it.
-It can't be too simple either, e.g. it can't be a sub-task that can be done by a single step robot tool.
-Each sub-task in the output needs a concise name of the sub-task, which includes the robots that need to complete the sub-task.
-Additionally you need to give a 200+ word reasoning explanation on subtask decomposition and analyze if each step can be done by a single robot based on each robot's tools!
-
-## The output format is as follows, in the form of a JSON structure:
-{{
- "reasoning_explanation": xxx,
- "subtask_list": [
- {{"robot_name": xxx, "subtask": xxx, "subtask_order": xxx}},
- {{"robot_name": xxx, "subtask": xxx, "subtask_order": xxx}},
- {{"robot_name": xxx, "subtask": xxx, "subtask_order": xxx}},
- ]
-}}
-
-## Note: 'subtask_order' means the order of the sub-task.
-If the tasks are not sequential, please set the same 'task_order' for the same task. For example, if two robots are assigned to the two tasks, both of which are independance, they should share the same 'task_order'.
-If the tasks are sequential, the 'task_order' should be set in the order of execution. For example, if the task_2 should be started after task_1, they should have different 'task_order'.
-
-# The task to be completed is: {task}. Your output answer:
-"""
+MASTER_PLANNING_PLANNING = """
+
+Please only use {robot_name_list} with skills {robot_tools_info}.
+You must also consider the following scene information when decomposing the task:
+{scene_info}
+
+Please break down the given task into sub-tasks. Each sub-task must not be too complex: make sure a single robot can complete it.
+It must not be too simple either; for example, it cannot be a sub-task that a robot can finish with a single tool call.
+Each sub-task in the output needs a concise name, which includes the robot that needs to complete the sub-task.
+Additionally, you need to give a 200+ word reasoning explanation of the subtask decomposition, and analyze whether each step can be done by a single robot based on each robot's tools!
+
+## The output format is as follows, in the form of a JSON structure:
+{{
+ "reasoning_explanation": xxx,
+ "subtask_list": [
+ {{"robot_name": xxx, "subtask": xxx, "subtask_order": xxx}},
+ {{"robot_name": xxx, "subtask": xxx, "subtask_order": xxx}},
+ {{"robot_name": xxx, "subtask": xxx, "subtask_order": xxx}},
+ ]
+}}
+
+## Note: 'subtask_order' means the order of the sub-task.
+If the tasks are not sequential, please set the same 'subtask_order' for them. For example, if two robots are assigned to two tasks that are independent of each other, the tasks should share the same 'subtask_order'.
+If the tasks are sequential, the 'subtask_order' should be set in the order of execution. For example, if task_2 should start only after task_1, they should have different 'subtask_order' values.
+
+# The task to be completed is: {task}. Your output answer:
+"""
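A plan in this JSON format can be executed wave by wave: subtasks that share a `subtask_order` run in parallel, and different orders run sequentially. A small sketch of that grouping step (the plan contents are made up; `execution_waves` is not a RoboOS function):

```python
from itertools import groupby

# Hypothetical planner output following the JSON schema in the prompt above.
plan = {
    "reasoning_explanation": "...",
    "subtask_list": [
        {"robot_name": "robot_a", "subtask": "fetch apple", "subtask_order": 1},
        {"robot_name": "robot_b", "subtask": "fetch cup", "subtask_order": 1},
        {"robot_name": "robot_a", "subtask": "place apple in cup", "subtask_order": 2},
    ],
}

def execution_waves(subtasks: list) -> list:
    """Group subtasks into sequential waves; tasks within a wave are independent."""
    ordered = sorted(subtasks, key=lambda s: s["subtask_order"])
    return [
        [s["robot_name"] for s in wave]
        for _, wave in groupby(ordered, key=lambda s: s["subtask_order"])
    ]

waves = execution_waves(plan["subtask_list"])
print(waves)  # [["robot_a", "robot_b"], ["robot_a"]]
```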
diff --git a/master/config.yaml b/master/config.yaml
index 8bdfe76..08945a1 100644
--- a/master/config.yaml
+++ b/master/config.yaml
@@ -69,4 +69,8 @@ logger:
robot_memory_yaml: ".log/robot_memory.yaml"
# Output reasoning context, time cost and other information
-profiling: true
\ No newline at end of file
+profiling: true
+
+# scene profile
+profile:
+ path: ./scene/profile.yaml
\ No newline at end of file
diff --git a/master/run.py b/master/run.py
index 498350c..8908aaf 100644
--- a/master/run.py
+++ b/master/run.py
@@ -48,7 +48,7 @@ def robot_status():
JSON response with robot status
"""
try:
- registered_robots = master_agent.collaborator.retrieve_all_agents()
+ registered_robots = master_agent.collaborator.read_all_agents_info()
registered_robots_status = []
for robot_name, robot_info in registered_robots.items():
registered_robots_status.append(
diff --git a/master/scene/profile.yaml b/master/scene/profile.yaml
new file mode 100644
index 0000000..c9ee442
--- /dev/null
+++ b/master/scene/profile.yaml
@@ -0,0 +1,27 @@
+scene:
+ - name: kitchenTable
+ type: table
+ contains:
+ - apple
+ - pear
+ - banana
+ - knife
+
+ - name: customTable
+ type: table
+ contains:
+ - basket
+ - plate
+ - cup
+
+ - name: servingTable
+ type: table
+ contains:
+ - bowl
+ - fork
+ - spoon
+
+ - name: basket
+ type: container
+ contains:
+ - egg
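When the master loads this profile, `_init_scene` strips each entry's `name` and records the remaining fields in shared memory under that name. The transformation can be sketched over plain data (the `flatten_scene` helper is illustrative; an abridged scene list stands in for `yaml.safe_load` output):

```python
import json

# The scene list as yaml.safe_load would return it for the profile above (abridged).
scene = [
    {"name": "kitchenTable", "type": "table", "contains": ["apple", "pear", "banana", "knife"]},
    {"name": "basket", "type": "container", "contains": ["egg"]},
]

def flatten_scene(entries: list) -> dict:
    """Map each scene name to the JSON blob that would be stored in shared memory."""
    records = {}
    for entry in entries:
        info = dict(entry)            # copy so the caller's data is untouched
        name = info.pop("name", None)
        if name:
            records[name] = json.dumps(info)
        else:
            print("Warning: scene entry without a name:", info)
    return records

records = flatten_scene(scene)
print(sorted(records))  # ['basket', 'kitchenTable']
```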
diff --git a/slaver/agents/models.py b/slaver/agents/models.py
index fbd161a..2fce7b1 100644
--- a/slaver/agents/models.py
+++ b/slaver/agents/models.py
@@ -307,11 +307,19 @@ def __call__(
stop_sequences: Optional[List[str]] = None,
tools_to_call_from: Optional[List[str]] = None,
) -> ChatMessage:
- content = task
+
+ content = (
+ "Rules:\n"
+ "- Only call a tool IF AND ONLY IF the action is required by the task AND has NOT already been completed.\n"
+ "- Do NOT call the same tool multiple times for the same object/location.\n"
+ "- Do NOT make assumptions beyond the task description.\n\n"
+ )
+
+ content += f"Task: {task}\n\n"
if len(current_status) > 0:
- content += "\n Current Status: "
+ content += "Completed Actions:\n"
for current_short_statu in current_status:
- content += f"{current_short_statu} "
+ content += f"- {current_short_statu}\n"
completion_kwargs = {
"messages": [{"role": "user", "content": content}],
"model": model_path,
diff --git a/slaver/agents/slaver_agent.py b/slaver/agents/slaver_agent.py
index 1fe4240..f6d5f69 100644
--- a/slaver/agents/slaver_agent.py
+++ b/slaver/agents/slaver_agent.py
@@ -10,7 +10,7 @@
from mcp import ClientSession
from rich.panel import Panel
from rich.text import Text
-from tools.memory import ActionStep, AgentMemory
+from tools.memory import ActionStep, AgentMemory, SceneMemory
from tools.monitoring import AgentLogger, LogLevel, Monitor
logger = getLogger(__name__)
@@ -51,6 +51,7 @@ def __init__(
self.step_number = 0
self.state = {}
self.memory = AgentMemory()
+        self.scene = SceneMemory(collaborator)
+        self.tool_call = []  # history of tool calls, used by step() to detect immediate repeats
self.logger = AgentLogger(level=verbosity_level, log_file=log_file)
self.monitor = Monitor(self.model, self.logger)
self.step_callbacks = step_callbacks if step_callbacks is not None else []
@@ -160,8 +161,36 @@ async def _execute_tool_call(
f"Observations: {observation.replace('[', '|')}", # escape potential rich-tag-like components
level=LogLevel.INFO,
)
+
+ # Construct memory input
+ memory_input = {
+ "tool_name": tool_name,
+ "arguments": tool_arguments,
+ "result": observation,
+ }
+ try:
+ await self.memory_predict(memory_input)
+ except Exception as e:
+ print(f"[Scene Update Error] `{e}`")
+
return observation
+ async def memory_predict(self, memory_input: dict) -> str:
+ """
+ Use the model to predict the scene-level effect of the current tool execution.
+        Possible effects: add_object, remove_object, position.
+ """
+
+ prompt = self.scene.get_action_type_prompt(memory_input)
+
+ model_message: ChatMessage = self.model(
+ task=prompt, current_status="", model_path=self.model_path
+ )
+
+        action_type = model_message.content.strip().lower()
+
+        # `arguments` may be a JSON string or an already-parsed dict.
+        arguments = memory_input["arguments"]
+        if isinstance(arguments, str):
+            arguments = json.loads(arguments)
+        self.scene.apply_action(action_type, arguments)
+
async def step(self, memory_step: ActionStep) -> Union[None, Any]:
"""
Perform one step in the ReAct framework: the agent thinks, acts, and observes the result.
@@ -195,4 +224,11 @@ async def step(self, memory_step: ActionStep) -> Union[None, Any]:
else:
return "final_answer"
+ current_call = {"tool_name": tool_name, "tool_arguments": tool_arguments}
+
+ if self.tool_call and self.tool_call[-1] == current_call:
+ return "final_answer"
+ else:
+ self.tool_call.append(current_call)
+
return await self._execute_tool_call(tool_name, tool_arguments, memory_step)
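The duplicate-call guard added above ends the run when the model immediately repeats its previous tool call. In isolation the check behaves like this (names are illustrative):

```python
def should_stop(history: list, tool_name: str, tool_arguments: dict) -> bool:
    """Return True when the model immediately repeats its previous tool call."""
    current_call = {"tool_name": tool_name, "tool_arguments": tool_arguments}
    if history and history[-1] == current_call:
        return True
    history.append(current_call)
    return False

history = []
assert not should_stop(history, "navigate", {"target": "kitchenTable"})
assert not should_stop(history, "grasp", {"object": "apple"})
assert should_stop(history, "grasp", {"object": "apple"})     # exact repeat -> stop
assert not should_stop(history, "grasp", {"object": "pear"})  # different args -> continue
```

Note that only an *immediate* repeat triggers the stop; calling the same tool again later with different arguments is still allowed.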
diff --git a/slaver/config.yaml b/slaver/config.yaml
index ce28432..dfeaaed 100644
--- a/slaver/config.yaml
+++ b/slaver/config.yaml
@@ -1,6 +1,6 @@
tool:
# Has the model undergone targeted training on tool_calls
- support_tool_calls: true
+ support_tool_calls: false
# Cloud Server Infos
model:
diff --git a/slaver/run.py b/slaver/run.py
index ee9dc29..eebe6be 100644
--- a/slaver/run.py
+++ b/slaver/run.py
@@ -182,6 +182,11 @@ async def connect_to_robot(self):
await self.session.initialize()
+ # init robot
+ self.collaborator.record_environment(
+ "robot", json.dumps({"position": None, "holding": None, "status": "idle"})
+ )
+
# List available tools
response = await self.session.list_tools()
self.tools = [
diff --git a/slaver/tools/memory.py b/slaver/tools/memory.py
index 7c4b47b..e564ffe 100644
--- a/slaver/tools/memory.py
+++ b/slaver/tools/memory.py
@@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+import json
from dataclasses import asdict, dataclass
from logging import getLogger
from typing import TYPE_CHECKING, Any, Dict, List, Optional, TypedDict, Union
@@ -185,4 +186,131 @@ def replay(self, logger: AgentLogger, detailed: bool = False):
)
-__all__ = ["AgentMemory"]
+class SceneMemory:
+ def __init__(self, collaborator):
+ self.collaborator = collaborator
+
+ def add_object(self, target: str):
+ robot_info = self.collaborator.read_environment("robot")
+ if not robot_info:
+ print("[Error] robot_info not found")
+ return
+
+ position = robot_info.get("position")
+ holding = robot_info.get("holding")
+
+ if holding != target:
+ print(f"[Warning] Robot is not holding '{target}', but holding '{holding}'")
+ return
+
+ scene_obj = self.collaborator.read_environment(position)
+ if not scene_obj:
+ print(f"[Error] Scene object at position '{position}' not found")
+ return
+
+ contains = scene_obj.get("contains", [])
+ if target not in contains:
+ contains.append(target)
+ scene_obj["contains"] = contains
+
+ robot_info["holding"] = None
+
+ self.collaborator.record_environment("robot", json.dumps(robot_info))
+ self.collaborator.record_environment(position, json.dumps(scene_obj))
+
+ def remove_object(self, target: str):
+ robot_info = self.collaborator.read_environment("robot")
+ if not robot_info:
+ print("[Error] robot_info not found")
+ return
+
+ position = robot_info.get("position")
+ scene_obj = self.collaborator.read_environment(position)
+ if not scene_obj:
+ print(f"[Error] Scene object at position '{position}' not found")
+ return
+
+ contains = scene_obj.get("contains", [])
+ if target not in contains:
+ print(f"[Warning] Object '{target}' not found in '{position}'")
+ return
+
+ contains.remove(target)
+ scene_obj["contains"] = contains
+ robot_info["holding"] = target
+
+ self.collaborator.record_environment("robot", json.dumps(robot_info))
+ self.collaborator.record_environment(position, json.dumps(scene_obj))
+
+ def move_to(self, target: str):
+ robot_info = self.collaborator.read_environment("robot")
+ if not robot_info:
+ print("[Error] robot_info not found")
+ return
+
+ robot_info["position"] = target
+ success = self.collaborator.record_environment("robot", json.dumps(robot_info))
+ if not success:
+ print(f"[Error] Failed to update robot position to '{target}'")
+
+ def apply_action(self, action_type: str, args: dict):
+ """
+ Apply scene update based on action_type: 'add_object', 'remove_object', or 'position'
+ """
+ print(f"[Scene Update] Applying `{action_type}` with args {args}")
+ try:
+ if "remove_object" in action_type:
+ target = args.get("object")
+ if target:
+ self.remove_object(target)
+ else:
+ print("[Scene Update] Missing `object` for remove_object")
+
+ elif "add_object" in action_type:
+ target = args.get("object")
+ if target:
+ self.add_object(target)
+ else:
+ print("[Scene Update] Missing `object` for add_object")
+
+ elif "position" in action_type:
+ target = args.get("target")
+ if target:
+ self.move_to(target)
+ else:
+ print("[Scene Update] Missing `target` for position")
+
+ else:
+ print(f"[Scene Update] Unknown action `{action_type}`")
+ except Exception as e:
+ print(f"[Scene Update] Error applying action `{action_type}`: {e}")
+
+ @staticmethod
+ def get_action_type_prompt(memory_input: Dict) -> str:
+ return f"""
+You are a robot task planner responsible for updating a symbolic scene memory.
+
+Each tool the robot calls has a side effect on the world, which can be one of the following **scene-level action types**:
+
+- `add_object`: An object that was previously not in the environment (e.g., held by the robot) is placed back into the environment, like placing an apple into a basket.
+- `remove_object`: An object is taken out of the environment (e.g., from a table) and held by the robot, such as grasping or picking up something.
+- `position`: The environment is not changed; the robot itself may move (e.g., navigation), but no object is added, removed, or moved.
+
+---
+
+Given the following tool execution, predict what scene-level action type this tool represents.
+
+Tool name: {memory_input['tool_name']}
+Arguments: {json.dumps(memory_input['arguments'], ensure_ascii=False)}
+Result: {memory_input['result']}
+
+---
+
+Answer strictly with one of the following:
+[add_object, remove_object, position]
+
+Answer with only one action type from the list above. Do not include any explanation.
+"""
+
+
+__all__ = ["AgentMemory", "SceneMemory"]
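The `SceneMemory` updates boil down to moving object names between a scene entry's `contains` list and the robot's `holding` slot, with everything serialized as JSON in shared memory. A self-contained sketch of the `remove_object` bookkeeping against a fake in-memory collaborator (`FakeCollaborator` is a stand-in for the Redis-backed one):

```python
import json

class FakeCollaborator:
    """In-memory stand-in for the Redis-backed collaborator (illustrative only)."""
    def __init__(self):
        self.store = {}
    def record_environment(self, key, value):
        self.store[key] = value
        return True
    def read_environment(self, key):
        raw = self.store.get(key)
        return json.loads(raw) if raw else None

collab = FakeCollaborator()
collab.record_environment("robot", json.dumps({"position": "kitchenTable", "holding": None}))
collab.record_environment("kitchenTable", json.dumps({"type": "table", "contains": ["apple"]}))

# Mirror SceneMemory.remove_object: pick the apple up off the table.
robot = collab.read_environment("robot")
table = collab.read_environment(robot["position"])
table["contains"].remove("apple")
robot["holding"] = "apple"
collab.record_environment("robot", json.dumps(robot))
collab.record_environment("kitchenTable", json.dumps(table))

assert collab.read_environment("kitchenTable")["contains"] == []
assert collab.read_environment("robot")["holding"] == "apple"
```

`add_object` is the inverse: it appends the held object to `contains` at the robot's current position and clears `holding`.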
diff --git a/slaver/tools/monitoring.py b/slaver/tools/monitoring.py
index de2e1fe..d723068 100644
--- a/slaver/tools/monitoring.py
+++ b/slaver/tools/monitoring.py
@@ -1,325 +1,325 @@
-#!/usr/bin/env python
-# coding=utf-8
-
-###############################################################
-# Copyright 2025 BAAI. All rights reserved.
-###############################################################
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import json
-import logging
-from enum import IntEnum
-from typing import List, Optional, Union
-
-from rich import box
-from rich.console import Console, Group
-from rich.panel import Panel
-from rich.rule import Rule
-from rich.syntax import Syntax
-from rich.table import Table
-from rich.text import Text
-from rich.tree import Tree
-from tools.utils import escape_code_brackets
-
-__all__ = ["AgentLogger", "LogLevel", "Monitor"]
-
-
-class Monitor:
- def __init__(self, tracked_model, logger):
- self.step_durations = []
- self.tracked_model = tracked_model
- self.logger = logger
- if (
- getattr(self.tracked_model, "last_input_token_count", "Not found")
- != "Not found"
- ):
- self.total_input_token_count = 0
- self.total_output_token_count = 0
-
- def get_total_token_counts(self):
- return {
- "input": self.total_input_token_count,
- "output": self.total_output_token_count,
- }
-
- def reset(self):
- self.step_durations = []
- self.total_input_token_count = 0
- self.total_output_token_count = 0
-
- def update_metrics(self, step_log):
- """Update the metrics of the monitor.
-
- Args:
- step_log ([`MemoryStep`]): Step log to update the monitor with.
- """
- step_duration = step_log.duration
- self.step_durations.append(step_duration)
- console_outputs = (
- f"[Step {len(self.step_durations)}: Duration {step_duration:.2f} seconds"
- )
-
- if getattr(self.tracked_model, "last_input_token_count", None) is not None:
- self.total_input_token_count += self.tracked_model.last_input_token_count
- self.total_output_token_count += self.tracked_model.last_output_token_count
- console_outputs += f"| Input tokens: {self.total_input_token_count:,} | Output tokens: {self.total_output_token_count:,}"
- console_outputs += "]"
- self.logger.log(Text(console_outputs, style="dim"), level=1)
-
-
-class LogLevel(IntEnum):
- OFF = -1 # No output
- ERROR = 0 # Only errors
- INFO = 1 # Normal output (default)
- DEBUG = 2 # Detailed output
-
-
-YELLOW_HEX = "#d4b702"
-
-
-class AgentLogger:
- def __init__(self, level: LogLevel = LogLevel.INFO, log_file: str = "agent.log"):
- """
- Initialize the logger.
-
- Args:
- level (LogLevel): Logging level. Defaults to LogLevel.INFO.
- log_file (str): Path to the log file. Defaults to "agent.log".
- """
- self.level = level
- self.console = Console()
-
- # Configure file logging
- if log_file:
- self.log_file = log_file
- self._init_file_logger(log_file)
-
- def _init_file_logger(self, log_file):
- """Initialize the file logger."""
- self.file_logger = logging.getLogger(log_file.split(".")[-2])
- self.file_logger.setLevel(logging.DEBUG) # File logger captures all levels
-
- # Create a file handler
- file_handler = logging.FileHandler(self.log_file)
- file_handler.setLevel(logging.DEBUG)
-
- # Set log format
- formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
- file_handler.setFormatter(formatter)
-
- # Add the handler to the logger
- self.file_logger.addHandler(file_handler)
-
- def log(self, *args, level: Union[str, LogLevel] = LogLevel.INFO, **kwargs) -> None:
- """Logs a message to the console.
-
- Args:
- level (LogLevel, optional): Defaults to LogLevel.INFO.
- """
- if isinstance(level, str):
- level = LogLevel[level.upper()]
- if level <= self.level:
- self.console.print(*args, **kwargs)
-
- def log2file(self, log_message, level: LogLevel = LogLevel.INFO) -> None:
- """Logs a message to the file logger."""
- if hasattr(self, "file_logger"):
- if level == LogLevel.DEBUG:
- self.file_logger.debug(log_message)
- elif level == LogLevel.INFO:
- self.file_logger.info(log_message)
- elif level == LogLevel.ERROR:
- self.file_logger.error(log_message)
-
- def log_error(self, error_message: str) -> None:
- self.log(
- escape_code_brackets(error_message), style="bold red", level=LogLevel.ERROR
- )
- self.log2file(error_message, level=LogLevel.ERROR)
-
- def log_markdown(
- self,
- content: str,
- title: Optional[str] = None,
- level=LogLevel.INFO,
- style=YELLOW_HEX,
- ) -> None:
- markdown_content = Syntax(
- content,
- lexer="markdown",
- theme="github-dark",
- word_wrap=True,
- )
- if title:
- self.log(
- Group(
- Rule(
- "[bold italic]" + title,
- align="left",
- style=style,
- ),
- markdown_content,
- ),
- level=level,
- )
- # self.log2file("\n=====> Markdown <=====\n" + title + "\n" + content.split("'function':")[-1].split("function=")[-1] + "\n=====> Markdown <=====", level=level)
- else:
- self.log(markdown_content, level=level)
-
- self.log2file(
- "\n=====> Markdown <=====\n"
- + content.split("'function':")[-1].split("function=")[-1]
- + "\n=====> Markdown <=====",
- level=level,
- )
-
- def log_code(self, title: str, content: str, level: int = LogLevel.INFO) -> None:
- self.log(
- Panel(
- Syntax(
- content,
- lexer="python",
- theme="monokai",
- word_wrap=True,
- ),
- title="[bold]" + title,
- title_align="left",
- box=box.HORIZONTALS,
- ),
- level=level,
- )
- self.log2file(
- "\n=====> Code <=====\n" + title + "\n" + content + "\n=====> Code <=====",
- level=level,
- )
-
- def log_rule(self, title: str, level: int = LogLevel.INFO) -> None:
- self.log(
- Rule(
- "[bold]" + title,
- characters="━",
- style=YELLOW_HEX,
- ),
- level=LogLevel.INFO,
- )
- self.log2file(
- "\n=====> Step <=====\n" + title + "\n=====> Step <=====", level=level
- )
-
- def log_task(
- self,
- content: str,
- subtitle: str,
- title: Optional[str] = None,
- level: int = LogLevel.INFO,
- ) -> None:
- self.log(
- Panel(
- f"\n[bold]{escape_code_brackets(content)}\n",
- title="[bold]New run" + (f" - {title}" if title else ""),
- subtitle=subtitle,
- border_style=YELLOW_HEX,
- subtitle_align="left",
- ),
- level=level,
- )
- if title and subtitle:
- self.log2file(
- "\n=====> Task <=====\n"
- + title
- + " -> "
- + subtitle
- + "\n"
- + content
- + "\n=====> Task <=====",
- level=level,
- )
- elif title:
- self.log2file(
- "\n=====> Task <=====\n"
- + title
- + "\n"
- + content
- + "\n=====> Task <=====",
- level=level,
- )
- else:
- self.log2file(
- "\n=====> Task <=====\n" + content + "\n=====> Task <=====", level=level
- )
-
- def log_messages(self, messages: List) -> None:
- messages_as_string = "\n".join(
- [json.dumps(dict(message), indent=4) for message in messages]
- )
- self.log(
- Syntax(
- messages_as_string,
- lexer="markdown",
- theme="github-dark",
- word_wrap=True,
- )
- )
- self.log2file(
- "\n=====> Message <=====\n" + messages_as_string + "\n=====> Message <====="
- )
-
- def visualize_agent_tree(self, agent):
- def create_tools_section(tools_dict):
- table = Table(show_header=True, header_style="bold")
- table.add_column("Name", style="#1E90FF")
- table.add_column("Description")
- table.add_column("Arguments")
-
- for name, tool in tools_dict.items():
- args = [
- f"{arg_name} (`{info.get('type', 'Any')}`{', optional' if info.get('optional') else ''}): {info.get('description', '')}"
- for arg_name, info in getattr(tool, "inputs", {}).items()
- ]
- table.add_row(
- name, getattr(tool, "description", str(tool)), "\n".join(args)
- )
-
- return Group("🛠️ [italic #1E90FF]Tools:[/italic #1E90FF]", table)
-
- def get_agent_headline(agent, name: Optional[str] = None):
- name_headline = f"{name} | " if name else ""
- return f"[bold {YELLOW_HEX}]{name_headline}{agent.__class__.__name__} | {agent.model.model_id}"
-
- def build_agent_tree(parent_tree, agent_obj):
- """Recursively builds the agent tree."""
- parent_tree.add(create_tools_section(agent_obj.tools))
-
- if agent_obj.managed_agents:
- agents_branch = parent_tree.add("🤖 [italic #1E90FF]Managed agents:")
- for name, managed_agent in agent_obj.managed_agents.items():
- agent_tree = agents_branch.add(
- get_agent_headline(managed_agent, name)
- )
- if managed_agent.__class__.__name__ == "CodeAgent":
- agent_tree.add(
- f"✅ [italic #1E90FF]Authorized imports:[/italic #1E90FF] {managed_agent.additional_authorized_imports}"
- )
- agent_tree.add(
- f"📝 [italic #1E90FF]Description:[/italic #1E90FF] {managed_agent.description}"
- )
- build_agent_tree(agent_tree, managed_agent)
-
- main_tree = Tree(get_agent_headline(agent))
- if agent.__class__.__name__ == "CodeAgent":
- main_tree.add(
- f"✅ [italic #1E90FF]Authorized imports:[/italic #1E90FF] {agent.additional_authorized_imports}"
- )
- build_agent_tree(main_tree, agent)
- self.console.print(main_tree)
+#!/usr/bin/env python
+# coding=utf-8
+
+###############################################################
+# Copyright 2025 BAAI. All rights reserved.
+###############################################################
+# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import json
+import logging
+from enum import IntEnum
+from typing import List, Optional, Union
+
+from rich import box
+from rich.console import Console, Group
+from rich.panel import Panel
+from rich.rule import Rule
+from rich.syntax import Syntax
+from rich.table import Table
+from rich.text import Text
+from rich.tree import Tree
+from tools.utils import escape_code_brackets
+
+__all__ = ["AgentLogger", "LogLevel", "Monitor"]
+
+
+class Monitor:
+ def __init__(self, tracked_model, logger):
+ self.step_durations = []
+ self.tracked_model = tracked_model
+ self.logger = logger
+ # Initialize token counters unconditionally so get_total_token_counts()
+ # and reset() never raise AttributeError when the tracked model does not
+ # report token counts.
+ self.total_input_token_count = 0
+ self.total_output_token_count = 0
+
+ def get_total_token_counts(self):
+ return {
+ "input": self.total_input_token_count,
+ "output": self.total_output_token_count,
+ }
+
+ def reset(self):
+ self.step_durations = []
+ self.total_input_token_count = 0
+ self.total_output_token_count = 0
+
+ def update_metrics(self, step_log):
+ """Update the metrics of the monitor.
+
+ Args:
+ step_log ([`MemoryStep`]): Step log to update the monitor with.
+ """
+ step_duration = step_log.duration
+ self.step_durations.append(step_duration)
+ console_outputs = (
+ f"[Step {len(self.step_durations)}: Duration {step_duration:.2f} seconds"
+ )
+
+ if getattr(self.tracked_model, "last_input_token_count", None) is not None:
+ self.total_input_token_count += self.tracked_model.last_input_token_count
+ self.total_output_token_count += self.tracked_model.last_output_token_count
+ console_outputs += f" | Input tokens: {self.total_input_token_count:,} | Output tokens: {self.total_output_token_count:,}"
+ console_outputs += "]"
+ self.logger.log(Text(console_outputs, style="dim"), level=1)
+
+
+class LogLevel(IntEnum):
+ OFF = -1 # No output
+ ERROR = 0 # Only errors
+ INFO = 1 # Normal output (default)
+ DEBUG = 2 # Detailed output
+
+
+YELLOW_HEX = "#d4b702"
+
+
+class AgentLogger:
+ def __init__(self, level: LogLevel = LogLevel.INFO, log_file: str = "agent.log"):
+ """
+ Initialize the logger.
+
+ Args:
+ level (LogLevel): Logging level. Defaults to LogLevel.INFO.
+ log_file (str): Path to the log file. Defaults to "agent.log".
+ """
+ self.level = level
+ self.console = Console()
+
+ # Configure file logging
+ if log_file:
+ self.log_file = log_file
+ self._init_file_logger(log_file)
+
+ def _init_file_logger(self, log_file):
+ """Initialize the file logger."""
+ # Name the logger after the file stem; rsplit avoids an IndexError
+ # when the path has no extension.
+ self.file_logger = logging.getLogger(log_file.rsplit(".", 1)[0])
+ self.file_logger.setLevel(logging.DEBUG) # File logger captures all levels
+
+ # Create a file handler, unless one is already attached (repeated
+ # initialization would otherwise duplicate every log line)
+ if not self.file_logger.handlers:
+ file_handler = logging.FileHandler(self.log_file)
+ file_handler.setLevel(logging.DEBUG)
+
+ # Set log format
+ formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
+ file_handler.setFormatter(formatter)
+
+ # Add the handler to the logger
+ self.file_logger.addHandler(file_handler)
+
+ def log(self, *args, level: Union[str, LogLevel] = LogLevel.INFO, **kwargs) -> None:
+ """Logs a message to the console.
+
+ Args:
+ level (str | LogLevel, optional): Log level; a string such as "debug"
+ is mapped to the matching LogLevel. Defaults to LogLevel.INFO.
+ """
+ if isinstance(level, str):
+ level = LogLevel[level.upper()]
+ if level <= self.level:
+ self.console.print(*args, **kwargs)
+
+ def log2file(self, log_message, level: LogLevel = LogLevel.INFO) -> None:
+ """Logs a message to the file logger."""
+ if hasattr(self, "file_logger"):
+ if level == LogLevel.DEBUG:
+ self.file_logger.debug(log_message)
+ elif level == LogLevel.INFO:
+ self.file_logger.info(log_message)
+ elif level == LogLevel.ERROR:
+ self.file_logger.error(log_message)
+
+ def log_error(self, error_message: str) -> None:
+ self.log(
+ escape_code_brackets(error_message), style="bold red", level=LogLevel.ERROR
+ )
+ self.log2file(error_message, level=LogLevel.ERROR)
+
+ def log_markdown(
+ self,
+ content: str,
+ title: Optional[str] = None,
+ level=LogLevel.INFO,
+ style=YELLOW_HEX,
+ ) -> None:
+ markdown_content = Syntax(
+ content,
+ lexer="markdown",
+ theme="github-dark",
+ word_wrap=True,
+ )
+ if title:
+ self.log(
+ Group(
+ Rule(
+ "[bold italic]" + title,
+ align="left",
+ style=style,
+ ),
+ markdown_content,
+ ),
+ level=level,
+ )
+ else:
+ self.log(markdown_content, level=level)
+
+ self.log2file(
+ "\n=====> Markdown <=====\n"
+ + content.split("'function':")[-1].split("function=")[-1]
+ + "\n=====> Markdown <=====",
+ level=level,
+ )
+
+ def log_code(self, title: str, content: str, level: int = LogLevel.INFO) -> None:
+ self.log(
+ Panel(
+ Syntax(
+ content,
+ lexer="python",
+ theme="monokai",
+ word_wrap=True,
+ ),
+ title="[bold]" + title,
+ title_align="left",
+ box=box.HORIZONTALS,
+ ),
+ level=level,
+ )
+ self.log2file(
+ "\n=====> Code <=====\n" + title + "\n" + content + "\n=====> Code <=====",
+ level=level,
+ )
+
+ def log_rule(self, title: str, level: int = LogLevel.INFO) -> None:
+ self.log(
+ Rule(
+ "[bold]" + title,
+ characters="━",
+ style=YELLOW_HEX,
+ ),
+ level=level,
+ )
+ self.log2file(
+ "\n=====> Step <=====\n" + title + "\n=====> Step <=====", level=level
+ )
+
+ def log_task(
+ self,
+ content: str,
+ subtitle: str,
+ title: Optional[str] = None,
+ level: int = LogLevel.INFO,
+ ) -> None:
+ self.log(
+ Panel(
+ f"\n[bold]{escape_code_brackets(content)}\n",
+ title="[bold]New run" + (f" - {title}" if title else ""),
+ subtitle=subtitle,
+ border_style=YELLOW_HEX,
+ subtitle_align="left",
+ ),
+ level=level,
+ )
+ if title and subtitle:
+ self.log2file(
+ "\n=====> Task <=====\n"
+ + title
+ + " -> "
+ + subtitle
+ + "\n"
+ + content
+ + "\n=====> Task <=====",
+ level=level,
+ )
+ elif title:
+ self.log2file(
+ "\n=====> Task <=====\n"
+ + title
+ + "\n"
+ + content
+ + "\n=====> Task <=====",
+ level=level,
+ )
+ else:
+ self.log2file(
+ "\n=====> Task <=====\n" + content + "\n=====> Task <=====", level=level
+ )
+
+ def log_messages(self, messages: List) -> None:
+ messages_as_string = "\n".join(
+ [json.dumps(dict(message), indent=4) for message in messages]
+ )
+ self.log(
+ Syntax(
+ messages_as_string,
+ lexer="markdown",
+ theme="github-dark",
+ word_wrap=True,
+ )
+ )
+ self.log2file(
+ "\n=====> Message <=====\n" + messages_as_string + "\n=====> Message <====="
+ )
+
+ def visualize_agent_tree(self, agent):
+ def create_tools_section(tools_dict):
+ table = Table(show_header=True, header_style="bold")
+ table.add_column("Name", style="#1E90FF")
+ table.add_column("Description")
+ table.add_column("Arguments")
+
+ for name, tool in tools_dict.items():
+ args = [
+ f"{arg_name} (`{info.get('type', 'Any')}`{', optional' if info.get('optional') else ''}): {info.get('description', '')}"
+ for arg_name, info in getattr(tool, "inputs", {}).items()
+ ]
+ table.add_row(
+ name, getattr(tool, "description", str(tool)), "\n".join(args)
+ )
+
+ return Group("🛠️ [italic #1E90FF]Tools:[/italic #1E90FF]", table)
+
+ def get_agent_headline(agent, name: Optional[str] = None):
+ name_headline = f"{name} | " if name else ""
+ return f"[bold {YELLOW_HEX}]{name_headline}{agent.__class__.__name__} | {agent.model.model_id}"
+
+ def build_agent_tree(parent_tree, agent_obj):
+ """Recursively builds the agent tree."""
+ parent_tree.add(create_tools_section(agent_obj.tools))
+
+ if agent_obj.managed_agents:
+ agents_branch = parent_tree.add("🤖 [italic #1E90FF]Managed agents:")
+ for name, managed_agent in agent_obj.managed_agents.items():
+ agent_tree = agents_branch.add(
+ get_agent_headline(managed_agent, name)
+ )
+ if managed_agent.__class__.__name__ == "CodeAgent":
+ agent_tree.add(
+ f"✅ [italic #1E90FF]Authorized imports:[/italic #1E90FF] {managed_agent.additional_authorized_imports}"
+ )
+ agent_tree.add(
+ f"📝 [italic #1E90FF]Description:[/italic #1E90FF] {managed_agent.description}"
+ )
+ build_agent_tree(agent_tree, managed_agent)
+
+ main_tree = Tree(get_agent_headline(agent))
+ if agent.__class__.__name__ == "CodeAgent":
+ main_tree.add(
+ f"✅ [italic #1E90FF]Authorized imports:[/italic #1E90FF] {agent.additional_authorized_imports}"
+ )
+ build_agent_tree(main_tree, agent)
+ self.console.print(main_tree)
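
`AgentLogger.log` above gates console output by comparing a message's level against the logger's threshold (lower value = more severe), and accepts level names as strings. A minimal sketch of that gating, stripped of `rich` and file handling — `MiniLogger` and its attribute names are illustrative, not part of the codebase:

```python
from enum import IntEnum


class LogLevel(IntEnum):
    OFF = -1   # No output
    ERROR = 0  # Only errors
    INFO = 1   # Normal output (default)
    DEBUG = 2  # Detailed output


class MiniLogger:
    """Minimal sketch of AgentLogger's console gating (no rich, no file handler)."""

    def __init__(self, level: LogLevel = LogLevel.INFO):
        self.level = level
        self.emitted = []  # stands in for console output

    def log(self, message, level=LogLevel.INFO):
        # Like AgentLogger.log, accept a level name as a string.
        if isinstance(level, str):
            level = LogLevel[level.upper()]
        # A message is shown only if its level does not exceed the threshold.
        if level <= self.level:
            self.emitted.append(message)


logger = MiniLogger(level=LogLevel.INFO)
logger.log("something went wrong", level=LogLevel.ERROR)  # shown: ERROR(0) <= INFO(1)
logger.log("step 1 done")                                 # shown: INFO(1) <= INFO(1)
logger.log("token dump", level="debug")                   # suppressed: DEBUG(2) > INFO(1)
print(logger.emitted)  # ['something went wrong', 'step 1 done']
```

Because `LogLevel` is an `IntEnum`, the threshold comparison is plain integer ordering, and `LogLevel["DEBUG"]` handles the string form.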
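
`Monitor.update_metrics` accumulates per-step durations and running token totals from whatever model it tracks. A condensed sketch of that accumulation under a stand-in model — `FakeModel` and `MiniMonitor` are hypothetical names for illustration, not the real classes:

```python
from dataclasses import dataclass


@dataclass
class FakeModel:
    """Stand-in for a tracked model that reports per-call token counts."""
    last_input_token_count: int = 0
    last_output_token_count: int = 0


class MiniMonitor:
    """Minimal sketch of Monitor's metric accumulation, without console output."""

    def __init__(self, tracked_model):
        self.tracked_model = tracked_model
        self.step_durations = []
        self.total_input_token_count = 0
        self.total_output_token_count = 0

    def update_metrics(self, duration):
        self.step_durations.append(duration)
        # Accumulate only when the model actually reports token counts.
        if getattr(self.tracked_model, "last_input_token_count", None) is not None:
            self.total_input_token_count += self.tracked_model.last_input_token_count
            self.total_output_token_count += self.tracked_model.last_output_token_count


model = FakeModel(last_input_token_count=120, last_output_token_count=40)
monitor = MiniMonitor(model)
monitor.update_metrics(0.8)  # step 1
monitor.update_metrics(1.2)  # step 2
print(monitor.total_input_token_count, monitor.total_output_token_count)  # 240 80
print(len(monitor.step_durations))  # 2
```

The `getattr` guard mirrors the real class: models that do not expose token counts still get duration tracking, just not token totals.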