Add NeurIPS2022 competition agent to agent zoo #1838

Merged: 21 commits, Feb 10, 2023
Changes from 10 commits
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -10,6 +10,7 @@ Copy and pasting the git commit messages is __NOT__ enough.

## [Unreleased]
### Added
- Added a zoo agent, named Interaction-aware Motion Prediction, contributed as a NeurIPS 2022 competition submission. This zoo agent runs in the benchmark `driving_smarts==0.0`.
### Changed
### Deprecated
### Fixed
@@ -0,0 +1,7 @@
agent:
  interface:
    action_space: "TargetPose" # Only `RelativeTargetPose` and `TargetPose` are allowed.
    img_meters: 50 # Observation image area size in meters.
    img_pixels: 112 # Observation image size in pixels.
    observation_options: "unformatted"
  locator: "zoo.policies:interaction-aware-motion-prediction-agent-v0"
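For context, here is a sketch of how a benchmark runner might consume such an agent-config file. The schema mirrors the snippet above, but `load_agent_config` is a hypothetical helper, not the actual SMARTS loader; parsing uses PyYAML.

```python
# Hypothetical helper: parse an agent-config YAML like the one above and
# return the `agent` section a benchmark runner would need. The schema
# mirrors the snippet; this is not the actual SMARTS loader.
import yaml

CONFIG_TEXT = """
agent:
  interface:
    action_space: "TargetPose"
    img_meters: 50
    img_pixels: 112
    observation_options: "unformatted"
  locator: "zoo.policies:interaction-aware-motion-prediction-agent-v0"
"""

def load_agent_config(text: str) -> dict:
    """Return the agent section: a dict with `interface` and `locator` keys."""
    return yaml.safe_load(text)["agent"]

agent_config = load_agent_config(CONFIG_TEXT)
```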
22 changes: 11 additions & 11 deletions docs/benchmarks/benchmark.rst
@@ -1,7 +1,7 @@
.. _benchmark:

-Driving SMARTS Competition Benchmark
-====================================
+Driving SMARTS Competition
+==========================

The Driving SMARTS competition benchmark is a benchmark derived from the
NeurIPS 2022 Driving SMARTS Competition.
@@ -30,7 +30,7 @@ available.

.. code:: bash

-   > scl benchmark list
+   $ scl benchmark list
BENCHMARK_NAME       BENCHMARK_ID     VERSIONS
- Driving SMARTS:    driving_smarts   0.0 0.1

@@ -133,7 +133,7 @@ file.

.. code:: bash

-   > scl benchmark run driving_smarts "./baselines/driving_smarts/v0/agent_config.yaml" --auto-install # --auto-install only needs to be used to get dependencies.
+   $ scl benchmark run driving_smarts "./baselines/driving_smarts/v0/agent_config.yaml" --auto-install # --auto-install only needs to be used to get dependencies.
Starting `Driving SMARTS V1` benchmark.
This is a cleaned up version of the Driving SMARTS benchmark.

@@ -151,17 +151,17 @@ file.

`Driving SMARTS V0` result:
- completion: 1
-   - humanness: 0.2
-   - rules: 0.8
-   - time: 0.3
-   - overall: 1.7
+   - humanness: 0.7
+   - rules: 0.9
+   - time: 0.2
+   - overall: 0.504

A particular version of a benchmark can be targeted using a modified
syntax ``benchmark_name==version``:

.. code:: bash

-   > scl benchmark run driving_smarts==0.0 ...
+   $ scl benchmark run driving_smarts==0.0 "./baselines/driving_smarts/v0/agent_config.yaml" --auto-install
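The ``benchmark_name==version`` target reads like a pip-style specifier. A small sketch of how such a string splits; this is illustrative only, not the actual scl implementation:

```python
# Illustrative parser for a `benchmark_name==version` target string, in the
# spirit of pip requirement specifiers. Not the actual scl implementation.
from typing import Optional, Tuple

def parse_benchmark_target(target: str) -> Tuple[str, Optional[str]]:
    """Split 'driving_smarts==0.0' into ('driving_smarts', '0.0').
    A bare name yields (name, None), i.e. 'use the default version'."""
    name, sep, version = target.partition("==")
    return name, (version if sep else None)
```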

Advanced Configuration
----------------------
@@ -178,7 +178,7 @@ what benchmarks are currently available. This can be passed using

.. code:: bash

-   > scl benchmark run --benchmark-listing benchmark_listing.yaml driving_smarts "./baselines/driving_smarts/v0/agent_config.yaml"
+   $ scl benchmark run --benchmark-listing benchmark_listing.yaml driving_smarts "./baselines/driving_smarts/v0/agent_config.yaml"

WARNING! Since this listing directs ``scl benchmark run`` to a code
``entrypoint``, do not use it with a listing file from an unknown
@@ -192,7 +192,7 @@ file.

.. code:: bash

-   > scl benchmark list --benchmark-listing benchmark_listing.yaml
+   $ scl benchmark list --benchmark-listing benchmark_listing.yaml

Listing File
^^^^^^^^^^^^
7 changes: 5 additions & 2 deletions docs/conf.py
@@ -46,11 +46,11 @@

extlinks = {
"examples": (
-        "https://github.com/huawei-noah/SMARTS/blob/develop/examples/%s",
+        "https://github.com/huawei-noah/SMARTS/blob/master/examples/%s",
"%s",
),
"scenarios": (
-        "https://github.com/huawei-noah/SMARTS/blob/develop/scenarios/%s",
+        "https://github.com/huawei-noah/SMARTS/blob/master/scenarios/%s",
"%s",
),
}
@@ -65,6 +65,7 @@
"setup.py",
"scenarios",
"smarts/ros",
"zoo/policies/interaction_aware_motion_prediction",
]
apidoc_extra_args = [
"--force",
@@ -77,6 +78,7 @@
autodoc_mock_imports = [
"cpuinfo",
"cv2",
"gymnasium",
"lxml",
"mdutils",
"moviepy",
@@ -86,6 +88,7 @@
"ray",
"tabulate",
"tools",
"torch",
"waymo_open_dataset",
]
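The ``autodoc_mock_imports`` additions above (``gymnasium``, ``torch``) tell Sphinx to substitute mock modules for heavy dependencies during the docs build, so the documented package imports cleanly on a machine without them. A sketch of the underlying idea, using a deliberately fake module name:

```python
# What `autodoc_mock_imports` amounts to: before the documented package is
# imported, missing heavy dependencies are replaced with mock modules so the
# import succeeds anyway. `some_heavy_dep` is a deliberately fake name.
import sys
from unittest import mock

for missing in ("some_heavy_dep",):
    sys.modules.setdefault(missing, mock.MagicMock())

import some_heavy_dep  # resolves to the mock, not a real package

ok = isinstance(some_heavy_dep, mock.MagicMock)
```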

54 changes: 54 additions & 0 deletions docs/ecosystem/agent_zoo.rst
@@ -0,0 +1,54 @@
.. _agent_zoo:

Agent Zoo
=========

Contributed agents are hosted at `Agent Zoo <https://github.com/huawei-noah/SMARTS/tree/master/zoo/policies>`_ .

These agents should be installed prior to being used in SMARTS environments.

.. code-block:: bash

$ cd <path>/SMARTS
# Install a zoo agent.
$ scl zoo install <agent>
# e.g. scl zoo install zoo/policies/interaction_aware_motion_prediction

.. note::

To build a wheel, execute:

.. code-block:: bash

$ scl zoo build <agent>
# e.g. scl zoo build zoo/policies/interaction_aware_motion_prediction

Agents
------

#. Interaction-aware motion prediction agent

* This agent was contributed as part of the NeurIPS 2022 Driving SMARTS competition.
* Agent `code <https://github.com/huawei-noah/SMARTS/tree/master/zoo/policies/interaction_aware_motion_prediction>`_.
* Run as follows:

.. code-block:: bash

$ cd <path>/SMARTS
# Install a zoo agent.
$ scl zoo install zoo/policies/interaction_aware_motion_prediction
$ scl benchmark run driving_smarts==0.0 "./baselines/driving_smarts/v0/interaction_aware_motion_prediction.yaml" --auto-install

Contribute agents
-----------------

.. note::
This section is only for contributing to the agent zoo.

First, add the new agent to `Agent Zoo <https://github.com/huawei-noah/SMARTS/tree/master/zoo/policies>`_. It should contain

* a package setup file,
* inference code together with the trained model, and
* an explanation of the algorithm.

Then, register the newly added zoo agent in ``SMARTS/zoo/policies/__init__.py``.
1 change: 1 addition & 0 deletions docs/index.rst
@@ -78,6 +78,7 @@ If you use SMARTS in your research, please cite the `paper <https://arxiv.org/ab
ecosystem/sumo.rst
ecosystem/waymo.rst
ecosystem/rllib.rst
ecosystem/agent_zoo.rst

.. toctree::
:hidden:
2 changes: 1 addition & 1 deletion docs/resources/contributing.rst
@@ -54,7 +54,7 @@ Pre-Push Checklist
------------------

1. Do your best to see that your code compiles locally.
-2. Do not push to ``master``. Instead make a branch and a pull request to the ``develop`` branch.
+2. Do not push directly to ``master``. Instead, create a branch and open a pull request against the ``master`` branch.

Submission of a Pull Request
----------------------------
7 changes: 4 additions & 3 deletions docs/setup.rst
@@ -6,7 +6,7 @@ Setup
Prerequisites
-------------

-+ python3 (3.7 and 3.8)
++ python3 (3.7 or 3.8)
+ ubuntu (>=16.04)

Installation
@@ -20,15 +20,16 @@ Run the following commands to set up the SMARTS simulator.
$ cd <path/to/SMARTS>

# For latest stable release
-   $ git checkout master
+   $ git checkout tags/<tag_name>
+   # e.g., git checkout tags/v1.0.3

.. note::

Alternatively, to use the current development (i.e., bleeding edge) version:

.. code-block:: bash

-   $ git checkout develop
+   $ git checkout master

.. code-block:: bash

18 changes: 10 additions & 8 deletions smarts/benchmark/entrypoints/benchmark_runner_v0.py
@@ -21,14 +21,13 @@
# THE SOFTWARE.
import logging
import os
-from typing import Callable, Dict, Generator, List, Tuple
+from typing import List, Tuple

import gymnasium as gym
import psutil
import ray

from smarts.benchmark.driving_smarts import load_config
from smarts.benchmark.driving_smarts.v0 import DEFAULT_CONFIG
from smarts.core.utils.logging import suppress_output
from smarts.env.gymnasium.wrappers.metrics import Metrics, Score
from smarts.zoo import registry as agent_registry
@@ -53,18 +52,21 @@ def _eval_worker_local(name, env_config, episodes, agent_config, error_tolerant=
**agent_config["interface"],
)
env = Metrics(env)
-    agent = agent_registry.make_agent(
-        locator=agent_config["locator"],
-        **agent_config["kwargs"],
-    )
+    agents = {
+        agent_id: agent_registry.make_agent(
+            locator=agent_config["locator"], **agent_config.get("kwargs", {})
+        )
+        for agent_id in env.agent_ids
+    }

observation, info = env.reset()
current_resets = 0
try:
while current_resets < episodes:
try:
action = {
-                    agent_id: agent.act(obs) for agent_id, obs in observation.items()
+                    agent_id: agents[agent_id].act(obs)
+                    for agent_id, obs in observation.items()
}
# assert env.action_space.contains(action)
except Exception:
@@ -185,7 +187,7 @@ def format_scores_total(named_scores: List[Tuple[str, Score]], scenario_count):
print(format_one_line_scores(named_scores))
print()
print("`Driving SMARTS V0` averaged result:")
-    print(format_scores_total(named_scores, len(env_args) or 1))
+    print(format_scores_total(named_scores, len(env_args)))


def benchmark_from_configs(benchmark_config, agent_config, debug_log=False):
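The runner diff above replaces one shared policy with one policy instance per agent id. The sketch below shows why that matters for stateful policies; all names are illustrative, not SMARTS API:

```python
# Why one policy per agent id: a stateful policy would leak state between
# egos if a single instance served them all. Illustrative stand-ins only.
from typing import Any, Dict

class CountingAgent:
    """Toy stateful policy that remembers how many steps it has taken."""
    def __init__(self) -> None:
        self.steps = 0

    def act(self, obs: Any) -> Any:
        self.steps += 1
        return obs

agent_ids = ["Agent_0", "Agent_1"]
agents = {agent_id: CountingAgent() for agent_id in agent_ids}

observation = {"Agent_0": "obs_a", "Agent_1": "obs_b"}
action: Dict[str, Any] = {
    agent_id: agents[agent_id].act(obs) for agent_id, obs in observation.items()
}
```

With a single shared instance, `steps` would count both egos' actions together; per-id instances keep each counter at one after one step.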
33 changes: 25 additions & 8 deletions smarts/env/gymnasium/driving_smarts_competition_env.py
@@ -24,7 +24,7 @@
import os
import pathlib
from functools import partial
-from typing import Any, Dict, Optional, Tuple
+from typing import Any, Dict, Optional, Tuple, Union

import gymnasium as gym
import numpy as np
@@ -42,6 +42,7 @@
)
from smarts.core.controllers import ActionSpaceType
from smarts.env.gymnasium.hiway_env_v1 import HiWayEnvV1, SumoOptions
from smarts.env.utils.observation_conversion import ObservationOptions
from smarts.sstudio.scenario_construction import build_scenario

logger = logging.getLogger(__file__)
@@ -64,6 +65,7 @@ def driving_smarts_competition_v0_env(
visdom: bool = False,
sumo_headless: bool = True,
envision_record_data_replay_path: Optional[str] = None,
observation_options: Union[ObservationOptions, str] = ObservationOptions.default,
):
"""An environment with a mission to be completed by a single or multiple ego agents.

@@ -188,19 +190,22 @@
),
)

observation_options = ObservationOptions[observation_options]
env = HiWayEnvV1(
scenarios=[env_specs["scenario"]],
agent_interfaces=agent_interfaces,
sim_name="Driving_SMARTS_v0",
headless=headless,
visdom=visdom,
fixed_timestep_sec=0.1,
seed=seed,
sumo_options=SumoOptions(headless=sumo_headless),
visualization_client_builder=visualization_client_builder,
observation_options=observation_options,
)
env.action_space = env_action_space
if ActionSpaceType[action_space] == ActionSpaceType.TargetPose:
-        env = _LimitTargetPose(env)
+        env = _LimitTargetPose(env, observation_options=observation_options)
return env


@@ -395,13 +400,18 @@ def resolve_agent_interface(
class _LimitTargetPose(gym.Wrapper):
"""Uses previous observation to limit the next TargetPose action range."""

-    def __init__(self, env: gym.Env):
+    def __init__(
+        self,
+        env: gym.Env,
+        observation_options: ObservationOptions = ObservationOptions.default,
+    ):
"""
Args:
env (gym.Env): Environment to be wrapped.
"""
super().__init__(env)
self._prev_obs: Dict[str, Dict[str, Any]]
self._observation_options = observation_options

def step(
self, action: Dict[str, np.ndarray]
@@ -451,12 +461,19 @@ def reset(self, **kwargs):

def _store(self, obs: Dict[str, Any]) -> Dict[str, Dict[str, Any]]:
filtered_obs: Dict[str, Dict[str, Any]] = {}

if self._observation_options == ObservationOptions.unformatted:
filter_func = lambda agent_obs: copy.deepcopy(
agent_obs.ego_vehicle_state.position[:2]
)
else:
filter_func = lambda agent_obs: copy.deepcopy(
agent_obs["ego_vehicle_state"]["position"][:2]
)

for agent_name, agent_obs in obs.items():
-            filtered_obs[agent_name] = {
-                "position": copy.deepcopy(
-                    agent_obs["ego_vehicle_state"]["position"][:2]
-                )
-            }
+            filtered_obs[agent_name] = {"position": filter_func(agent_obs)}

return filtered_obs

def _limit(
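The `_store` change above selects an accessor based on the observation format: `unformatted` observations expose attribute access, while formatted observations are nested dicts. A minimal stand-in illustrating that dispatch; `EgoState` and `UnformattedObs` are simplified stand-ins, not SMARTS classes:

```python
# Dispatch on observation format, mirroring the `_store` change above:
# "unformatted" observations use attribute access, formatted observations
# are nested dicts. EgoState/UnformattedObs are simplified stand-ins.
import copy
from dataclasses import dataclass
from typing import Any, List

@dataclass
class EgoState:
    position: List[float]

@dataclass
class UnformattedObs:
    ego_vehicle_state: EgoState

def ego_position(agent_obs: Any, unformatted: bool) -> List[float]:
    """Return a copy of the ego (x, y) position for either format."""
    if unformatted:
        return copy.deepcopy(agent_obs.ego_vehicle_state.position[:2])
    return copy.deepcopy(agent_obs["ego_vehicle_state"]["position"][:2])
```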
21 changes: 0 additions & 21 deletions smarts/env/gymnasium/utils/__init__.py

This file was deleted.
