Fix broken examples documentation links. #2118

Merged: 5 commits, Dec 4, 2023
2 changes: 1 addition & 1 deletion docs/ecosystem/argoverse.rst
@@ -27,6 +27,6 @@ You can then run any of the examples with this scenario:

.. code-block:: sh

- $ scl run --envision examples/egoless.py scenarios/argoverse
+ $ scl run --envision examples/e1_egoless.py scenarios/argoverse

.. image:: /_static/argoverse-replay.gif
2 changes: 1 addition & 1 deletion docs/ecosystem/rllib.rst
@@ -35,7 +35,7 @@ many docs about ``Ray`` and ``RLlib``. We recommend to read the following pages
Resume training
---------------

- With respect to ``SMARTS/examples/rl/rllib`` examples, if you want to continue an aborted experiment, you can set ``resume_training=True``. But note that ``resume_training=True`` will continue to use the same configuration as was set in the original experiment.
+ With respect to ``SMARTS/examples/e12_rllib`` examples, if you want to continue an aborted experiment, you can set ``resume_training=True``. But note that ``resume_training=True`` will continue to use the same configuration as was set in the original experiment.
To make changes to a started experiment, you can edit the latest experiment file in ``./results``.

Or if you want to start a new experiment but train from an existing checkpoint, you will need to look into `How to Save and Load Trial Checkpoints <https://docs.ray.io/en/latest/tune/tutorials/tune-trial-checkpoints>`_.
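
As a small aid to the note above about editing the latest experiment file, a sketch for locating it (this assumes experiment directories sit directly under ``./results``):

```sh
# List experiment directories under ./results, newest first, and keep the most recent one.
$ ls -dt ./results/*/ | head -n 1
```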
2 changes: 1 addition & 1 deletion docs/ecosystem/waymo.rst
@@ -67,7 +67,7 @@ You can then run any of the examples with this scenario:

.. code-block:: sh

- $ scl run --envision examples/egoless.py scenarios/waymo
+ $ scl run --envision examples/e1_egoless.py scenarios/waymo

.. image:: /_static/waymo-replay.gif

55 changes: 55 additions & 0 deletions docs/examples/base_examples.rst
@@ -0,0 +1,55 @@
.. _base_examples:

Base Examples
==============

Several examples are made available to help you get started with SMARTS.


Comment on lines +1 to +8

Member: Consider adding instructions to run the examples.

Collaborator (Author): The examples have help integrated or run just on call.

#. Egoless

+ script: :examples:`e1_egoless.py`
+ No agent
+ Using a SMARTS environment without an agent.

#. Single-agent

+ script: :examples:`e2_single_agent.py`
+ Single-agent
+ Using a SMARTS environment with a single agent.

#. Multi-agent

+ script: :examples:`e3_multi_agent.py`
+ Multi-agent
+ Using a SMARTS environment with multiple agents.

#. Environment configuration

+ script: :examples:`e4_environment_config.py`
+ Demonstrates how to configure a SMARTS environment.

#. Agent zoo

+ script: :examples:`e5_agent_zoo.py`
+ Demonstrates how to use the agent zoo to retrieve an agent.

#. Agent configuration

+ script: :examples:`e6_agent_action_space.py`
+ Demonstrates how to configure the action space for an agent.

#. Experiment base

+ script: :examples:`e7_experiment_base.py`
+ An example of how to put together a larger experiment with configuration files.

#. Parallel Environments

+ script: :examples:`e8_parallel_environment.py`
+ A primitive use of SMARTS environments in parallel.

#. Jupyter Notebook

+ script: :examples:`e9_notebook.py`
+ A primitive use of SMARTS in a Jupyter Notebook.
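
As noted in the review thread above, these scripts are meant to be run directly; a minimal sketch of invoking them (the loop scenario path is borrowed from the container docs below, and the ``--help`` flag is an assumption):

```sh
# Run the single-agent example against a pre-built scenario (scenario path assumed).
$ python examples/e2_single_agent.py scenarios/sumo/loop
# Most example scripts describe their options on request (assumes a standard --help flag).
$ python examples/e1_egoless.py --help
```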
44 changes: 0 additions & 44 deletions docs/examples/control.rst

This file was deleted.

6 changes: 0 additions & 6 deletions docs/examples/drive.rst

This file was deleted.

20 changes: 0 additions & 20 deletions docs/examples/egoless.rst

This file was deleted.

6 changes: 0 additions & 6 deletions docs/examples/platoon.rst

This file was deleted.

26 changes: 20 additions & 6 deletions docs/examples/rl_model.rst
@@ -1,10 +1,24 @@
.. _rl_model:

- RL Model
- ========
+ RL Models
+ =========

- .. toctree::
-    :maxdepth: 1
+ These examples demonstrate some ways to use SMARTS to generate RL models.

- drive.rst
- platoon.rst

#. Driving SMARTS Drive

+ directory: :examples:`e10_drive`
+ This example was developed in conjunction with the :ref:`Driving SMARTS 2023.1 & 2023.2 <driving_smarts_2023_1>` benchmark, hence refer to it for details.


#. Driving SMARTS Platoon

+ directory: :examples:`e11_platoon`
+ This example was developed in conjunction with the :ref:`Driving SMARTS 2023.3 <driving_smarts_2023_3>` benchmark, hence refer to it for details.


#. RLlib

+ directory: :examples:`e12_rllib`
+ See the :ref:`RLlib integration <rllib>` for additional examples.
3 changes: 1 addition & 2 deletions docs/index.rst
@@ -64,8 +64,7 @@ If you use SMARTS in your research, please cite the `paper <https://arxiv.org/ab
:maxdepth: 3
:caption: Examples

- examples/egoless.rst
- examples/control.rst
+ examples/base_examples.rst
examples/rl_model.rst

.. toctree::
5 changes: 2 additions & 3 deletions docs/quickstart.rst
@@ -43,8 +43,7 @@ Explore

Explore more examples.

- (i) :ref:`Egoless <egoless>`
- (ii) :ref:`Control theory <control>`
- (iii) :ref:`RL model <rl_model>`
+ (i) :ref:`Base Examples <base_examples>`
+ (ii) :ref:`RL model <rl_model>`

A handful of pre-built scenarios are available at :scenarios:`scenarios <>` folder.
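
As a rough sketch of exploring one of those pre-built scenarios (commands mirrored from the visualization and container docs in this changeset; pairing the egoless example with the loop scenario is an assumption):

```sh
# Build a pre-built scenario, then replay it with the egoless example under Envision.
$ scl scenario build --clean scenarios/sumo/loop
$ scl run --envision examples/e1_egoless.py scenarios/sumo/loop
```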
8 changes: 4 additions & 4 deletions docs/resources/containers.rst
@@ -23,7 +23,7 @@ SMARTS docker images are hosted at `dockerhub <https://hub.docker.com/u/huaweino

# Run an example.
# Add --headless if visualisation is not needed.
- $ python examples/control/chase_via_points.py scenarios/sumo/loop
+ $ python examples/e2_single_agent.py scenarios/sumo/loop

# Visit http://localhost:8081 in the host machine to see the running simulation in Envision.

@@ -50,10 +50,10 @@ Instructions for running SMARTS within a `singularity <https://apptainer.org/>`_
# 1. Run container in interactive mode.
$ singularity shell --containall --bind ../SMARTS:/src ./utils/singularity/smarts.sif
# Inside the container
- Singularity> python3.8 /src/examples/control/chase_via_points.py /src/scenarios/sumo/loop/ --headless
+ Singularity> python3.8 /src/examples/e2_single_agent.py /src/scenarios/sumo/loop/ --headless

# 2. Run commands within the container from the host system.
- $ singularity exec --containall --bind ../SMARTS:/src ./utils/singularity/smarts.sif python3.8 /src/examples/control/chase_via_points.py /src/scenarios/sumo/loop/ --headless
+ $ singularity exec --containall --bind ../SMARTS:/src ./utils/singularity/smarts.sif python3.8 /src/examples/e2_single_agent.py /src/scenarios/sumo/loop/ --headless

# 3. Run container instance in the background.
- $ singularity instance start --containall --bind ../SMARTS:/src ./utils/singularity/smarts.sif smarts_train /src/examples/control/chase_via_points.py /src/scenarios/sumo/loop/ --headless
+ $ singularity instance start --containall --bind ../SMARTS:/src ./utils/singularity/smarts.sif smarts_train /src/examples/e2_single_agent.py /src/scenarios/sumo/loop/ --headless
2 changes: 1 addition & 1 deletion docs/resources/contributing.rst
@@ -145,4 +145,4 @@ Things inevitably become slow, when this happens, Flame Graph is a great tool to
$ mkdir -p flamegraph_dir
$ curl https://raw.githubusercontent.com/brendangregg/FlameGraph/master/flamegraph.pl > ./utils/third_party/tools/flamegraph.pl
$ chmod 777 {$flamegraph_dir}/flamegraph.pl
- $ make flamegraph scenario=./scenarios/sumo/loop script=./examples/control/chase_via_points.py
+ $ make flamegraph scenario=./scenarios/sumo/loop script=./examples/e2_single_agent.py
2 changes: 1 addition & 1 deletion docs/sim/visualization.rst
@@ -23,7 +23,7 @@ An example is shown below to run SMARTS with Envision.
# Build scenarios/sumo/loop
$ scl scenario build --clean scenarios/sumo/loop
# Run the chase_via_points.py example with the loop scenario
- $ scl run --envision examples/control/chase_via_points.py scenarios/sumo/loop
+ $ scl run --envision examples/e2_single_agent.py scenarios/sumo/loop

``--envision`` flag is added to ``scl run`` to enable the Envision server. Visit `http://localhost:8081/ <http://localhost:8081/>`_ in your browser to see the environment visualization. Select the simulator instance in the top left drop-down. If you are using SMARTS on a remote machine you will need to forward port 8081.
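
For the remote case mentioned above, a typical SSH port-forward sketch (user and host names are placeholders):

```sh
# Forward the remote Envision port 8081 to the local machine, then browse http://localhost:8081/.
$ ssh -L 8081:localhost:8081 user@remote-host
```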

2 changes: 1 addition & 1 deletion examples/replay/README.md
@@ -3,7 +3,7 @@ This module shows how you can write your own experiments that will allow you to

## Wrapping your Social Agent
You need to wrap your social agent using the `ReplayAgent` wrapper to save agent observations and actions at each step.
- Checkout `examples/replay/replay_klws_agent.py` on an example on how you can write your own experiment to replay your agent:
+ Look at `examples/replay/replay_klws_agent.py` as an example of how you can write your own experiment to replay your agent:
```python
from smarts.zoo.registry import make as zoo_make
from zoo import policies
2 changes: 1 addition & 1 deletion scenarios/INTERACTION/README.md
@@ -26,5 +26,5 @@ their level of exactness may not be enough for some model-training situations,
so you may want or need to refine them with SUMO's [netedit tool](https://sumo.dlr.de/docs/Netedit/index.html).

An example of how traffic history might be replayed in SMARTS can be found in the
- [examples/smarts/traffic_histories_to_observations.py](../../examples/smarts/traffic_histories_to_observations.py)
+ [smarts/dataset/traffic_histories_to_observations.py](../../smarts/dataset/traffic_histories_to_observations.py)
script.
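
If you want to try that script, a hedged starting point (this assumes it exposes a standard command-line help; consult that output for the actual flags):

```sh
# Inspect the script's options before replaying a traffic history (assumes an argparse-style --help).
$ python smarts/dataset/traffic_histories_to_observations.py --help
```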
2 changes: 1 addition & 1 deletion scenarios/NGSIM/README.md
@@ -28,5 +28,5 @@ their level of exactness may not be enough for some model-training situations,
so you may want or need to refine them with SUMO's [netedit tool](https://sumo.dlr.de/docs/Netedit/index.html).

An example of how traffic history might be replayed in SMARTS can be found in the
- [examples/smarts/traffic_histories_to_observations.py](../../examples/smarts/traffic_histories_to_observations.py)
+ [smarts/dataset/traffic_histories_to_observations.py](../../smarts/dataset/traffic_histories_to_observations.py)
script.