Commit 7639860

added literal code refs
mpgussert committed Nov 21, 2024
1 parent c62377b commit 7639860
Showing 3 changed files with 12 additions and 0 deletions.
4 changes: 4 additions & 0 deletions docs/source/overview/sensors/contact_sensor.rst
@@ -136,3 +136,7 @@ Notice that even with filtering, both sensors report the net contact force acting
[2.4322e-05, 0.0000e+00, 1.8102e+02]]], device='cuda:0')
In this case, the contact sensor has two bodies: the left and right hind feet. When the force matrix is queried, the result is ``None`` because this is a many-body sensor, and presently Isaac Lab only supports "many-to-one" contact force filtering. Unlike the single-body contact sensor, the reported force tensor has multiple entries, with each "row" corresponding to the contact force on a single body of the sensor (matching the ordering at construction).

.. literalinclude:: ../../../source/standalone/demos/sensors/contact_sensor_demo.py
   :language: python
   :linenos:
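
The included demo script shows the full workflow. Purely as a rough, self-contained illustration of how the reported data can be consumed, the sketch below unpacks a net-forces tensor shaped ``(num_envs, num_bodies, 3)`` like the printout above; the body names, the row index, and the tensor values used here are assumptions for the example, not excerpts from the demo.

.. code-block:: python

   # Minimal sketch (not the demo script): unpacking net contact forces from a
   # multi-body contact sensor. The (num_envs, num_bodies, 3) shape and the body
   # names below are assumptions chosen to match the printout above.
   import torch

   body_names = ["LH_FOOT", "RH_FOOT"]  # assumed construction order: left, right hind foot
   net_forces_w = torch.zeros((1, len(body_names), 3))
   # One row from the printout above (the row index is chosen arbitrarily).
   net_forces_w[0, 1] = torch.tensor([2.4322e-05, 0.0, 1.8102e02])

   # Each "row" along dim 1 is the net world-frame contact force on one sensor body.
   for i, name in enumerate(body_names):
       force = net_forces_w[:, i, :]                      # shape: (num_envs, 3)
       magnitude = torch.linalg.norm(force, dim=-1)
       print(f"{name}: F = {force[0].tolist()}, |F| = {magnitude[0].item():.3f} N")

   # The filtered force matrix would be None here: with more than one sensor body,
   # only "many-to-one" contact force filtering is presently supported.
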
4 changes: 4 additions & 0 deletions docs/source/overview/sensors/frame_transformer.rst
@@ -149,3 +149,7 @@ By activating the visualizer, we can see that the frames of the feet are rotated
[ 0.0000e+00, 0.0000e+00, 0.0000e+00]]], device='cuda:0')
Here, the sensor is tracking all rigid body children of ``Robot/base``, but this expression is **inclusive**, meaning that the source body itself is also a target. This can be seen both in the source and target lists, where ``base`` appears twice, and in the returned data, where the sensor reports the relative transform to itself, (0, 0, 0).

.. literalinclude:: ../../../source/standalone/demos/sensors/frame_transformer_sensor_demo.py
   :language: python
   :linenos:
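
The included script shows the sensor in a full simulation. As an illustration of the math the passage describes, the sketch below computes a target frame's pose expressed in the source frame from two world poses; the poses and frame names are made up for the example, and scipy is used only for quaternion handling.

.. code-block:: python

   # Minimal sketch (independent of the demo script): the pose of a target frame
   # expressed in the source frame, which is what the frame transformer reports.
   # All poses below are arbitrary example values.
   import numpy as np
   from scipy.spatial.transform import Rotation as R

   def relative_transform(src_pos, src_quat, tgt_pos, tgt_quat):
       """Return (position, quaternion) of the target frame in the source frame.

       Quaternions use scipy's (x, y, z, w) convention.
       """
       src_rot = R.from_quat(src_quat)
       rel_pos = src_rot.inv().apply(np.asarray(tgt_pos) - np.asarray(src_pos))
       rel_quat = (src_rot.inv() * R.from_quat(tgt_quat)).as_quat()
       return rel_pos, rel_quat

   # World pose of the source frame (e.g. Robot/base) and of one target frame.
   base_pos, base_quat = [0.2, 0.0, 0.6], R.from_euler("z", 30, degrees=True).as_quat()
   foot_pos, foot_quat = [0.5, 0.1, 0.1], R.from_euler("y", 90, degrees=True).as_quat()

   print(relative_transform(base_pos, base_quat, foot_pos, foot_quat))
   # Because the target expression is inclusive, the source is also a target,
   # and its transform relative to itself is the identity: position (0, 0, 0).
   print(relative_transform(base_pos, base_quat, base_pos, base_quat)[0])  # [0. 0. 0.]
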
4 changes: 4 additions & 0 deletions docs/source/overview/sensors/ray_caster.rst
@@ -99,3 +99,7 @@ Querying the sensor for data can be done at simulation run time like any other sensor
Here we can see the data returned by the sensor itself. Notice first that there are three nested brackets at the beginning and the end: this is because the data returned is batched by the number of sensors. The ray cast pattern itself has also been flattened, so the dimensions of the array are ``[N, B, 3]``, where ``N`` is the number of sensors, ``B`` is the number of cast rays in the pattern, and 3 is the dimension of the casting space. Finally, notice that the first several values in this casting pattern are identical: this is because the lidar pattern is spherical and we have specified our FOV to be hemispherical, which includes the poles. In this configuration, the "flattening pattern" becomes apparent: the first 180 entries are the same because they all lie at the bottom pole of the hemisphere, and there are 180 of them because our horizontal FOV is 180 degrees with a resolution of 1 degree.

You can use this script to experiment with pattern configurations and build an intuition about how the data is stored: try altering the ``triggered`` variable on line 99.

.. literalinclude:: ../../../source/standalone/demos/sensors/raycaster_sensor_demo.py
   :language: python
   :linenos:
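
To see where that run of identical entries comes from without running the simulation, the sketch below builds a simple azimuth-by-elevation grid of unit ray directions and flattens it. The grid construction only approximates the sensor's actual pattern generation, so treat the counts as illustrative; they follow the FOV and resolution quoted in the text.

.. code-block:: python

   # Minimal sketch: why a flattened hemispherical scan pattern begins with a run
   # of identical entries. This grid only approximates the sensor's real pattern.
   import numpy as np

   horizontal_fov_deg, horizontal_res_deg = 180.0, 1.0
   vertical_fov_deg, vertical_res_deg = 90.0, 1.0  # hemisphere, pole included

   azimuths = np.deg2rad(np.arange(0.0, horizontal_fov_deg, horizontal_res_deg))
   elevations = np.deg2rad(np.arange(-90.0, -90.0 + vertical_fov_deg + vertical_res_deg,
                                     vertical_res_deg))

   # Flatten elevation-major: every azimuth at the lowest elevation comes first.
   az, el = np.meshgrid(azimuths, elevations)          # shapes: (n_elev, n_azim)
   directions = np.stack(
       [np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)], axis=-1
   ).reshape(-1, 3)                                    # shape: (B, 3)

   # At -90 degrees elevation cos(el) is ~0, so every azimuth maps to (0, 0, -1):
   # the first len(azimuths) rows of the flattened pattern are numerically identical.
   first_block = directions[: len(azimuths)]
   print(len(azimuths), np.allclose(first_block, first_block[0]))  # 180 True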
