diff --git a/docs/source/overview/sensors/contact_sensor.rst b/docs/source/overview/sensors/contact_sensor.rst
index e85bff5b46b..525b0dedb21 100644
--- a/docs/source/overview/sensors/contact_sensor.rst
+++ b/docs/source/overview/sensors/contact_sensor.rst
@@ -136,3 +136,7 @@ Notice that even with filtering, both sensors report the net contact force actin
 [2.4322e-05, 0.0000e+00, 1.8102e+02]]], device='cuda:0')
 
 In this case, the contact sensor has two bodies: the left and right hind feet. When the force matrix is queried, the result is ``None`` because this is a many body sensor, and presently Isaac Lab only supports "many to one" contact force filtering. Unlike the single body contact sensor, the reported force tensor has multiple entries, with each "row" corresponding to the contact force on a single body of the sensor (matching the ordering at construction).
+
+.. literalinclude:: ../../../source/standalone/demos/sensors/contact_sensor_demo.py
+   :language: python
+   :linenos:
\ No newline at end of file
diff --git a/docs/source/overview/sensors/frame_transformer.rst b/docs/source/overview/sensors/frame_transformer.rst
index 6612d13aa0c..30c1233cdcf 100644
--- a/docs/source/overview/sensors/frame_transformer.rst
+++ b/docs/source/overview/sensors/frame_transformer.rst
@@ -149,3 +149,7 @@ By activating the visualizer, we can see that the frames of the feet are rotated
 [ 0.0000e+00, 0.0000e+00, 0.0000e+00]]], device='cuda:0')
 
 Here, the sensor is tracking all rigid body children of ``Robot/base``, but this expression is **inclusive**, meaning that the source body itself is also a target. This can be seen both by examining the source and target list, where ``base`` appears twice, and also in the returned data, where the sensor returns the relative transform to itself, (0, 0, 0).
+
+.. literalinclude:: ../../../source/standalone/demos/sensors/frame_transformer_sensor_demo.py
+   :language: python
+   :linenos:
\ No newline at end of file
diff --git a/docs/source/overview/sensors/ray_caster.rst b/docs/source/overview/sensors/ray_caster.rst
index 608a1a344e4..a5d245b33d4 100644
--- a/docs/source/overview/sensors/ray_caster.rst
+++ b/docs/source/overview/sensors/ray_caster.rst
@@ -99,3 +99,7 @@ Querying the sensor for data can be done at simulation run time like any other s
 Here we can see the data returned by the sensor itself. Notice first that there are 3 closed brackets at the beginning and the end: this is because the data returned is batched by the number of sensors. The ray cast pattern itself has also been flattened, and so the dimensions of the array are ``[N, B, 3]`` where ``N`` is the number of sensors, ``B`` is the number of cast rays in the pattern, and 3 is the dimension of the casting space. Finally, notice that the first several values in this casting pattern are the same: this is because the lidar pattern is spherical and we have specified our FOV to be hemispherical, which includes the poles. In this configuration, the "flattening pattern" becomes apparent: the first 180 entries will be the same because it's the bottom pole of this hemisphere, and there will be 180 of them because our horizontal FOV is 180 degrees with a resolution of 1 degree. You can use this script to experiment with pattern configurations and build an intuition about how the data is stored by altering the ``triggered`` variable on line 99.
+
+.. literalinclude:: ../../../source/standalone/demos/sensors/raycaster_sensor_demo.py
+   :language: python
+   :linenos:
\ No newline at end of file
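
The per-body force layout and the ``None`` force matrix described in the contact sensor hunk can be mimicked in a standalone sketch. The ``contacts`` list, prim names, and force values below are invented for illustration; this is not Isaac Lab's internal representation.

```python
import numpy as np

# Toy bookkeeping: one sensor tracking two bodies (the left and right hind
# feet), with each contact force tagged by the sensor body that felt it and
# the prim it touched. Values are illustrative assumptions.
contacts = [
    (0, "ground", np.array([2.4322e-05, 0.0, 1.8102e+02])),
    (0, "rock",   np.array([1.0, 0.0, 5.0])),
    (1, "ground", np.array([0.0, 0.0, 1.8100e+02])),
]
num_bodies = 2

# Net contact force per sensor body: every contact contributes, which is why
# the reported tensor has one "row" per body, ordered as at construction.
net_forces = np.zeros((num_bodies, 3))
for body, _, force in contacts:
    net_forces[body] += force
print(net_forces)

# "Many to one" filtering resolves forces against a single filtered prim.
# For a many-body sensor, the queried force matrix is reported as None:
if num_bodies > 1:
    force_matrix = None
else:
    force_matrix = sum(f for b, prim, f in contacts if prim == "ground")
print(force_matrix)  # None
```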
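
The inclusive-target behavior noted in the frame transformer hunk follows directly from the math: the pose of the source frame expressed in its own frame is the identity, i.e. zero translation and zero rotation. A minimal numpy sketch, assuming 4x4 homogeneous world-frame transforms (this mirrors the computation, not the Isaac Lab API):

```python
import numpy as np

def relative_transform(T_src, T_tgt):
    """Pose of the target expressed in the source frame: inv(T_src) @ T_tgt."""
    return np.linalg.inv(T_src) @ T_tgt

# World pose of a hypothetical "base" frame: 90 deg rotation about z plus an
# offset. The specific pose is arbitrary; any invertible transform works.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
T_base = np.array([[c, -s, 0.0, 1.0],
                   [s,  c, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.5],
                   [0.0, 0.0, 0.0, 1.0]])

# When the target list inclusively contains the source body itself, the
# reported transform collapses to the identity: translation (0, 0, 0).
T_self = relative_transform(T_base, T_base)
print(np.allclose(T_self, np.eye(4)))  # True
```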
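
The ``[N, B, 3]`` flattening described in the ray caster hunk can be reproduced without Isaac Lab. The sketch below assumes a lower-hemisphere vertical FOV (-90 to 0 degrees) and a 180 degree horizontal FOV, both at 1 degree resolution; those bounds are an interpretation of the configuration in the text, not taken from the demo script.

```python
import numpy as np

# Elevation and azimuth grids for the assumed hemispherical pattern.
v_angles = np.deg2rad(np.arange(-90, 1, 1.0))   # 91 elevations, pole included
h_angles = np.deg2rad(np.arange(0, 180, 1.0))   # 180 azimuths

# One unit direction per (elevation, azimuth) pair, flattened row-major:
# all azimuths for the first elevation, then the next elevation, and so on.
vv, hh = np.meshgrid(v_angles, h_angles, indexing="ij")
dirs = np.stack(
    [np.cos(vv) * np.cos(hh), np.cos(vv) * np.sin(hh), np.sin(vv)], axis=-1
).reshape(-1, 3)

# Batch over N sensors to get the [N, B, 3] layout described in the text.
N = 2
batched = np.broadcast_to(dirs, (N,) + dirs.shape)
print(batched.shape)  # (2, 16380, 3)

# At the bottom pole (elevation -90 deg) cos(v) == 0, so the first 180
# directions are identical: the "flattening pattern" visible in the data.
print(np.allclose(batched[0, :180], batched[0, 0]))  # True
```

Changing the resolution or FOV bounds and re-running shows how ``B`` and the run of duplicated pole entries change together, which is the same experiment the text suggests doing with the demo script.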