Releases: haosulab/ManiSkill

v0.4.2

03 Apr 00:44

Fixes

  • Fix the order of keys of observation spaces. If you previously relied on the order of keys (e.g., stacking dict observations into a flat array), this fix might affect your code; see the sketch below.
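
For instance, a flattening helper like the following minimal sketch (hypothetical; the environment and obs_mode are illustrative choices) silently depends on key order:

import numpy as np
import gym
import mani_skill2.envs  # registers ManiSkill2 environments

def flatten_dict_obs(obs):
    # Concatenation order follows dict key order, which this fix changes
    if isinstance(obs, dict):
        return np.concatenate([flatten_dict_obs(v) for v in obs.values()])
    return np.asarray(obs, dtype=np.float32).ravel()

env = gym.make("PickCube-v0", obs_mode="state_dict")
flat = flatten_dict_obs(env.reset())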

Full Changelog: haosulab/ManiSkill2@v0.4.1...v0.4.2

v0.4.1

02 Mar 18:50

Highlights

  • Improve documentation (Docker, challenge submission)
  • Update tutorials (add missing dependencies and fix links)
  • Fix a missing file for Hang-v0 in the wheel

Full Changelog: haosulab/ManiSkill2@v0.4.0...v0.4.1

v0.4.0: New vectorized environments, improved renderer, hands-on tutorials, pip-installable, better documentation, and other enhancements

10 Feb 06:26

ManiSkill2 v0.4.0 Release Notes

ManiSkill2 v0.4.0 introduces many new features and makes it easier to get started with robot learning. Here are the highlights:

  • New vectorized environments supported by the RPC-based render system (sapien.RenderServer and sapien.RenderClient).
  • The renderer is significantly improved. sapien.VulkanRenderer and sapien.KuafuRenderer are merged into a unified renderer, sapien.SapienRenderer.
  • Hands-on tutorials are provided for new users. Most of them can run on Google Colab.
  • mani_skill2 is a pip-installable package now!
  • Documentation is improved: environment descriptions are expanded and thumbnails are added.
  • We experimentally support adding visual backgrounds and enabling realistic stereo depth cameras.
  • Customizing environments (e.g., configuring cameras) is easier now!

Given the many new features, we have refactored ManiSkill2, which leads to many changes between v0.3.0 and v0.4.0. Migration instructions are presented below.

New Features

Installation

Installation becomes easier: pip install mani-skill2.

Note that to fully uninstall mani_skill2, you might need to manually remove the generated cache files.

We include some examples in the package.

# Example with random actions. Can be used to test the installation
python -m mani_skill2.examples.demo_random_action
# Interactive play
python -m mani_skill2.examples.demo_manual_control -e PickCube-v0

Vectorized Environments

We provide an implementation of vectorized environments (for rigid-body environments) powered by the SAPIEN RPC-based render server-client system.

from mani_skill2.vector import VecEnv, make
env: VecEnv = make("PickCube-v0", num_envs=4)

Please see mani_skill2.examples.demo_vec_env for an example: python -m mani_skill2.examples.demo_vec_env -e PickCube-v0 -n 4.

We provide examples of using our VecEnv with Stable-Baselines3 at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/2_reinforcement_learning.ipynb and https://github.com/haosulab/ManiSkill2/tree/main/examples/tutorials/reinforcement-learning.
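
As a quick sketch of the interaction loop (the reset/step signature below mirrors the demo script and should be treated as an assumption, not a reference):

import numpy as np
from mani_skill2.vector import VecEnv, make

env: VecEnv = make("PickCube-v0", num_envs=4)
obs = env.reset()
for _ in range(100):
    # One action per sub-environment, stacked along the first axis
    # (num_envs and the per-env action_space attribute are assumed here)
    actions = np.stack([env.action_space.sample() for _ in range(env.num_envs)])
    obs, rewards, dones, infos = env.step(actions)
env.close()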

Improved Renderer

It is easier to enable ray tracing:

# Enable ray tracing by changing shaders
env = gym.make("PickCube-v0", shader_dir="rt")

v0.3.0 experimentally supported ray tracing via KuafuRenderer; v0.4.0 uses SapienRenderer instead to provide a more seamless experience. Ray tracing is not yet supported for soft-body environments.

Colab Tutorials

Hands-on Colab tutorials are available: Quickstart, Reinforcement Learning, and Imitation Learning.

Camera Configurations

It is easier to change camera configurations in v0.4.0:

# Change camera resolutions
env = gym.make(
    "PickCube-v0",
    # only change "base_camera" and keep other cameras for observations unchanged
    camera_cfgs=dict(base_camera=dict(width=320, height=240)), 
    # change for all cameras for visualization
    render_camera_cfgs=dict(width=640, height=480),
)

To include GT segmentation masks for all cameras in observations, pass add_segmentation=True in camera_cfgs when initializing an environment.

# Add segmentation masks to observations (equivalent to adding Segmentation texture for each camera)
env = gym.make("PickCube-v0", camera_cfgs=dict(add_segmentation=True))

v0.3.0 used gym.make(..., enable_gt_seg=True) to enable GT segmentation masks (visual_seg and actor_seg). v0.4.0 uses env = gym.make(..., camera_cfgs=dict(add_segmentation=True)). Observations now contain a single Segmentation texture instead, where Segmentation[..., 0:1] == visual_seg and Segmentation[..., 1:2] == actor_seg, as in the sketch below.
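
A minimal migration sketch, assuming obs_mode="rgbd" and a camera named base_camera (both illustrative):

import gym
import mani_skill2.envs  # registers ManiSkill2 environments

env = gym.make(
    "PickCube-v0",
    obs_mode="rgbd",
    camera_cfgs=dict(add_segmentation=True),
)
obs = env.reset()
seg = obs["image"]["base_camera"]["Segmentation"]
visual_seg = seg[..., 0:1]  # v0.3.0 visual_seg
actor_seg = seg[..., 1:2]   # v0.3.0 actor_seg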

More examples can be found at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/customize_environments.ipynb

Visual Background

We experimentally support adding visual backgrounds.

# Download the background asset first: python -m mani_skill2.utils.download_asset minimal_bedroom
env = gym.make("PickCube-v0", bg_name="minimal_bedroom")

Stereo Depth Camera

We experimentally support realistic stereo depth cameras.

env = gym.make(
    "PickCube-v0",
    obs_mode="rgbd",
    shader_dir="rt",
    camera_cfgs={"use_stereo_depth": True, "height": 512, "width": 512},
)

Breaking Changes

Assets

mani_skill2 is pip-installable. The basic assets (the robot description of the Panda arm, PartNet-Mobility metadata, and essential assets for soft-body environments) are located at mani_skill2/assets and are packed into the pip wheel. Task-specific assets need to be downloaded; these extra assets are downloaded to ./data by default.

  • Improve the script to download assets: python -m mani_skill2.utils.download_asset ${ASSET_UID/ENV_ID}. The positional argument can be a UID of the asset, an environment ID, or "all".

mani_skill2.utils.download (v0.3.0) is renamed to mani_skill2.utils.download_asset (v0.4.0).

# Download YCB object models
python -m mani_skill2.utils.download_asset ycb
# Download the required assets for PickSingleYCB-v0, which are just YCB object models
python -m mani_skill2.utils.download_asset PickSingleYCB-v0
  • When mani_skill2 is imported, it uses the environment variable MS2_ASSET_DIR to decide where assets are stored, defaulting to ./data if unspecified. The same variable also controls where assets are downloaded; see the sketch below.
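
For example, a sketch of redirecting assets to a custom directory (the path is illustrative; the variable must be set before mani_skill2 is imported):

import os

# Set before importing mani_skill2, since the variable is read at import time
os.environ["MS2_ASSET_DIR"] = "/path/to/ms2_assets"

import mani_skill2.envs  # noqa: E402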

Demonstrations

We add a script to download demonstrations: python -m mani_skill2.utils.download_demo ${ENV_ID} -o ${DEMO_DIR}.
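
For instance (the environment ID and output directory are illustrative):

# Download demonstrations for PickCube-v0 into ./demos
python -m mani_skill2.utils.download_demo PickCube-v0 -o demos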

There are some minor changes to the file structure, but no updates to the data itself.

Observations

The observation modes that include robot segmentation masks are renamed from pointcloud_robot_seg and rgbd_robot_seg to pointcloud+robot_seg and rgbd+robot_seg.

v0.3.0 used xxx_robot_seg while v0.4.0 uses xxx+robot_seg. However, the implementation only checks for the keyword robot_seg, so previous code will not be broken by this change.
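
For example (the environment choice is illustrative):

import gym
import mani_skill2.envs  # registers ManiSkill2 environments

# New-style name; the old "pointcloud_robot_seg" keeps working because
# only the "robot_seg" keyword is checked
env = gym.make("PickCube-v0", obs_mode="pointcloud+robot_seg")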

For RGB-D observations, we move all camera parameters from the key image to a new key camera_param. Please see https://haosulab.github.io/ManiSkill2/concepts/observation.html#image for more details.

In v0.3.0, camera parameters were stored under obs["image"]. In v0.4.0, there is a separate key, obs["camera_param"], for camera parameters, which makes it easier for users to discard them when they are not needed.
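
A minimal sketch of the new layout (parameter key names such as intrinsic_cv follow the linked observation docs; treat them as assumptions here):

import gym
import mani_skill2.envs  # registers ManiSkill2 environments

env = gym.make("PickCube-v0", obs_mode="rgbd")
obs = env.reset()
images = obs["image"]             # per-camera textures (rgb, depth, ...)
cam_params = obs["camera_param"]  # camera parameters, now a separate key
K = cam_params["base_camera"]["intrinsic_cv"]  # camera name assumed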

Fixes

  • Fix undefined behavior due to solver_velocity_iterations=0
  • Fix paths to download assets of "PickClutterYCB-v0", "OpenCabinetDrawer-v1", "OpenCabinetDoor-v1"

Full Changelog: haosulab/ManiSkill2@v0.3.0...v0.4.0

v0.3.0: All environments released and many improvements

29 Nov 05:10

Added

  • Add soft-body envs: Pinch-v0 and Write-v0
  • Add PickClutterYCB-v0
  • Migrate all ManiSkill1 environments

Breaking Changes

  • download and replay_trajectory are moved from tools to mani_skill2.utils and mani_skill2.trajectory, respectively, so that these utilities can be invoked from any directory (see the example after this list).
  • Change the pose of the base camera for pick-and-place environments, making it easier for RGB-D-based approaches to observe goal positions.
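
With the new layout, both utilities can be invoked as modules from any directory; a sketch (module paths inferred from the relocation above; use --help to see each tool's arguments):

# Previously under tools/, now importable package modules
python -m mani_skill2.utils.download --help
python -m mani_skill2.trajectory.replay_trajectory --help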

Other Changes

  • We call self.seed(2022) in sapien_env::BaseEnv.__init__ to improve reproducibility.
  • Refactor evaluation
  • Improve the error message when assets are missing

Full Changelog: haosulab/ManiSkill2@v0.2.1...v0.3.0

v0.2.1

22 Sep 22:59

Other Changes

  • Fix StackCube-v0 success metric
  • Refactor PickSingle and AssemblingKits

Full Changelog: haosulab/ManiSkill2@v0.2.0...v0.2.1

v0.2.0

15 Aug 22:24

Added

  • Support new observation modes: rgbd_robot_seg and pointcloud_robot_seg
  • Support the enable_gt_seg option for environments
  • Add two new rigid-body environments: AssemblingKits-v0 and PandaAvoidObstacles-v0

Breaking Changes

  • TurnFaucet-v0: Add target_link_pos to observations
  • PickSingleEGAD-v0: Reduce the density of EGAD objects and update EGAD object information
  • Remove tcp_goal_pos in PickCube, LiftCube, PickSingle
  • Update TurnFaucet assets; they need to be re-downloaded
  • Change segmentation images from 2-dim to 3-dim
  • Replace xyz with xyzw in obs["pointcloud"]. We use the homogeneous representation to handle infinite points (beyond the camera's far plane); see the sketch after this list.
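
A minimal sketch of filtering out infinite points under the homogeneous representation (assuming w > 0 marks valid points, the usual convention, and the v0.2.0-era import path):

import numpy as np
import gym
import mani_skill2.envs  # registers ManiSkill2 environments

env = gym.make("PickCube-v0", obs_mode="pointcloud")
obs = env.reset()
xyzw = obs["pointcloud"]["xyzw"]  # (N, 4) homogeneous coordinates
valid = xyzw[..., 3] > 0          # w == 0 marks points beyond the far plane
points = xyzw[valid][..., :3]     # recover plain xyz for valid points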

Fixed

  • TurnFaucet-v0: Cache the initial joint positions so that they will not be affected by previous episodes
  • Pour-v0: Fix agent initialization typo
  • Excavate-v0: Fix hand camera position and max number of particles

Full Changelog: https://github.com/haosulab/ManiSkill2/commits/v0.2.0