
Releases: haosulab/ManiSkill

v3.0.0.dev13

13 Apr 05:15
Pre-release

What's Changed

Full Changelog: haosulab/ManiSkill2@v3.0.0.dev12...v3.0.0.dev13

v3.0.0.dev11

05 Apr 06:19
Pre-release

What's Changed

Full Changelog: haosulab/ManiSkill2@v3.0.0.dev10...v3.0.0.dev11

v3.0.0.dev10

02 Apr 19:22
Pre-release

What's Changed

  • docs on how to contribute tasks by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/250
  • Rotate Cube Task using Trifingerpro robot by @Kami-code in https://github.com/haosulab/ManiSkill2/pull/249
  • Fix pd ee pose for non-Panda robots, name refactors to be more explicit, and links now have references to their joints by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/253
  • Pick clutter task (GPU). Also changes how Actor.merge and Articulation.merge interact with the simulation's state dict: merging is now treated as a way of "viewing" existing data (like a reference/pointer), allowing more flexible control and usage. PickClutter relies on this, since each sub-scene can have a different number of objects and all of them must be reset to their initial poses. by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/254
  • Fix doc typos and refactor some old links/names by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/255
  • Improved docs around installation instructions and errors with fast kinematics
  • PPO example code supports an evaluation mode
  • Renamed force_use_gpu_sim to sim_backend, which specifies the simulation backend to use (cpu, gpu, or auto). In auto mode, the GPU is used if num_envs > 1 when creating the environment, and the CPU otherwise
  • Remove circular import errors with registration tools so users can easily inherit existing tasks in maniskill to build on them
  • Updated tutorial task code for PushCube to be more explicit in class arguments for camera configurations
  • Remove dependencies on the old pytorch_kinematics library, it is no longer needed
  • Example RL code working on Google Colab (not yet a tutorial, to be completed)
  • Fix bug with replay trajectory tool when trying to handle trajectories generated during GPU simulation
  • Fix bug in the replay trajectory tool where the first env state was not set when replaying with env states
  • Fix bug in the RecordEpisode wrapper where a null value was saved for seed when no seed was given; the seed key is now removed entirely
  • Several old code files have been removed
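The merge-as-view behavior described above can be illustrated with a toy sketch. The classes below are hypothetical stand-ins, not the real ManiSkill API: the point is that a merged object keeps references to the underlying per-scene data rather than copying it, so writes through the view reach every sub-scene.

```python
class ToyActor:
    """Hypothetical stand-in for a per-sub-scene actor."""

    def __init__(self, pose):
        self.pose = pose


class ToyMergedView:
    """Illustrates merging as "viewing" existing data: the view stores
    references to the underlying actors (like a reference/pointer),
    so writes through the view update every sub-scene's actor."""

    def __init__(self, actors):
        self.actors = actors

    def set_poses(self, poses):
        # One pose per underlying actor, e.g. resetting all objects
        # across sub-scenes to their initial poses
        for actor, pose in zip(self.actors, poses):
            actor.pose = pose

    def get_poses(self):
        return [actor.pose for actor in self.actors]
```

Because the view holds references rather than copies, `set_poses` mutates the original actors directly, which is what allows each sub-scene to be reset even when the sub-scenes hold different numbers of objects.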
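The sim_backend auto rule described above can be sketched as follows (resolve_sim_backend is a hypothetical helper written to mirror the documented behavior, not ManiSkill's actual implementation):

```python
def resolve_sim_backend(sim_backend: str, num_envs: int) -> str:
    """Mirror the documented rule: "auto" picks the GPU backend when
    num_envs > 1 at environment creation, and the CPU backend otherwise;
    explicit values pass through unchanged."""
    if sim_backend == "auto":
        return "gpu" if num_envs > 1 else "cpu"
    if sim_backend not in ("cpu", "gpu"):
        raise ValueError(f"unknown sim_backend: {sim_backend!r}")
    return sim_backend
```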

New Contributors

Full Changelog: haosulab/ManiSkill2@v3.0.0.dev9...v3.0.0.dev10

v3.0.0.dev9

28 Mar 22:50
Pre-release

What's Changed

Full Changelog: haosulab/ManiSkill2@v3.0.0.dev8...v3.0.0.dev9

v0.6.0.dev3

12 Jan 01:09
Pre-release
  • YCB related tasks load faster
  • EGAD related tasks are deprecated, as their collision meshes are currently too low quality to be useful
  • Various bug fixes
  • Example motion planning code has been provided for some rigid body tasks

v0.6.0.dev2

22 Dec 08:11
Pre-release

See #175 for all changes

Install with pip install mani-skill2==0.6.0.dev2

The main changes are integrating SAPIEN 3 (lots of new features and a nicer GUI), more visual variety in tabletop tasks, and experimental scene code for loading e.g. AI2THOR house scenes.

v0.5.3

22 Sep 20:06

What's Changed

Full Changelog: haosulab/ManiSkill2@v0.5.2...v0.5.3

v0.5.2

23 Aug 18:32

What's Changed

  • Fix soft body env demo download links

Full Changelog: haosulab/ManiSkill2@v0.5.1...v0.5.2

v0.5.1

23 Aug 18:23

What's Changed

Full Changelog: haosulab/ManiSkill2@v0.5.0...v0.5.1

v0.5.0

23 Aug 17:21
e1a678d

ManiSkill2 Release Notes

This update migrates ManiSkill2 over to using the new gymnasium package along with a number of other changes.

Breaking Changes

  • env.render now accepts no arguments. The old render functions are split out into separate functions; env.render calls one of them based on the env.render_mode attribute (usually set upon env creation).
  • env.step returns observation, reward, terminated, truncated, info. See https://gymnasium.farama.org/content/migration-guide/#environment-step for details. For ManiSkill2, the old done signal is now called terminated, and truncated is False until the episode limit is reached. All environments default to 200 max episode steps, so truncated=True after 200 steps.
  • env.reset returns a tuple observation, info. For ManiSkill2, info is always an empty dictionary. Moreover, env.reset accepts two new keyword arguments: seed: int, options: dict | None. Note that options is usually used to configure various random settings/numbers of an environment. Previously ManiSkill2 used to use custom keyword arguments such as reconfigure. These keyword arguments are still usable but must be passed through an options dict e.g. env.reset(options=dict(reconfigure=True)).
  • env.seed has now been removed in favor of using env.reset(seed=val) per the Gymnasium API.
  • ManiSkill VectorEnv is now also modified to adhere to the Gymnasium Vector Env API. Note this means that vec_env.observation_space and vec_env.action_space are batched under the new API, and the individual environment spaces are defined as vec_env.single_observation_space and vec_env.single_action_space
  • All reward functions have been changed to be scaled to the range [0, 1], generally making value-learning approaches more stable and avoiding gradient explosions. On any environment a reward of 1 indicates success, which is also indicated by the boolean stored in info["success"]. The scaled dense reward is the new default reward function and is called normalized_dense. To use the old <0.5.0 ManiSkill2 dense rewards, set reward_mode to dense.
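The reset/step changes above can be sketched with a minimal toy environment. ToyEnv below is a hypothetical illustration of the Gymnasium-style signatures (5-tuple step, (obs, info) reset, seeding via reset, custom kwargs like reconfigure passed through the options dict, and 200-step truncation), not the real ManiSkill2 environment class:

```python
import random


class ToyEnv:
    """Toy environment following the Gymnasium-style API described above.
    Not the real ManiSkill2 environment class."""

    max_episode_steps = 200

    def __init__(self):
        self._rng = random.Random()
        self._elapsed = 0

    def reset(self, *, seed=None, options=None):
        # Seeding now goes through reset(seed=...) instead of env.seed()
        if seed is not None:
            self._rng = random.Random(seed)
        options = options or {}
        if options.get("reconfigure"):
            pass  # custom kwargs like `reconfigure` travel in the options dict
        self._elapsed = 0
        obs = self._rng.random()
        return obs, {}  # for ManiSkill2, info is always an empty dict

    def step(self, action):
        self._elapsed += 1
        obs = self._rng.random()
        reward = self._rng.random()  # normalized_dense rewards lie in [0, 1]
        terminated = False  # the old `done` signal; True on task success
        truncated = self._elapsed >= self.max_episode_steps
        return obs, reward, terminated, truncated, {}
```

A typical loop under this API checks `terminated or truncated` to decide when to call `reset` again.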

New Additions

Code

  • Environment code comes with separate render functions representing the old render modes. There is now env.render_human for creating an interactive GUI and viewer, env.render_rgb_array for generating RGB images of the current env from a third-person perspective, and env.render_cameras, which renders all the cameras (including rgb, depth, and segmentation if available) and compacts them into one rgb image that is returned. Note that human and rgb_array are used only for visualization purposes; they may include artifacts like indicators of where the goal is, see PickCube-v0 or PandaAvoidObstacles-v0 for examples. The cameras mode reflects the actual visual observations returned by calls to env.reset and env.step.
  • The ManiSkill2 VecEnv creator function make_vec_env now accepts a max_episode_steps argument which overrides the default max_episode_steps specified when registering the environment. The default max_episode_steps is 200 for all environments, but note it may be more efficient for RL training and evaluation to use a smaller value as shown in the RL tutorials.
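The render_mode dispatch described above can be sketched like this. The stub class and its return values are illustrative only, not ManiSkill2's actual implementation:

```python
class ToyRenderEnv:
    """Stub showing how env.render() can dispatch on render_mode, mirroring
    the split into render_human / render_rgb_array / render_cameras."""

    def __init__(self, render_mode="rgb_array"):
        self.render_mode = render_mode  # chosen once, at env creation

    def render(self):
        # env.render takes no arguments; render_mode decides what happens
        if self.render_mode == "human":
            return self.render_human()
        if self.render_mode == "rgb_array":
            return self.render_rgb_array()
        if self.render_mode == "cameras":
            return self.render_cameras()
        raise ValueError(f"unsupported render_mode: {self.render_mode!r}")

    def render_human(self):
        return "opened interactive viewer"

    def render_rgb_array(self):
        return "rgb image from third-person camera"

    def render_cameras(self):
        return "rgb/depth/segmentation from all cameras, tiled into one image"
```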

Data

Tutorials

  • All tutorials have been updated to reflect the new gym API and the new Stable Baselines 3, and should be more stable on Google Colab

Not Code

  • New CONTRIBUTING.md document has been added, with details on how to locally develop on ManiSkill2 and test it

Bug Fixes

  • Closes #124 by using the newest version of SAPIEN, 2.2.2.
  • Closes #119 via #123 where scalar values returned by the state part of a dictionary would cause errors.
  • Fixes a compatibility bug with Gymnasium AsyncVectorEnv, which also could not handle scalar values as it expects shape (1,), not shape (). This is done by modifying environments to return numpy array versions of certain scalar observation values instead of floats. So far this only affected TurnFaucet-v0. Partially closes #125, where TurnFaucet-v0 had non-deterministic rewards due to computing rewards from unseeded sampled points on various meshes.

Miscellaneous Changes

  • Dockerfile now accepts a python version as an argument
  • README and documentation updated to reflect new gym API
  • The mani_skill2.examples.demo_vec_env module now accepts a --vecenv-type argument, which can be either ms2 or gym and defaults to ms2, letting users benchmark the speed difference themselves. The module was also cleaned up to print more nicely
  • Various example scripts with main functions now accept an args argument, allowing those scripts to be used from within Python and not just the CLI. Used for testing purposes.
  • Made some example scripts quieter
  • Replaying trajectories accepts a new --count argument that lets you specify how many trajectories to replay. There is no data shuffling, so the replayed trajectories are always the same and in the same order. By default this is None, meaning all trajectories are replayed.

What's Changed

Full Changelog: haosulab/ManiSkill2@v0.4.2...v0.5.0