
[Bug]: spaces.Discrete - start argument ignored #1295

Closed
emrul opened this issue Jan 24, 2023 · 5 comments
Labels: bug (Something isn't working), documentation (Improvements or additions to documentation), duplicate (This issue or pull request already exists)

Comments


emrul commented Jan 24, 2023

### 🐛 Bug

I'm a bit new to SB3 and Gym (used it many years ago) so I raise this report hesitantly as it's quite possible I've done something wrong.

However, it seems to me that the start parameter of spaces.Discrete is ignored.

I've included a minimal repro below:

  • The action space is Discrete(3)
  • start is -1
  • env would therefore be expected to return values from [-1, 0, 1], and if I sample the space directly this works
  • What appears to be happening is that env returns values from [0, 1, 2]

Note: I'm using the branch of SB3 that has support for gymnasium - specifically I upgraded SB3 using pip install git+https://github.com/carlosluis/stable-baselines3@fix_tests. It isn't clear to me whether this is part of my issue or not.
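For context, gymnasium's `Discrete(n, start)` samples from `{start, …, start + n - 1}`. A stdlib-only sketch of that sampling semantics (my own illustration, not gymnasium's actual implementation):

```python
import random

# Stdlib-only sketch of how Discrete(n, start).sample() behaves in
# gymnasium: values are drawn from {start, ..., start + n - 1}.
def sample_discrete(n: int, start: int = 0) -> int:
    return start + random.randrange(n)

# Discrete(3, start=-1) should yield only -1, 0, or 1.
samples = {sample_discrete(3, start=-1) for _ in range(500)}
```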

### To Reproduce

```python
import sys
import numpy as np
import gymnasium as gym
sys.modules["gym"] = gym
from gymnasium import spaces
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3 import PPO


class CustomGymEnv(gym.Env):
    def __init__(self, action_space, obs_space):
        """
        Custom gym environment for testing purposes
        """
        self.action_space = action_space
        self.observation_space = obs_space
        self.current_step = 0
        self.ep_length = 4

    def reset(self, seed=None, options=None):
        self.current_step = 0
        self._choose_next_state()
        return self.state, {}

    def step(self, action):
        assert self.action_space.contains(action), f"{action} {type(action)} invalid"
        reward = float(np.random.rand())
        self._choose_next_state()
        self.current_step += 1
        done = self.current_step >= self.ep_length
        return self.state, reward, done, done, {}

    def _choose_next_state(self):
        self.state = self.observation_space.sample()

    def render(self, mode="human"):
        if mode == "rgb_array":
            return np.zeros((4, 4, 3))

    def seed(self, seed=None):
        if seed is not None:
            np.random.seed(seed)
            self.observation_space.seed(seed)


env_maker = lambda: CustomGymEnv(spaces.Discrete(3, start=-1), spaces.Box(low=np.zeros(2), high=np.ones(2)))
env = DummyVecEnv([env_maker])

model = PPO("MlpPolicy", env, verbose=1)
# model.learn(total_timesteps=1000)

obs = env.reset()
done = False
while not done:
    act, _states = model.predict(obs)
    obs, reward, done, info = env.step(act)
    env.render()
```

### Relevant log output / Error message

```
File "....../test.py", line 26, in step
    assert self.action_space.contains(action), f"{action} {type(action)} invalid"
AssertionError: 2 <class 'numpy.int64'> invalid
```


### System Info

  • OS: macOS-13.0.1-x86_64-i386-64bit Darwin Kernel Version 22.1.0: Sun Oct 9 20:14:54 PDT 2022; root:xnu-8792.41.9~2/RELEASE_X86_64
  • Python: 3.10.8
  • Stable-Baselines3: 2.0.0a0
  • PyTorch: 1.13.1
  • GPU Enabled: False
  • Numpy: 1.24.1
  • Gym: 0.26.2

### Checklist

- [X] I have checked that there is no similar [issue](https://github.com/DLR-RM/stable-baselines3/issues) in the repo
- [X] I have read the [documentation](https://stable-baselines3.readthedocs.io/en/master/)
- [X] I have provided a minimal working example to reproduce the bug
- [X] I've used the [markdown code blocks](https://help.github.com/en/articles/creating-and-highlighting-code-blocks) for both code and stack traces.
emrul added the bug label Jan 24, 2023
qgallouedec (Collaborator) commented Jan 24, 2023

Happy comeback then!

A Discrete observation space with a non-zero start is not supported by Stable-Baselines3. You can use a wrapper or update your observation space.

It seems like you're using a custom env. I highly recommend checking your env before using it:

```python
from stable_baselines3.common.env_checker import check_env

env = CustomGymEnv(spaces.Discrete(3, start=-1), spaces.Box(low=np.zeros(2), high=np.ones(2)))
check_env(env)
```

I'm pretty sure it should raise the warning mentioned above ;)
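The wrapper route suggested above can be sketched as a pair of shift functions; this is a hypothetical stand-alone illustration (the function names are mine, not SB3's or gymnasium's API): expose a zero-based space to the agent and shift actions back into the env's range.

```python
# Hypothetical sketch of the wrapper workaround (not SB3 code):
# the agent sees a zero-based Discrete space, and actions are shifted
# back into the env's [start, start + n) range before stepping.
def to_env_action(agent_action: int, start: int) -> int:
    """Map a zero-based agent action in [0, n) to [start, start + n)."""
    return agent_action + start

def to_agent_action(env_action: int, start: int) -> int:
    """Inverse mapping, e.g. for replaying logged env actions."""
    return env_action - start
```

For `Discrete(3, start=-1)`, agent actions 0, 1, 2 map to env actions -1, 0, 1.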

araffin added the documentation and duplicate labels Jan 24, 2023
araffin (Member) commented Jan 24, 2023

Duplicate of #913
It seems that I added the warning for the observation space but forgot to do so for the action space.
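The missing check araffin describes might look something like this; a hypothetical sketch (the function name and message are mine, not SB3's actual env-checker code):

```python
import warnings

# Hypothetical sketch of an env-checker rule (not SB3's actual code):
# warn when a Discrete space starts anywhere other than 0.
def warn_on_nonzero_start(space) -> None:
    start = getattr(space, "start", 0)
    if start != 0:
        warnings.warn(
            f"Discrete space with a non-zero start ({start}) is not "
            "supported by Stable-Baselines3; shift it to start at 0 "
            "or use a wrapper."
        )
```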

emrul (Author) commented Jan 24, 2023

Thanks @araffin - I did look for a dup but couldn't find one. Anyway, agree that for me at least this is easy enough to work around.

emrul closed this as completed Jan 24, 2023
araffin reopened this Jan 24, 2023
emrul (Author) commented Jan 24, 2023

@qgallouedec - thanks for pointing me to the env checker! That found a couple of problems that I've now addressed!!

araffin added a commit to carlosluis/stable-baselines3 that referenced this issue Jan 25, 2023
araffin (Member) commented Jan 25, 2023

> It seems that I added the warning for the observation space but forgot to do so for the action space.

Done =)

araffin closed this as completed Jan 25, 2023
araffin added a commit that referenced this issue Apr 14, 2023
* Fix failing set_env test

* Fix test failing due to deprecation of env.seed

* Adjust mean reward threshold in failing test

* Fix her test failing due to rng

* Change seed and revert reward threshold to 90

* Pin gym version

* Make VecEnv compatible with gym seeding change

* Revert change to VecEnv reset signature

* Change subprocenv seed cmd to call reset instead

* Fix type check

* Add backward compat

* Add `compat_gym_seed` helper

* Add goal env checks in env_checker

* Add docs on HER requirements for envs

* Capture user warning in test with inverted box space

* Update ale-py version

* Fix randint

* Allow noop_max to be zero

* Update changelog

* Update docker image

* Update doc conda env and dockerfile

* Custom envs should not have any warnings

* Fix test for numpy >= 1.21

* Add check for vectorized compute reward

* Bump to gym 0.24

* Fix gym default step docstring

* Test downgrading gym

* Revert "Test downgrading gym"

This reverts commit 0072b77.

* Fix protobuf error

* Fix in dependencies

* Fix protobuf dep

* Use newest version of cartpole

* Update gym

* Fix warning

* Loosen required scipy version

* Scipy no longer needed

* Try gym 0.25

* Silence warnings from gym

* Filter warnings during tests

* Update doc

* Update requirements

* Add gym 26 compat in vec env

* Fixes in envs and tests for gym 0.26+

* Enforce gym 0.26 api

* format

* Fix formatting

* Fix dependencies

* Fix syntax

* Cleanup doc and warnings

* Faster tests

* Higher budget for HER perf test (revert prev change)

* Fixes and update doc

* Fix doc build

* Fix breaking change

* Fixes for rendering

* Rename variables in monitor

* update render method for gym 0.26 API

backwards compatible (mode argument is allowed) while using the gym 0.26 API (render mode is determined at environment creation)

* update tests and docs to new gym render API

* undo removal of render modes metadata check

* set rgb_array as default render mode for gym.make

* undo changes & raise warning if not 'rgb_array'

* Fix type check

* Remove recursion and fix type checking

* Remove hacks for protobuf and gym 0.24

* Fix type annotations

* reuse existing render_mode attribute

* return tiled images for 'human' render mode

* Allow to use opencv for human render, fix typos

* Add warning when using non-zero start with Discrete (fixes #1197)

* Fix type checking

* Bug fixes and handle more cases

* Throw proper warnings

* Update test

* Fix new metadata name

* Ignore numpy warnings

* Fixes in vec recorder

* Global ignore

* Filter local warning too

* Monkey patch not needed for gym 26

* Add doc of VecEnv vs Gym API

* Add render test

* Fix return type

* Update VecEnv vs Gym API doc

* Fix for custom render mode

* Fix return type

* Fix type checking

* check test env test_buffer

* skip render check

* check env test_dict_env

* test_env test_gae

* check envs in remaining tests

* Update tests

* Add warning for Discrete action space with non-zero start (#1295)

* Fix atari annotation

* ignore get_action_meanings [attr-defined]

* Fix mypy issues

* Add patch for gym/gymnasium transition

* Switch to gymnasium

* Rely on signature instead of version

* More patches

* Type ignore because of Farama-Foundation/Gymnasium#39

* Fix doc build

* Fix pytype errors

* Fix atari requirement

* Update env checker due to change in dtype for Discrete

* Fix type hint

* Convert spaces for saved models

* Ignore pytype

* Remove gitlab CI

* Disable pytype for convert space

* Fix undefined info

* Fix undefined info

* Upgrade shimmy

* Fix wrappers type annotation (need PR from Gymnasium)

* Fix gymnasium dependency

* Fix dependency declaration

* Cap pygame version for python 3.7

* Point to master branch (v0.28.0)

* Fix: use main not master branch

* Rename done to terminated

* Fix pygame dependency for python 3.7

* Rename gym to gymnasium

* Update Gymnasium

* Fix test

* Fix tests

* Forks don't have access to private variables

* Fix linter warnings

* Update read the doc env

* Fix env checker for GoalEnv

* Fix import

* Update env checker (more info) and fix dtype

* Use micromamab for Docker

* Update dependencies

* Clarify VecEnv doc

* Fix Gymnasium version

* Copy file only after mamba install

* [ci skip] Update docker doc

* Polish code

* Reformat

* Remove deprecated features

* Ignore warning

* Update doc

* Update examples and changelog

* Fix type annotation bundle (SAC, TD3, A2C, PPO, base class) (#1436)

* Fix SAC type hints, improve DQN ones

* Fix A2C and TD3 type hints

* Fix PPO type hints

* Fix on-policy type hints

* Fix base class type annotation, do not use defaults

* Update version

* Disable mypy for python 3.7

* Rename Gym26StepReturn

* Update continuous critic type annotation

* Fix pytype complain

---------

Co-authored-by: Carlos Luis <[email protected]>
Co-authored-by: Quentin Gallouédec <[email protected]>
Co-authored-by: Thomas Lips <[email protected]>
Co-authored-by: tlips <[email protected]>
Co-authored-by: tlpss <[email protected]>
Co-authored-by: Quentin GALLOUÉDEC <[email protected]>