
Conversation

@gjoliver
Member

Why are these changes needed?

This is a more exploratory PR; I am not saying we should commit it for sure.
While reading the monitoring-wrapper-related code, I noticed we have logic that directly modifies the class type of a MultiAgentEnv instance back and forth, which confused me a bit.
So I made this change, mostly to double-check with you why we need to do that instead of just having the MultiAgentEnv and VideoMonitor classes inherit the right type.
I don't know if I am actually breaking anything somewhere :)
Thanks.
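
For context, the pattern I mean looks roughly like this (simplified sketch with stand-in names, not the exact RLlib code):

import gym

# Before this PR, MultiAgentEnv did not inherit gym.Env, so to pass a
# wrapper's isinstance(env, gym.Env) check, the instance's class was
# rewritten on the fly and later restored:
env = MyMultiAgentEnv()  # hypothetical MultiAgentEnv subclass
original_cls = env.__class__
env.__class__ = type("MultiAgentEnvToGym", (original_cls, gym.Env), {})
env = VideoMonitor(env, video_dir)  # wrapper that requires a gym.Env
# ... and later, swap the real class back:
env.unwrapped.__class__ = original_cls

With this change, MultiAgentEnv simply is a gym.Env, so no __class__ manipulation should be needed.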

Checks

  • [*] I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
  • [*] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • [*] Unit tests
    • Release tests
    • This PR is not tested :(

@sven1977 (Contributor) left a comment


Looks great! We should probably add some tests for rendering and recording for different env types (simple gym.Env -> BaseEnv; MultiAgentEnv -> BaseEnv).
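
E.g., something along these lines (rough sketch, assumed import paths):

import gym
from ray.rllib.env.multi_agent_env import MultiAgentEnv

def test_multi_agent_env_is_gym_env():
    # After this change, any MultiAgentEnv subclass should pass gym's
    # isinstance() checks without any __class__ manipulation.
    class DummyEnv(MultiAgentEnv):
        def reset(self):
            return {"agent_0": 0}

        def step(self, action_dict):
            return {"agent_0": 0}, {"agent_0": 0.0}, {"__all__": True}, {}

    assert isinstance(DummyEnv(), gym.Env)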


 @PublicAPI
-class MultiAgentEnv:
+class MultiAgentEnv(gym.Env):
Contributor

Wow! This is great! I tried this a few months ago, and one of the gym.Env parent methods complained because the rewards were no longer floats (but multi-agent Dict[str, float]s).
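
The failure mode was roughly this (illustrative snippet, hypothetical env):

# A multi-agent step() returns per-agent dicts, so any gym.Env
# base-class helper assuming a scalar float reward breaks.
obs, rewards, dones, infos = env.step({"agent_0": 0, "agent_1": 1})
# rewards == {"agent_0": 1.0, "agent_1": -0.5}  -> Dict[str, float], not float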

@gjoliver force-pushed the multi-agent-gym-env branch from dedeb7d to 5f4da39 on September 2, 2021, 05:42
@sven1977 merged commit 336e799 into ray-project:master on Sep 3, 2021
@sven1977 changed the title from "Make MultiAgentEnv inherit gym.Env to avoid direct class type manipulation" to "[RLlib] Make MultiAgentEnv inherit gym.Env to avoid direct class type manipulation" on Sep 3, 2021
@gjoliver deleted the multi-agent-gym-env branch on September 3, 2021, 21:11
@rusu24edward

@sven1977 I know I'm a few versions late to the discussion, but this change imposes some additional requirements that weren't there before, and many of my simulations don't use everything from the gym.Env API. The old rllib.MultiAgentEnv only required step and reset, which worked very well for me. I can probably create a wrapper that adapts my simulations to this new requirement, but I also wanted to ask whether there is another Env class I can target now that MultiAgentEnv requires gym.Env interface support.

@gjoliver
Member Author

Does your env have observation and action spaces?
That's pretty much all we need in addition to step and reset.
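
Something minimal like this should work (rough sketch, untested):

import gym
from ray.rllib.env.multi_agent_env import MultiAgentEnv

class MySim(MultiAgentEnv):
    def __init__(self, config=None):
        # Per-agent spaces; these attributes are the only gym.Env
        # "extras" needed beyond step() and reset().
        self.observation_space = gym.spaces.Discrete(4)
        self.action_space = gym.spaces.Discrete(2)

    def reset(self):
        return {"agent_0": 0}

    def step(self, action_dict):
        obs = {"agent_0": 0}
        rewards = {"agent_0": 1.0}
        dones = {"agent_0": True, "__all__": True}
        return obs, rewards, dones, {}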

@rusu24edward

> I can probably create a wrapper that adapts my simulations to this new requirement

I can put them in here (rough sketch below).

> I also wanted to ask if there's another Env class I can target now that MultiAgentEnv requires gym.Env interface support?

Would still like to hear about this.
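
For reference, the kind of adapter I have in mind (rough, untested sketch with made-up names):

from ray.rllib.env.multi_agent_env import MultiAgentEnv

class SimAdapter(MultiAgentEnv):
    # Wrap a step/reset-only simulation and attach the spaces that the
    # gym.Env-based MultiAgentEnv now expects.
    def __init__(self, sim, observation_space, action_space):
        self.sim = sim
        self.observation_space = observation_space
        self.action_space = action_space

    def reset(self):
        return self.sim.reset()

    def step(self, action_dict):
        return self.sim.step(action_dict)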

@rusu24edward

Actually, this change didn't affect my environments. It was #21063 that refactored MultiAgentEnvs. I'll update my code to work with this better.

