Running episodes in parallel #21
Comments
Hello, thanks for your interest in cherry! Currently, there are two ways to run episodes in parallel: use …, or use ….
Note that some environment wrappers will raise errors when using …. Hope this helps!
PS: Obviously, a third way is to use ….
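For readers landing on this thread, here is a rough, cherry-agnostic sketch of collecting episodes in parallel with the Python standard library. It is only an illustration under the classic gym API (4-tuple `step`, `env.seed`); the environment name, worker count, and random policy are placeholders, not part of cherry's API.

```python
# Sketch only: parallel episode collection with multiprocessing.
# Assumes the classic gym API; not cherry-specific.
import multiprocessing as mp

import gym


def collect_episode(seed):
    # Each worker builds its own environment so nothing is shared across processes.
    env = gym.make('CartPole-v1')
    env.seed(seed)  # classic gym API; newer gym uses env.reset(seed=seed)
    state = env.reset()
    transitions = []
    done = False
    while not done:
        action = env.action_space.sample()  # stand-in for a real policy
        next_state, reward, done, _ = env.step(action)
        transitions.append((state, action, reward, next_state, done))
        state = next_state
    env.close()
    return transitions


if __name__ == '__main__':
    # Collect four episodes concurrently, one per worker process.
    with mp.Pool(processes=4) as pool:
        episodes = pool.map(collect_episode, range(4))
    print([len(episode) for episode in episodes])
```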
Hey, thanks for the comments. Upon looking at the options I decided to go with `gym.vector.AsyncVectorEnv`:

```python
import cherry as ch
from cherry.envs import Runner
import gym
from gym.spaces import Tuple, Box


class ExampleEnv(gym.Env):
    def __init__(self):
        self.observation_space = Tuple((
            Box(-1, 1, shape=(10, 10)),
            Box(-1, 1, shape=(10,)),
        ))
        self.action_space = Box(-1, 1, shape=(1,))

    def reset(self):
        return self.observation_space.sample()

    def step(self, action):
        # Every episode terminates after a single step with a constant reward.
        return self.observation_space.sample(), 1, True, {}


def make_env():
    return ExampleEnv()


vector_env = gym.vector.AsyncVectorEnv([make_env] * 2)
env = ch.envs.Torch(vector_env)
policy = lambda x: ch.totensor(vector_env.action_space.sample())
env = Runner(env)
replay = env.run(policy, episodes=1)
print(replay)
```

Error:
Does the Runner wrapper support Tuple observations, or am I doing something wrong?
Thanks for the reproducible issue. As of now, …. If you only need one entry of the state tuple, you can use a wrapper as in the grid-world example. If you do need the full tuple, a solution is to implement your own loop that gathers experience, and when you use ….
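For illustration, here is a minimal sketch of the first suggestion: a gym `ObservationWrapper` that keeps only one entry of the Tuple observation, so downstream wrappers see a single Box space. The wrapper name, the `index` argument, and the choice of entry are assumptions made for this example; they are not part of cherry's API.

```python
import gym


class SelectObservation(gym.ObservationWrapper):
    """Keep a single entry of a Tuple observation space (illustrative only)."""

    def __init__(self, env, index=0):
        super().__init__(env)
        self.index = index
        # Advertise only the selected component as the observation space.
        self.observation_space = env.observation_space.spaces[index]

    def observation(self, observation):
        # Drop the other entries of the tuple.
        return observation[self.index]


# Hypothetical usage with the ExampleEnv defined above:
# env = ch.envs.Torch(SelectObservation(ExampleEnv(), index=0))
```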
Thanks, closing this as resolved.
Hey, the library looks great.
I was wondering how to run multiple episodes at the same time using multiple workers. The Runner wrapper doesn't seem to support specifying a number of workers. For example, increasing the `episodes` value in the PPO example seems to run those episodes sequentially. Any documentation on this would be appreciated.