V3.0 implementation design #576
Comments
Paradigm: I agree on using eager mode. This should make things much easier. However, I am uncertain about MPI: I favor dropping support for it. I do not see the benefit of it at this point, and it has been a source of headaches (e.g. Windows support, importing MPI-dependent algorithms).

Monitor: I do not know about "on by default", but I agree on having some unified structure for tracking episode stats which can then be read in callbacks (see e.g. #563). I would still keep the Monitor wrapper, which would just print these results to a .csv file as before.

Roadmap: I would go with the simplest algorithms, e.g. PPO and A2C, and see how things go from there (or would TD3 be easy after PPO?). This should be the default, but the very first thing to do would be to gather some benchmark results with the current stable-baselines (already in rl-zoo), and then run experiments against these and call it a day once similar performance is reached.

One thing I would add is support for Tuple/Dict observation/action spaces, as discussed in many issues (e.g. #502). Judging by all the questions, this is probably one of the biggest limitations of using stable-baselines on new kinds of tasks. This would include some non-backend-related modifications as well (e.g. how observations/actions are handled, as they cannot be stacked into numpy arrays).

I can work on the model saving/loading and conversion of older models, as well as finish the refactoring of
Monitor: Yes, that's the idea. In fact, if you specify

Roadmap: PPO and A2C have equivalent complexity, and in fact TD3 would be the easiest as it does not require a probability distribution. And yes, that's the whole point of having the zoo ;) (we have the hyperparameters and the expected performances).

Tuple/Dict observation/action spaces: this is tricky but should be possible with eager mode (at least for the observation); most of the work would be done by a

Perfect =) Note that I would put the migration scripts in a repo in the Stable-Baselines Team rather than in the package.
My brain did a derp and totally missed the point: this would get rid of the wrapper and move all the logging inside stable-baselines rather than on the env side. Sounds good, although I have found the Monitor wrapper useful outside stable-baselines as well (not that it is a complicated piece of code).
I am ok with v3.1 support for this, but we could keep it in mind when doing v3.0 in case some design decisions come up that could influence it. Indeed, on the backend side this is not a big thing. For action spaces, you could assume independence between actions, in which case you can multiply the probabilities together (not the ideal solution, as things probably are not independent, but it works for any case).
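A minimal sketch of that independence assumption (illustrative helper only, not an existing stable-baselines API): under independence, the joint probability of a composite action is the product of the per-component probabilities, so the log-probabilities simply add up.

```python
import numpy as np

def joint_log_prob(component_log_probs):
    """Log-probability of a composite (e.g. Tuple/MultiDiscrete) action under
    the independence assumption: p(a_1, ..., a_k) = p(a_1) * ... * p(a_k)."""
    return float(np.sum(component_log_probs))

# Example: a Tuple(Discrete(3), Discrete(2)) action whose two components were
# sampled with probabilities 0.5 and 0.2 -> joint probability 0.1.
log_p = joint_log_prob(np.log([0.5, 0.2]))  # == np.log(0.1)
```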
👍 I just want to add that I won't have too much time to put on this until after the Christmas holidays.
Thanks for taking the lead on this @araffin. I agree with the plan overall.

MPI: In favour of dropping support; it has caused me so many headaches, and we can get similar performance with other techniques.

Tuple/dict observation/action spaces: This would be useful for me, but I agree it should not be our immediate target.

GAIL: If SB moves to TF 2.0, we'll migrate https://github.com/humancompatibleai/imitation/ to TF 2.0 as well. It shouldn't be that hard, as the only TensorFlow code we have is some implementations of discriminator and reward networks.

Base classes: While we're making breaking API changes, I'd like to rethink

In particular, I've found the current design really awkward when I want to wrap policies. This has been quite common in my use cases (e.g. adding noise to the actions of a policy to test stability, or normalizing observations, which I cannot do at the VecEnv level in a multi-agent context since different policies have different parameters). The options here are either to try and extract the
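To illustrate the policy-wrapping use case, here is a minimal sketch (the wrapper class and noise choice are hypothetical; it only assumes a `predict()` method returning `(action, state)`, as current stable-baselines models provide):

```python
import numpy as np

class NoisyPolicyWrapper:
    """Sketch: delegate to an underlying model but perturb its actions,
    e.g. to test the stability of a trained policy."""

    def __init__(self, model, noise_std=0.1):
        self.model = model
        self.noise_std = noise_std

    def predict(self, observation, state=None, mask=None, deterministic=False):
        action, state = self.model.predict(observation, state=state, mask=mask,
                                           deterministic=deterministic)
        # Add Gaussian noise to the action before it is executed.
        noisy_action = action + np.random.normal(0.0, self.noise_std, size=np.shape(action))
        return noisy_action, state
```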
Hi guys, thank you for your contributions to this project. I have been working with parts of it on and off over a couple of months, so I thought I would share a few thoughts with you.

On choosing the TF style:
I believe that the portability of TF graphs is a powerful concept which in TF2.0 is enabled through tf.function (and would be compromised by bare eager execution), so I hope this reinforces your suggestion for this additional reason. As a matter of fact, graph portability is how I got interested in the SB project, as I was executing graphs in C++ with this project as an example.
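As a small illustration of that portability argument (the network, shapes and export path below are placeholders): a `tf.function` with a fixed input signature can be exported as a SavedModel and later executed from the TensorFlow C++ API.

```python
import tensorflow as tf

class PolicyModule(tf.Module):
    def __init__(self):
        super().__init__()
        # Placeholder policy network: 4-dimensional observation, 2-dimensional action.
        self.net = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="tanh"),
            tf.keras.layers.Dense(2),
        ])

    # The input signature pins the traced graph so it can be serialized.
    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 4], dtype=tf.float32)])
    def act(self, obs):
        return self.net(obs)

module = PolicyModule()
module.act(tf.zeros([1, 4]))  # build variables and trace the graph once
tf.saved_model.save(module, "/tmp/policy_savedmodel")
# The resulting SavedModel can then be loaded from C++ without any Python.
```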
On MPI:
I am not fully aware of the history of baselines or of which parts of PG methods are universally suitable for parallelization, but I would think that MPI is applicable when you cannot fit into one physical node, e.g. you require 100 logical cores or more and can tolerate the cost of communication. I suspect most people don't do that, so again, dropping an immediate hurdle for a prospective gain seems like a good choice.

On the feasibility of implementing the algorithms in TF2:
I was actually playing with porting SAC and DDPG (here), and managed to benchmark the former against 2 very different environments successfully (didn't know the zoo has hyperparameters available, lol). SAC_TF2 seemed to behave just like your implementation. It's definitely not library-quality, but perhaps it can still be helpful as a first draft of an idea.

On generic parts of the algorithms:
That's a hard one when looking at the details. Simple things like MLP creation inside policies could be shared, of course, but writing generic code without obscuring ideas behind many layers of indirection is problematic, to say the least. What I like most about this library is its relative readability, which helped me a lot as a learner. I have worked with just 3 of your implementations, which may not be enough to make a proper judgment, but what caught my eye was PPO2's Runner separation, which felt quite applicable to the other 2 implementations I touched (SAC and DDPG), where it wasn't used. I believe one of the ideas behind the changes to the Python TF frontend was to encourage splitting things up a bit more, and Runner seems to fit nicely into that.

On naming:
Great idea. There were examples that troubled me even a bit more than this, where parameters are only seemingly different and I had to perform some mental translation to see that they are not. This happens, for instance, in learning loops that present many flavors of similar things. E.g. I believe that

Hope something makes sense out of those :)
Really looking forward to the v3 version and more than willing to help. For reference, here are the `Callback` and `Processor` interfaces from keras-rl:
```python
from keras.callbacks import Callback as KerasCallback


class Callback(KerasCallback):
    def _set_env(self, env):
        self.env = env

    def on_episode_begin(self, episode, logs={}):
        """Called at beginning of each episode"""
        pass

    def on_episode_end(self, episode, logs={}):
        """Called at end of each episode"""
        pass

    def on_step_begin(self, step, logs={}):
        """Called at beginning of each step"""
        pass

    def on_step_end(self, step, logs={}):
        """Called at end of each step"""
        pass

    def on_action_begin(self, action, logs={}):
        """Called at beginning of each action"""
        pass

    def on_action_end(self, action, logs={}):
        """Called at end of each action"""
        pass
```

src: https://github.com/keras-rl/keras-rl/blob/master/rl/callbacks.py
```python
class Processor(object):
    """Abstract base class for implementing processors.

    A processor acts as a coupling mechanism between an `Agent` and its `Env`. This can
    be necessary if your agent has different requirements with respect to the form of the
    observations, actions, and rewards of the environment. By implementing a custom processor,
    you can effectively translate between the two without having to change the underlying
    implementation of the agent or environment.

    Do not use this abstract base class directly but instead use one of the concrete
    implementations or write your own.
    """

    def process_step(self, observation, reward, done, info):
        """Processes an entire step by applying the processor to the observation, reward, and info arguments.

        # Arguments
            observation (object): An observation as obtained by the environment.
            reward (float): A reward as obtained by the environment.
            done (boolean): `True` if the environment is in a terminal state, `False` otherwise.
            info (dict): The debug info dictionary as obtained by the environment.

        # Returns
            The tuple (observation, reward, done, info) with all elements processed.
        """
        observation = self.process_observation(observation)
        reward = self.process_reward(reward)
        info = self.process_info(info)
        return observation, reward, done, info

    def process_observation(self, observation):
        """Processes the observation as obtained from the environment for use in an agent and
        returns it.

        # Arguments
            observation (object): An observation as obtained by the environment.

        # Returns
            The processed observation.
        """
        return observation

    def process_reward(self, reward):
        """Processes the reward as obtained from the environment for use in an agent and
        returns it.

        # Arguments
            reward (float): A reward as obtained by the environment.

        # Returns
            The processed reward.
        """
        return reward

    def process_info(self, info):
        """Processes the info as obtained from the environment for use in an agent and
        returns it.

        # Arguments
            info (dict): An info dict as obtained by the environment.

        # Returns
            The processed info dict.
        """
        return info

    def process_action(self, action):
        """Processes an action predicted by an agent but before execution in an environment.

        # Arguments
            action (int): Action given to the environment.

        # Returns
            The processed action given to the environment.
        """
        return action

    def process_state_batch(self, batch):
        """Processes an entire batch of states and returns it.

        # Arguments
            batch (list): List of states.

        # Returns
            The processed list of states.
        """
        return batch

    @property
    def metrics(self):
        """The metrics of the processor, which will be reported during training.

        # Returns
            List of `lambda y_true, y_pred: metric` functions.
        """
        return []

    @property
    def metrics_names(self):
        """The human-readable names of the agent's metrics. Must return as many names as there
        are metrics (see also `compile`).
        """
        return []
```
Callbacks are already implemented (and there will be a collection for v2.10, see #348), but as discussed in #571 it is not possible to have a per-episode callback.

This already exists and is called a

For those two features, I recommend you look at our recent tutorial (which covers both callbacks and wrappers): https://github.com/araffin/rl-tutorial-jnrr19

EDIT: for the callback, we may add additional events like "on_training_end", but I'm not sure this is really needed in our case.
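To make the wrapper side concrete, a minimal sketch using the standard `gym.Wrapper` interface (the stat names are illustrative; this is not the actual Monitor implementation):

```python
import gym

class EpisodeStatsWrapper(gym.Wrapper):
    """Sketch: accumulate per-episode statistics and expose them via `info`."""

    def __init__(self, env):
        super().__init__(env)
        self.episode_reward = 0.0
        self.episode_length = 0

    def reset(self, **kwargs):
        self.episode_reward = 0.0
        self.episode_length = 0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.episode_reward += reward
        self.episode_length += 1
        if done:
            info["episode_stats"] = {"r": self.episode_reward, "l": self.episode_length}
        return obs, reward, done, info
```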
I know I am a bit late to the party, but I guess my PR would be totally broken in SB v3, especially with recurrent support being a question mark.
After thinking more about the first steps (I created a PR for that, #580), I think it would be clearer to start from scratch in another repo (in the Stable-Baselines team) for designing the new interface (and creating the first algorithms). Once we have a working version, we can start merging it back into the original one.
I tend to agree that a thorough modification like this would probably be easier done by starting (mostly) from scratch, especially given all the oddball remnants of the original baselines repo which still hang around and cause confusion every now and then. One nitpicky/tangential comment though: if most of the codebase will be redone, can we still call this a "fork of OpenAI Baselines", since most of the code would not really originate from there anymore? Just something that popped into my head.
Yes, the idea is to add things (mostly copying them over) when needed instead of deleting useless things.

We are already more "inspired by OpenAI Baselines" than a "fork" of it (we are still using the same structure and tricks, and it gives credibility to stable baselines). It does not really matter to me (it's more a question of branding), but we can change that later.
I've started a first draft for td3 here: https://github.com/Stable-Baselines-Team/stable-baselines-tf2 @Antymon @AdamGleave
It is puzzling though. I didn't experience difficulties in that area despite using tf version 2.0, especially not in eager mode (it is easier to mess up with tf.function). Well, glad you found something that works anyway (and I hope that didn't highlight a valid problem). Which gym environment did you use for benchmarking?
I'm using
I've added PPO (missing some tricks like grad norm clipping and orthogonal init), for both discrete and continuous actions. I also plan to type everything, but I will focus on the callback collection for now.
Only skimmed it very quickly, but it looks good; it seems clearer than the TF1 code.
I concur with Adam, this looks much cleaner than with TF1. I also like the use of
It is... It runs at 1100 FPS with pytorch and 210 FPS with tensorflow 2... I used the exact same code for testing the two (just replacing the import), and the tf2 code is very similar to the pytorch one anyway. So yes, I would appreciate help optimizing the new version ;).

EDIT: I created a colab notebook so you can also try it yourself
I have got good news! The commits: Stable-Baselines-Team/stable-baselines-tf2@769b03e and Stable-Baselines-Team/stable-baselines-tf2@2ca1d07
Great news, well done!
Well done @araffin! Can you advise us on what made the biggest difference exactly? Would my implementations also benefit from your changes?
Regarding the structure of the project, it would be beneficial to create some form of separation of concerns and responsibilities. At the moment, the algorithms employ every trick in the book, which, while great, comes with the burden of code duplication, rigidity and complexity that turns algorithms into god classes with 200+ line functions. In reality, most algorithms aren't much more than their objective function, and the extra tricks could be added through mixins.
Did you take a look at the tf2 draft?
How much modularity to aim for is a tricky balancing act for RL algorithms. A lot of modularity can make an algorithm very hard to understand unless you already have a good grasp of the framework. Spinning Up intentionally moved in the opposite direction: everything you need to understand an algorithm is in one file. Although this was a decision based on pedagogy, not long-term SWE, I think it does illustrate a tension here.
I completely agree with @AdamGleave on that point.

I wanted to take that example too ^^

I see Stable Baselines as being in the middle: we don't aim at full modularity (and want self-contained implementations), but we try to avoid code duplication too (for instance, in the tf2 draft, the
I find the draft to be good. I am suggesting slightly more modularity than the current implementation and Spinning Up, but perhaps not as much as ray[rllib]. @AdamGleave is right, modularity in RL is indeed a double-edged sword. I think the modularity work can be postponed until the algorithms are implemented with eager mode, as that will reduce a lot of clutter.
In the current version of SB, the user specifies the number of environment steps. It would be great if we could also specify the number of episodes.
This will be the case for SAC and TD3 (only valid when using one environment); it is not possible for PPO because of how the algorithm is defined.

EDIT: it will only apply to the gradient updates
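For illustration only, a rough sketch of what episode-based scheduling could look like for an off-policy algorithm with a single environment (the agent API below is a placeholder, not the stable-baselines training loop):

```python
def train_off_policy(env, agent, total_episodes, gradient_steps):
    """Collect one full episode, then run a batch of gradient updates."""
    for _ in range(total_episodes):
        obs, done = env.reset(), False
        while not done:
            action = agent.select_action(obs)                      # placeholder name
            next_obs, reward, done, _ = env.step(action)
            agent.buffer.add(obs, action, reward, next_obs, done)  # placeholder name
            obs = next_obs
        # Updates are scheduled at episode boundaries, which is why this option
        # fits SAC/TD3 but not PPO's fixed-length rollouts.
        for _ in range(gradient_steps):
            agent.update()                                         # placeholder name
```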
Here is a summary of what the new version will look like. The backend choice is summarized in issue #733. Whatever the backend, we plan to rewrite the lib more or less from scratch, while keeping the API and most of the useful things that are currently in the common folder.

Dropped Features
New Features
Delayed Features (so planned but not high-priority)
Internal changes
Most of the previous changes are already implemented in the tf2 draft: https://github.com/Stable-Baselines-Team/stable-baselines-tf2
@araffin I could contribute Energy Based Prioritisation (EBP) to the HER algorithm. It is a nice improvement overall without increasing computational time, and not difficult to implement; the user could opt to use

Let me know if this is something you would be interested in.
The initial version(s) will focus on porting what is currently in stable-baselines (minus the parts that will be dropped) to the new backend, and making sure the performance does not change from the current implementations. New features like that would be considered later.
It will be quite useful to implement
It would be helpful to support viskit logging. Viskit is developed specifically for RL and plots RL experiments much better than tensorboard. https://github.com/rlworkgroup/viskit
You mean dowel? Actually, the logger from Stable-Baselines is quite close to the one from rllab, which is the previous version of garage, also developed by the rlworkgroup.
@araffin are you referring to dowel or viskit? viskit is definitely a little stale and I've been meaning to merge @vitchyr's improvements, but it's low-priority as it seems to see little usage compared to TensorBoard. dowel is most definitely actively developed. One of the nice things about pulling single-purpose software into single-purpose packages is that we don't have to change it very much :). If you are interested in more typing for RL, take a look at akro. It currently depends on gym.spaces, but we plan on removing that dep and making shapes part of the type signatures in the near future. It has some nice helpers for manipulating data from known spaces, e.g. concatenating a bunch of dict observations.
For the logger, I'm referring to dowel. It would be nice to have but in my opinion it adds too many dependencies for now (e.g. tensorboardX should be optional).
For now,
@araffin I'd love to hear your feedback on dowel. Feel free to hop on over to the Issues page and leave some questions/concerns! Is the tbX dependency the only thing keeping you from using dowel (note tbX doesn't rely on TF and only depends on numpy, protobuf, and six)? I'm happy to update it to make tbX an optional extra if that's what's blocking you.
A beta version is now online: https://github.com/DLR-RM/stable-baselines3
That's nice of you. I would say that for now we have a simple logger implementation that fits our needs, so there is no real need to change it. But maybe if we encounter some limitations and don't have the time to maintain it, we may switch to dowel ;)
Closing this issue as any new features/changes should be discussed on the new repo.
Version 3 is now online: https://github.com/DLR-RM/stable-baselines3
Hello,
Before starting the migration to tf2 for stable baselines v3, I would like to discuss some design points we should agree on.
Which tf paradigm should we use?
I would go for a pytorch-like "eager mode", wrapping the methods with `tf.function` to improve performance (as is done here). The define-by-run style is usually easier to read and debug (and I can compare it to my internal pytorch version), and wrapping it with `tf.function` should preserve performance.
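A minimal sketch of that pattern, assuming a plain Keras model and a placeholder loss (not the actual stable-baselines code): the update is written in eager style but compiled by `tf.function`.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=3e-4)

@tf.function  # traced once, then executed as a graph for speed
def train_step(model, obs, targets):
    with tf.GradientTape() as tape:
        predictions = model(obs, training=True)
        loss = tf.reduce_mean(tf.square(targets - predictions))  # placeholder loss
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```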
What is the roadmap?
My idea would be:
I would go for PPO/TD3 and I can be in charge of that.
This would allow us to discuss concrete implementation details.
I'm afraid that the remaining ones (ACKTR, GAIL and ACER) are not the easiest ones to implement.
And for GAIL, we can refer to https://github.com/HumanCompatibleAI/imitation by @AdamGleave et al.
Are there other breaking changes we should make? Changes in the interface?
Some answers to these questions are linked here: #366
There are different things that I would like to change/add.
First, I would add evaluation to the training loop. That is to say, we allow the user to pass an `eval_env` on which the agent will be evaluated every `eval_freq` steps for `n_eval_episodes` episodes. This is a true measure of the agent's performance, compared to the training reward.
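A sketch of what such a periodic evaluation could look like (the helper and its arguments are illustrative, not the final API; it assumes the usual `model.predict()` interface):

```python
import numpy as np

def evaluate(model, eval_env, n_eval_episodes=5):
    """Run n_eval_episodes episodes on a separate env and return the mean return."""
    episode_rewards = []
    for _ in range(n_eval_episodes):
        obs, done, total_reward = eval_env.reset(), False, 0.0
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, done, _ = eval_env.step(action)
            total_reward += reward
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards)

# Inside the training loop, roughly:
# if step % eval_freq == 0:
#     mean_reward = evaluate(model, eval_env, n_eval_episodes)
```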
I would like to manipulate only `VecEnv` in the algorithms (and wrap the gym.Env automatically if necessary); this simplifies things (so we don't have to think about what type the env is). Currently, we are using an `UnVecEnvWrapper`, which makes things complicated for DQN for instance.
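The automatic wrapping could be as simple as the following sketch (the helper name is illustrative; `DummyVecEnv` and `VecEnv` already exist in `stable_baselines.common.vec_env`):

```python
from stable_baselines.common.vec_env import DummyVecEnv, VecEnv

def maybe_wrap_env(env):
    """Ensure the algorithm only ever sees a VecEnv by wrapping a plain
    gym.Env into a single-environment DummyVecEnv."""
    if not isinstance(env, VecEnv):
        env = DummyVecEnv([lambda: env])
    return env
```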
Should we maintain MPI support? I would favor switching to `VecEnv` too; this removes a dependency and unifies the rest (and would maybe allow an easy way to multiprocess SAC/DDPG or TD3, cf #324). This would mean that we remove PPO1 too.

The next thing I would like to make default is the Monitor wrapper. This allows retrieving statistics about the training and would remove the need for a buggy version of `total_episode_reward_logger` for computing reward (cf #143).

As discussed in another issue, I would like to unify the learning rate schedules too (this would not be too difficult).
I would also like to unify the parameter names (e.g. `ent_coef` vs `ent_coeff`).

Anyway, I plan to do a PR and we can then discuss things there.
Regarding the transition
As we will be switching to the Keras interface (at least for most of the layers), this will break previously saved models. I propose to create scripts that convert old models to the new SB version rather than trying to be backward-compatible.
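A possible shape for such a conversion script, assuming the `get_parameters()` helper that current v2 models expose and a hypothetical name mapping supplied for the new version (a sketch, not a finished tool):

```python
import pickle
from stable_baselines import PPO2  # old (v2.x) model class

def export_old_model(old_path, out_path, name_map=None):
    """Load a v2 model and dump its parameters as a name -> ndarray mapping
    that a v3 loader could map onto the new layers."""
    old_model = PPO2.load(old_path)
    params = old_model.get_parameters()  # dict of variable name -> np.ndarray
    if name_map is not None:
        params = {name_map.get(k, k): v for k, v in params.items()}
    with open(out_path, "wb") as f:
        pickle.dump(params, f)
```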
Pinging @hill-a @erniejunior @AdamGleave @Miffyli
PS: I hope I did not forget any important point
EDIT: the draft repo is here: https://github.com/Stable-Baselines-Team/stable-baselines-tf2 (ppo and td3 included for now)