[Blog post] New Step API #1
Comments
For the TL;DR, I was thinking more of how PyTorch handles breaking changes, for instance: https://github.com/pytorch/pytorch/releases/tag/v1.12.0 (see the "Backwards Incompatible changes" section).
The sentence "Gym API’s done signal only referred to the fact that the environment needed resetting with info, “TimeLimit.truncation”=True or False specifying if truncation or termination." sounds weird; it should probably be cut in two.
TL;DR
In openai#2752, we have recently changed the Gym `Env.step` API.

In Gym versions prior to v25, the step API was `obs, reward, done, info = env.step(action)`. In Gym v26, the step API was changed to `obs, reward, terminated, truncated, info = env.step(action)`.
For training agents, `terminated` should generally be used where `done` was used previously.

In v26, all internal Gym environments and wrappers solely support the (new) terminated / truncated step API, with support for the (old) done step API provided through the `EnvCompatibility` wrapper for converting between the old and new APIs, accessible through `gym.make(..., apply_api_compatibility=True)`.
It should be noted that v25 includes the changes listed below, but they are turned off by default and require parameters not discussed in this blog post. Therefore, we recommend that users either update to v26 or use v23.1, which does not include these changes.
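As a rough sketch of what this looks like in practice (the environment ids below are placeholders, and "OldDoneEnv-v0" is a hypothetical environment still written against the old API; this is not code from the post):

```python
import gym

# New terminated / truncated step API (Gym v26): step returns five values.
env = gym.make("CartPole-v1")
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
done = terminated or truncated  # only used to decide when to reset

# An environment written against the old done-based API can be converted at make-time.
old_style_env = gym.make("OldDoneEnv-v0", apply_api_compatibility=True)
```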
For a detailed explanation of the changes and reasoning, read the rest of this post.
(New) Terminated / Truncated Step API
In this post, we explain the motivation for the change, what the new `Env.step` API is, why alternative implementations were not selected, and the suggested code changes for developers.

Introduction
To prevent an agent from wandering in circles forever without accomplishing anything, and for other practical reasons, Gym lets environments specify a time limit within which the agent must complete the task. Importantly, this time limit is outside of the agent’s knowledge, as it is not contained within its observations. Therefore, when the agent reaches the time limit, the environment should be reset; however, this type of reset should be treated differently from when the agent reaches a goal and the environment ends. We refer to the first type as truncation, when the agent reaches the time limit (maximum number of steps) for the environment, and the second type as termination, when the environment state reaches a specific condition (e.g. the agent reaches the goal). For a more precise discussion of how Gym works in relation to RL theory, see the theory section.
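In practice, such a limit is usually attached with Gym's `TimeLimit` wrapper; a minimal sketch (the environment id and step count are just examples):

```python
import gym
from gym.wrappers import TimeLimit

# Truncate every episode after at most 100 steps, independent of the task's own end condition.
env = TimeLimit(gym.make("CartPole-v1"), max_episode_steps=100)
```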
The problem is that most users of Gym have treated termination and truncation as identical. Gym's step API `done` signal only indicated that the environment needed resetting. Whether the cause was truncation or termination was specified separately through `info`, with `"TimeLimit.truncated"=True` or `False`.

This matters for most Reinforcement Learning algorithms [1] that perform bootstrapping to update the value function or related estimates (e.g. the Q-value), as used by DQN, A2C, etc. In the following example for updating the Q-value, the next Q-value depends on whether the environment has terminated.
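The original post shows this as an equation; a rough sketch of the same two-case target in code (the function and variable names here are illustrative, not from the post):

```python
import numpy as np

def q_target(reward: float, next_q_values: np.ndarray, gamma: float, terminated: bool) -> float:
    """One-step Q-learning target: bootstrap from the next state only if the episode has NOT terminated.

    Truncation (e.g. a time limit) is not a terminal state, so the next state's value is still used.
    """
    if terminated:
        return reward                                     # case 1: terminal state, next-state value is 0
    return reward + gamma * float(next_q_values.max())    # case 2: bootstrap from the next state's value
```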
This can be seen in Algorithm 1 (page 5) of the original DQN paper; however, we note that this case is often ignored when writing the pseudocode for Reinforcement Learning algorithms.
Therefore, if the environment has truncated and not terminated, case 2 of the bootstrapping should be computed; however, if the case is determined by `done`, this can result in the wrong implementation. This was the main motivation for changing the step API to encourage accurate implementations, a critical factor for academia when replicating work.

The reason that most users are unaware of this difference between truncation and termination is that documentation on this issue was missing. As a result, a large amount of tutorial code has incorrectly implemented RL algorithms. This can be seen in the top 4 tutorials found by searching Google for “DQN tutorial”, [1], [2], [3], [4] (checked 21 July 2022), where only a single website (TensorFlow Agents) implements truncation and termination correctly. Importantly, the reason that TensorFlow Agents does not fall for this issue is that Google has recognised this issue with the Gym `step` implementation and has designed their own API in which the `step` function returns the `discount factor` instead of `done`. See the time step sketch below.
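As a rough illustration of that style of API (a sketch of a dm_env / TF-Agents-style time step, not code reproduced from the post):

```python
from typing import Any, NamedTuple

class TimeStep(NamedTuple):
    """Sketch of a dm_env / TF-Agents-style time step.

    Instead of `done`, the environment returns a per-step discount:
    discount == 0.0 encodes termination, while a truncated episode keeps
    discount > 0.0, so bootstrapping remains correct by construction.
    """
    step_type: int      # FIRST / MID / LAST
    reward: float
    discount: float     # 0.0 on termination, > 0.0 otherwise (including truncation)
    observation: Any
```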
(New) Terminated / Truncated Step API

In this section, we discuss the (new) terminated / truncated step API along with the changes made to Gym that will affect users. We should note that these changes might not be implemented by all Python modules or tutorials that use Gym. In v0.25, this behaviour will be turned off by default (in the majority of cases), but in v0.26+, support for the old step API is provided solely through the `EnvCompatibility` and `StepAPICompatibility` wrappers.

`terminal_reward`, `terminal_observation`, etc. are replaced with `final_reward`, `final_observation`, etc. The intention is to reserve the 'termination' wording for only when `terminated=True`. (For some motivation: Sutton and Barto use terminal states to refer specifically to special states whose values are 0, states at the end of the MDP. This is not true for a truncation, where the value of the final state need not be 0, so the current usage of `terminal_obs` etc. would be incorrect if we adopt this definition.)
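As a sketch of where these names appear (assuming the v26 vector-environment convention where finished sub-environments report their last observation through dictionary-based `info`):

```python
import gym

# Vectorised environments reset finished sub-environments automatically; the last
# observation of a finished episode is reported through info (a sketch, assuming the
# v26 "final_observation" / "final_info" keys).
envs = gym.vector.make("CartPole-v1", num_envs=4)
obs, infos = envs.reset()
obs, rewards, terminateds, truncateds, infos = envs.step(envs.action_space.sample())
if "final_observation" in infos:
    final_obs = infos["final_observation"]  # entries are None for sub-envs that did not finish
```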
Suggested Code changes

We believe there are primarily two changes that will have to be made by developers updating to the new Step API.

1. Update `env.step` calls to unpack 5 elements, `obs, reward, terminated, truncated, info = env.step(action)`. When looping through the environment, you then need to check whether the environment needs resetting with `done = terminated or truncated` (see the sketch after this list).
2. Where the old API is still in use, extract `terminated` and `truncated` from `done` and `info["TimeLimit.truncated"]` to correctly implement many RL algorithms. We should note that with the (old) done step API it is not possible for `terminated` and `truncated` to both be true, which is possible with the new API. With the (new) terminated / truncated step API, `terminated` and `truncated` are known immediately from `env.step`. How `terminated` and `truncated` are used is unique to each algorithm's implementation, but the termination information is generally critical for training algorithms that use bootstrapped estimates, and in replay buffers it can generally replace `done`. However, check whether the training code has been updated.
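A rough sketch combining both changes (the environment id, loop length and buffer are placeholders, not code from the post):

```python
import gym

env = gym.make("CartPole-v1")
replay_buffer = []  # in practice, a proper replay buffer class

obs, info = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # stand-in for the agent's policy
    next_obs, reward, terminated, truncated, info = env.step(action)

    # Change 2: store `terminated` (not `done`) so the learner can bootstrap correctly,
    # i.e. it still bootstraps from next_obs when the episode was merely truncated.
    replay_buffer.append((obs, action, reward, next_obs, terminated))

    # Change 1: the environment needs resetting if it terminated OR was truncated.
    if terminated or truncated:
        obs, info = env.reset()
    else:
        obs = next_obs
```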
Backward compatibility

To allow conversions between the done step API and the terminated / truncated step API, we provide `convert_to_terminated_truncated_step_api` and `convert_to_done_step_api` in `utils/step_api_compatibility.py`. These functions work with vector environments (with both list- and dictionary-based info) and are incorporated into the `StepAPICompatibility` and `EnvCompatibility` wrappers.

step_api_compatibility function

This function is similar to the wrapper; it is used for backward compatibility in wrappers and vector environments, at the interfaces between environments, wrappers, vectorisation and outside code. Example usage:
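A sketch of what such usage might look like (the `output_truncation_bool` argument is an assumption about the function's signature, and "OldDoneEnv-v0" is a hypothetical old-API environment):

```python
import gym
from gym.utils.step_api_compatibility import step_api_compatibility

# "OldDoneEnv-v0" is a hypothetical environment that still returns the old
# 4-tuple (obs, reward, done, info) from step().
old_env = gym.make("OldDoneEnv-v0")
action = old_env.action_space.sample()

# Convert the old-style return to the new 5-tuple at the interface between
# old and new code (assuming a flag like `output_truncation_bool` selects the output API).
obs, reward, terminated, truncated, info = step_api_compatibility(
    old_env.step(action), output_truncation_bool=True
)
```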
With the step compatibility functions, whenever an environment (or sub-environment with vectorisation) is terminated or truncated, `"TimeLimit.truncated"` is added to the step `info`. However, as the info cannot express `terminated` and `truncated` both being True (only one can be marked), when converting, `termination` is favoured over `truncation`. I.e. if `terminated=True` and `truncated=True`, then `done=True` and `info['TimeLimit.truncated']=False`. The reverse conversion makes the corresponding assumption: if `done=True` and `info["TimeLimit.truncated"]=True`, then `terminated=False` and `truncated=True`.
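A sketch of the conversion rules described above (illustrative only, with hypothetical helper names; the real functions in `utils/step_api_compatibility.py` additionally handle vector environments):

```python
def to_done_step(obs, reward, terminated, truncated, info):
    """New (terminated / truncated) -> old (done) API; termination is favoured over truncation."""
    done = terminated or truncated
    info = {**info, "TimeLimit.truncated": truncated and not terminated}
    return obs, reward, done, info


def to_terminated_truncated_step(obs, reward, done, info):
    """Old (done) -> new (terminated / truncated) API, using the "TimeLimit.truncated" info key."""
    truncated = done and info.get("TimeLimit.truncated", False)
    terminated = done and not truncated
    return obs, reward, terminated, truncated, info
```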
Alternative Implementations

While developing this new Step API, a number of developers asked why alternative implementations were not taken. There are four primary alternative approaches that we considered:

1. A custom `done` type: a Python bool subclass that acts identically to a boolean except that it additionally encodes the `truncation` information. Similar to this is a proposal to replace `done` with an integer to allow the four possible `termination` and `truncation` states. However, the primary problem with both of these implementations is that they are backwards compatible, meaning that (old) done-based code that has not been properly updated for the new custom boolean or integer step API could cause significant bugs to occur. As a result, we believe this proposal could cause significantly more issues.
2. Having the `step` function return the `discount_factor` instead of `done`, as TensorFlow Agents does. This allows environments to have variable `discount_factors` over an episode and can address the issue with termination and truncation. However, we identify two problems with this proposal. The first is similar to the custom boolean implementation: because the change is backwards compatible, it becomes harder to assess whether tutorial code has been updated to the new API. The second issue is that Gym provides an API solely for environments and is agnostic to the solving method, so adding the discount factor would change one of the core Gym philosophies.

Related Reinforcement Learning Theory
Reinforcement Learning tasks are grouped into two categories: episodic tasks and continuing tasks. Episodic tasks refer to environments that terminate in a finite number of steps. These can further be divided into Finite-Horizon tasks, which end in a fixed number of steps, and Indefinite-Horizon tasks, which terminate in an arbitrary number of steps but must end (e.g. goal completion, task failure). In comparison, Continuing tasks refer to environments which have no end (e.g. some control process tasks).
The state that leads to an episode ending in episodic tasks is referred to as a terminal state, and the value of this state is 0. The episode is said to have terminated when the agent reaches this state. All this is encapsulated within the Markov Decision Process (MDP) which defines a task (Environment).
A critical difference occurs in practice when we choose to end the episode for reasons outside the scope of the agent (MDP). This is typically in the form of time limits set to limit the number of timesteps per episode (useful for several reasons - batching, diversifying experience etc.). This kind of truncation is essential in training continuing tasks that have no end, but also useful in episodic tasks that can take an arbitrary number of steps to end. This condition can also be in the form of an out-of-bounds limit, where the episode ends if a robot steps out of a boundary, but this is more due to a physical restriction and not part of the task itself.
We can thus differentiate the reason for an episode ending into two categories - the agent reaching a terminal state as defined under the MDP of the task, and the agent satisfying a condition that is out of the scope of the MDP. We refer to the former condition as termination and the latter condition as truncation.
Note that while Finite-Horizon tasks end due to a time limit, this would be considered a termination since the time limit is built into the task. For these tasks, to preserve the Markov property, it is essential to add information about the 'time remaining' to the state. For this reason, Gym includes a `TimeAwareObservation` wrapper for users who wish to include the current time step in the agent's observation.
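A minimal usage sketch (assuming the wrapper name above; the environment id is just an example):

```python
import gym
from gym.wrappers import TimeAwareObservation

# Appends the current time step to the observation so the agent can "see" the time limit.
env = TimeAwareObservation(gym.make("CartPole-v1"))
obs, info = env.reset()  # obs now includes the elapsed time step
```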