Conversation

@sven1977 (Contributor) commented Nov 11, 2024

Cleanup examples folder vol 32: Enable RLlib + Serve example in CI and translate to new API stack.

Why are these changes needed?

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: sven1977 <[email protected]>
@sven1977 added labels: serve (Ray Serve Related Issue), rllib (RLlib related issues), rllib-docs-or-examples (Issues related to RLlib documentation or rllib/examples), rllib-newstack — Nov 11, 2024
@simonsays1980 (Contributor) left a comment

LGTM. In the future, we should extend this example with a connector run, as this will probably be the default in practice.

```diff
- action = self.algo.compute_single_action(obs)
+ # Compute and return the action for the given observation (create a batch
+ # with B=1 and convert to torch).
+ output = self.rl_module.forward_inference(
```
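The diff above moves from the old-stack `compute_single_action` call to the new-stack pattern: wrap the single observation in a batch of size one, run the module's inference forward pass, then unbatch the result. The sketch below illustrates only that batching pattern in plain Python; `forward_inference` here is a hypothetical stand-in, not the real RLModule (which expects torch tensors and returns action-distribution inputs).

```python
# Minimal sketch of the batch-of-one inference pattern from the diff above.
# `forward_inference` is a hypothetical stand-in for the real RLModule call.

def forward_inference(batch):
    # Stand-in policy: pick action 0 for every observation in the batch.
    return {"actions": [0 for _ in batch["obs"]]}

def compute_action(obs):
    batch = {"obs": [obs]}            # create a batch with B=1
    output = forward_inference(batch)
    return output["actions"][0]       # unbatch: take the single action

print(compute_action([0.1, 0.2]))  # -> 0
```

The real code additionally converts the batch to torch tensors and maps the module's output (e.g. action-distribution inputs) back to an environment action.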
Contributor:

For better practical usability, we might need to extend this example with a connector run. That will likely be the default we see in practice.

Contributor (PR author):

Yeah, makes sense!
Maybe add another example where we restore the EnvToModule pipeline plus the RLModule, like we do in the ray.rllib.examples.inference.policy_inference_after_training_w_connector.py script.


```diff
  # If episode done -> reset to get initial observation of new episode.
- if done:
+ if terminated or truncated:
```
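The diff above reflects the Gymnasium API change: `env.step()` returns separate `terminated` (natural episode end) and `truncated` (e.g. time-limit cutoff) flags instead of a single `done`. The loop below sketches that pattern; `DummyEnv` is a hypothetical stand-in that ends each episode after three steps.

```python
# Sketch of the terminated/truncated episode loop from the diff above.
# DummyEnv is a hypothetical stand-in env ending each episode after 3 steps.

class DummyEnv:
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return 0, {}  # (observation, info), per the Gymnasium reset API

    def step(self, action):
        self.t += 1
        terminated = self.t >= 3   # natural end of the episode
        truncated = False          # a time limit would set this instead
        return self.t, 1.0, terminated, truncated, {}

env = DummyEnv()
obs, info = env.reset()
episodes = 0
for _ in range(7):
    obs, reward, terminated, truncated, info = env.step(0)
    if terminated or truncated:    # replaces the old single `done` flag
        obs, info = env.reset()    # reset to get the new episode's first obs
        episodes += 1
print(episodes)  # -> 2
```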
Contributor:

Awesome example!!

@sven1977 sven1977 enabled auto-merge (squash) November 12, 2024 11:03
@github-actions bot added the go label (add ONLY when ready to merge, run all tests) Nov 12, 2024
@sven1977 sven1977 merged commit c47bd45 into ray-project:master Nov 12, 2024
6 of 7 checks passed
@sven1977 sven1977 deleted the enable_rllib_plus_serve_example_in_ci branch November 12, 2024 16:58
JP-sDEV pushed a commit to JP-sDEV/ray that referenced this pull request Nov 14, 2024
mohitjain2504 pushed a commit to mohitjain2504/ray that referenced this pull request Nov 15, 2024
…in CI and translate to new API stack. (ray-project#48687)

Signed-off-by: mohitjain2504 <[email protected]>

Labels: community-backlog, go (add ONLY when ready to merge, run all tests), rllib (RLlib related issues), rllib-docs-or-examples (Issues related to RLlib documentation or rllib/examples), serve (Ray Serve Related Issue)


3 participants