From 712406f63e0331d3111ecb1f21a13dc5a7071a29 Mon Sep 17 00:00:00 2001
From: Chris Wing
Date: Tue, 9 Dec 2025 11:41:03 -0800
Subject: [PATCH] Change NeMo Gym from framework to library

Signed-off-by: Chris Wing
---
 README.md                              | 6 +++---
 docs/get-started/setup-installation.md | 4 ++--
 docs/index.md                          | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index bbcf42484..e2d010022 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # NeMo Gym
 
-NeMo Gym is a framework for building reinforcement learning (RL) training environments for large language models (LLMs). It provides infrastructure to develop environments, scale rollout collection, and integrate seamlessly with your preferred training framework.
+NeMo Gym is a library for building reinforcement learning (RL) training environments for large language models (LLMs). It provides infrastructure to develop environments, scale rollout collection, and integrate seamlessly with your preferred training framework.
 
 NeMo Gym is a component of the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/), NVIDIA’s GPU-accelerated platform for building and training generative AI models.
 
@@ -22,7 +22,7 @@ NeMo Gym is a component of the [NVIDIA NeMo Framework](https://docs.nvidia.com/n
 
 NeMo Gym is designed to run on standard development machines:
 
-- **GPU**: Not required for NeMo Gym framework operation
+- **GPU**: Not required for NeMo Gym library operation
   - GPU may be needed for specific resource servers or model inference (see individual server documentation)
 - **CPU**: Any modern x86_64 or ARM64 processor (e.g., Intel, AMD, Apple Silicon)
 - **RAM**: Minimum 8 GB (16 GB+ recommended for larger environments)
@@ -145,7 +145,7 @@ If you use NeMo Gym in your research, please cite it using the following BibTeX
 
 ```bibtex
 @misc{nemo-gym,
-  title = {NeMo Gym: An Open Source Framework for Scaling Reinforcement Learning Environments for LLM},
+  title = {NeMo Gym: An Open Source Library for Scaling Reinforcement Learning Environments for LLM},
   howpublished = {\url{https://github.com/NVIDIA-NeMo/Gym}},
   author={NVIDIA},
   year = {2025},
diff --git a/docs/get-started/setup-installation.md b/docs/get-started/setup-installation.md
index 41a554935..1bffefa0b 100644
--- a/docs/get-started/setup-installation.md
+++ b/docs/get-started/setup-installation.md
@@ -23,7 +23,7 @@
 NeMo Gym is designed to run on standard development machines without specialized hardware:
 
-- **GPU**: Not required for NeMo Gym framework operation
+- **GPU**: Not required for NeMo Gym library operation
   - GPU may be needed for specific resource servers or model inference (see individual server documentation). E.g.
   if you are intending to train your model with NeMo-RL, GPU resources are required (see training documentation)
 - **CPU**: Any modern x86_64 or ARM64 processor (e.g., Intel, AMD, Apple Silicon)
 - **RAM**: Minimum 8 GB (16 GB+ recommended for larger environments and datasets)
@@ -327,7 +327,7 @@ Your directory should look like this:
 Gym/
 ├── env.yaml                  # Your API credentials (git-ignored)
 ├── .venv/                    # Virtual environment (git-ignored)
-├── nemo_gym/                 # Core framework code
+├── nemo_gym/                 # Core library code
 ├── resources_servers/        # Tools and environments
 ├── responses_api_models/     # Model integrations
 ├── responses_api_agents/     # Agent implementations
diff --git a/docs/index.md b/docs/index.md
index be45b40e8..290cbc8b8 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -2,7 +2,7 @@
 
 # NeMo Gym Documentation
 
-[NeMo Gym](https://github.com/NVIDIA-NeMo/Gym) is a framework for building reinforcement learning (RL) training environments for large language models (LLMs). It provides infrastructure to develop environments, scale rollout collection, and integrate seamlessly with your preferred training framework.
+[NeMo Gym](https://github.com/NVIDIA-NeMo/Gym) is a library for building reinforcement learning (RL) training environments for large language models (LLMs). It provides infrastructure to develop environments, scale rollout collection, and integrate seamlessly with your preferred training framework.
 
 A training environment consists of three server components: **Agents** orchestrate the rollout lifecycle—calling models, executing tool calls via resources, and coordinating verification. **Models** provide stateless text generation using LLM inference endpoints. **Resources** define tasks, tool implementations, and verification logic.