[Core][Autoscaler] Refactor v2 Log Formatting #49350

Merged 16 commits into ray-project:master on Mar 6, 2025

Conversation

@ryanaoleary (Contributor) commented Dec 19, 2024

Why are these changes needed?

Currently the V2 Autoscaler formats logs by converting the V2 data structure `ClusterStatus` to the V1 structures `AutoscalerSummary` and `LoadMetricsSummary` and then passing them to the legacy `format_info_string`. It'd be useful for the V2 autoscaler to directly format `ClusterStatus` to the correct output log format. This PR refactors `utils.py` to directly format `ClusterStatus`. Additionally, this PR changes the node reports to output `instance_id` rather than `ip_address`, since the latter is not necessarily unique for failed nodes.
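To illustrate the direction of the refactor, below is a minimal, self-contained sketch of building a node report directly from a V2-style status object rather than going through the V1 summaries. The dataclasses and field names are illustrative stand-ins, not the real `ClusterStatus` schema or the code in this PR.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PendingNode:
    instance_id: str
    ray_node_type_name: str
    details: str

@dataclass
class ClusterStatus:
    pending_nodes: List[PendingNode] = field(default_factory=list)

def format_cluster_status(status: ClusterStatus) -> str:
    # Build the report straight from the V2 structure instead of first
    # converting it to the V1 AutoscalerSummary/LoadMetricsSummary.
    lines = ["Pending:"]
    if not status.pending_nodes:
        lines.append(" (no pending nodes)")
    for node in status.pending_nodes:
        # Report instance_id instead of ip_address, since an IP address is
        # not necessarily unique for failed nodes.
        lines.append(f" {node.instance_id}: {node.ray_node_type_name}, {node.details}")
    return "\n".join(lines)

print(format_cluster_status(ClusterStatus(
    pending_nodes=[PendingNode("instance4", "worker_node", "starting ray")])))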

Related issue number

Closes #37856

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

@ryanaoleary changed the title from "Initial commit to refactor v2 format code" to "[Core][Autoscaler] Refactor v2 format code" Dec 19, 2024
@ryanaoleary changed the title from "[Core][Autoscaler] Refactor v2 format code" to "[Core][Autoscaler] Refactor v2 Log Formatting" Dec 19, 2024
@ryanaoleary force-pushed the refactor-v2-logs branch 2 times, most recently from 7589e14 to 30fe22c, January 3, 2025 02:37
@ryanaoleary marked this pull request as ready for review February 13, 2025 00:44
@ryanaoleary requested review from hongchaodeng and a team as code owners February 13, 2025 00:44
@ryanaoleary (Contributor, Author)

cc: @rickyyx. I think I'm going to add some different unit test cases, but this should be good to review.

Signed-off-by: Ryan O'Leary <[email protected]>
@ryanaoleary (Contributor, Author)

@kevin85421 This PR would be helpful to add since it moves the V2 autoscaler away from referencing the legacy autoscaler. This refactor will also make it easier to make other improvements to V2 log formatting, since we won't have to convert to the V1 format first.

@kevin85421 (Member) left a comment


reviewing

@@ -555,10 +555,10 @@ def test_cluster_status_formatter():
 Pending:
  worker_node, 1 launching
  worker_node_gpu, 1 launching
- 127.0.0.3: worker_node, starting ray
+ instance4: worker_node, starting ray
@kevin85421 (Member)

I remember that the instance ID for KubeRay is the Pod name. Could you check whether the ray status result also shows the Pod name so that we can map K8s Pods to Ray instances?

@ryanaoleary (Contributor, Author)

I don't think `ray status` currently shows the Pod name; this is from my manual testing:

(base) ray@raycluster-autoscaler-head-p77pc:~$ ray status --verbose
======== Autoscaler status: 2025-02-25 22:35:58.558544 ========
GCS request time: 0.001412s

Node status
---------------------------------------------------------------
Active:
 (no active nodes)
Idle:
 1 headgroup
Pending:
 : small-group, 
Recent failures:
 (no failures)

Resources
---------------------------------------------------------------
Total Usage:
 0B/1.86GiB memory
 0B/495.58MiB object_store_memory

Total Demands:
 {'CPU': 1.0, 'TPU': 4.0}: 1+ pending tasks/actors

Node: 65d0a32bfeee84475a235b3c290824ec3ac0b1ab5148d96fc674ce93
 Idle: 82253 ms
 Usage:
  0B/1.86GiB memory
  0B/495.58MiB object_store_memory

@kevin85421 (Member)

Will the key in the key-value pairs in Pending be the Pod name? That's my expectation.

In addition, have you manually tested this PR? The test below shows that either "head node" or "worker node" is appended to the end of the Node: ... line. For example,

Node: fffffffffffffffffffffffffffffffffffffffffffffffffff00001 (head_node)

However, the above output from your manual testing is:

Node: 65d0a32bfeee84475a235b3c290824ec3ac0b1ab5148d96fc674ce93

@ryanaoleary (Contributor, Author) Mar 4, 2025

I misunderstood your initial comment; I thought you were asking whether `ray status` currently shows the Pod name in KubeRay. The above snippet was using the Ray 2.41 image. I've been running into issues lately building an image to test the new changes with the following Dockerfile:

# Use the latest Ray master as base.
FROM rayproject/ray:nightly-py310
# Invalidate the cache so that fresh code is pulled in the next step.
ARG BUILD_DATE
# Retrieve your development code.
ADD . ray
# Install symlinks to your modified Python code.
RUN python ray/python/ray/setup-dev.py -y

With this image, the RayCluster Pods immediately crash and terminate after pulling it. Describing the RayCluster just shows:

Normal   DeletedHeadPod         5m21s (x8 over 5m22s)  raycluster-controller  Deleted head Pod default/raycluster-autoscaler-head-ll2vs; Pod status: Running; Pod restart policy: Never; Ray container terminated status: &ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2025-03-04 11:52:38 +0000 UTC,FinishedAt:2025-03-04 11:52:38 +0000 UTC,ContainerID:containerd://81a7332de2046c934ba6725cbb72eb3b228ee8aa66bc26f2db5f6741607ae82f,}
  Normal   DeletedHeadPod         26s (x159 over 5m9s)   raycluster-controller  (combined from similar events): Deleted head Pod default/raycluster-autoscaler-head-5g2c4; Pod status: Running; Pod restart policy: Never; Ray container terminated status: &ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2025-03-04 11:57:33 +0000 UTC,FinishedAt:2025-03-04 11:57:34 +0000 UTC,ContainerID:containerd://f423d2877a176beaecb88e6d1d8e61456233b1359c9e8b94e333ea4560e86b1c,}

The head Pod keeps immediately crashing and re-creating, so I can't get any more useful logs from the container. I tried building an image using the latest changes from master (i.e. without any of my Python changes) and it still had the same issue. Is this a problem you've seen before? As soon as I have a working image, I can run a manual test to check for the Pod name in the key-value pairs under Pending.

@ryanaoleary (Contributor, Author) Mar 4, 2025

I was just able to manually test it with my changes; here is the output of `ray status --verbose` with a pending node:

======== Autoscaler status: 2025-03-04 12:11:15.078526 ========
GCS request time: 0.001526s

Node status
---------------------------------------------------------------
Active:
 (no active nodes)
Idle:
 1 headgroup
Pending:
 a4dfeafc-8a5e-47ff-9721-cdd559c00dfc: small-group, 
Recent failures:
 (no failures)

Resources
---------------------------------------------------------------
Total Usage:
 0B/1.86GiB memory
 0B/511.68MiB object_store_memory

Total Demands:
 {'CPU': 1.0}: 1+ pending tasks/actors

Node: dcb068352f72b5244cfdefaa70055d5cd51b5cd29778295b41cd0775 (headgroup)
 Idle: 10641 ms
 Usage:
  0B/1.86GiB memory
  0B/511.68MiB object_store_memory
  
(base) ray@raycluster-autoscaler-head-hr8pd:~$ ray status --verbose
======== Autoscaler status: 2025-03-04 12:11:36.180572 ========
GCS request time: 0.001749s

Node status
---------------------------------------------------------------
Active:
 1 small-group
Idle:
 1 headgroup
Pending:
 (no pending nodes)
Recent failures:
 (no failures)

Resources
---------------------------------------------------------------
Total Usage:
 1.0/1.0 CPU
 0B/2.79GiB memory
 0B/781.36MiB object_store_memory

Total Demands:
 (no resource demands)

Node: 1c0ff9b00d40e332469adb4fdfacd9d0f21599bac65e50666e808a4d (small-group)
 Usage:
  1.0/1.0 CPU
  0B/953.67MiB memory
  0B/269.68MiB object_store_memory
 Activity:
  Resource: CPU currently in use.
  Busy workers on node.

Node: dcb068352f72b5244cfdefaa70055d5cd51b5cd29778295b41cd0775 (headgroup)
 Idle: 31744 ms
 Usage:
  0B/1.86GiB memory
  0B/511.68MiB object_store_memory
 Activity:
  (no activity)

It looks like `instance_id` isn't set to the Pod name, but to some other generated unique ID. Looking at the autoscaler logs, if we wanted to output the Pod name here we should use `cloud_instance_id`:

2025-03-04 12:11:34,960 - INFO - Update instance ALLOCATED->RAY_RUNNING (id=a4dfeafc-8a5e-47ff-9721-cdd559c00dfc, type=small-group, cloud_instance_id=raycluster-autoscaler-small-group-worker-qbd8r, ray_id=): ray node 1c0ff9b00d40e332469adb4fdfacd9d0f21599bac65e50666e808a4d is RUNNING
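
For illustration only, a tiny hypothetical helper showing that preference; the parameter names mirror the fields in the log line above and are assumptions, not the PR's actual API:

def display_id(instance_id: str, cloud_instance_id: str = "") -> str:
    # Prefer the cloud instance ID (the Pod name on KubeRay) when available,
    # and fall back to the autoscaler-generated instance ID otherwise.
    return cloud_instance_id or instance_id

print(display_id("a4dfeafc-8a5e-47ff-9721-cdd559c00dfc",
                 "raycluster-autoscaler-small-group-worker-qbd8r"))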

@ryanaoleary (Contributor, Author)

@kevin85421 I went through and refactored the code to make it easier to review. I also made sure the PR consistently uses string concatenation with `.join()` to format the multi-line strings. The PR is passing the tests and should be ready for another review.
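
As a small illustration of that style (not the PR's exact code), a report section can be collected as a list of lines and joined once:

# Illustrative only: build a multi-line section with str.join() instead of
# repeated "+=" string concatenation.
usage_lines = [
    "Total Usage:",
    " 0B/1.86GiB memory",
    " 0B/511.68MiB object_store_memory",
]
report = "\n".join(usage_lines)
print(report)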


@ryanaoleary requested a review from kevin85421 March 4, 2025 20:18
@ryanaoleary (Contributor, Author)

RayCluster manifest with Autoscaler v2 used for manual testing:

apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: raycluster-autoscaler
spec:
  # The version of Ray you are using. Make sure all Ray containers are running this version of Ray.
  # Use the Ray nightly or Ray version >= 2.10.0 and KubeRay 1.1.0 or later for autoscaler v2.
  enableInTreeAutoscaling: true
  autoscalerOptions:
    upscalingMode: Default
    idleTimeoutSeconds: 60
    imagePullPolicy: IfNotPresent
    # Optionally specify the Autoscaler container's securityContext.
    securityContext: {}
    env: []
    envFrom: []
    resources:
      limits:
        cpu: "500m"
        memory: "512Mi"
      requests:
        cpu: "500m"
        memory: "512Mi"
  # Ray head pod template
  headGroupSpec:
    rayStartParams:
      # Setting "num-cpus: 0" to avoid any Ray actors or tasks being scheduled on the Ray head Pod.
      num-cpus: "0"
    # Pod template
    template:
      spec:
        containers:
        # The Ray head container
        - name: ray-head
          image: us-central1-docker.pkg.dev/ryanaoleary-gke-dev/ryanaoleary-ray/ray-logs:latest
          ports:
          - containerPort: 6379
            name: gcs
          - containerPort: 8265
            name: dashboard
          - containerPort: 10001
            name: client
          resources:
            limits:
              cpu: "1"
              memory: "2G"
            requests:
              cpu: "1"
              memory: "2G"
          env:
            - name: RAY_enable_autoscaler_v2 # Pass env var for the autoscaler v2.
              value: "1"
          volumeMounts:
            - mountPath: /home/ray/samples
              name: ray-example-configmap
        volumes:
          - name: ray-example-configmap
            configMap:
              name: ray-example
              defaultMode: 0777
              items:
                - key: detached_actor.py
                  path: detached_actor.py
                - key: terminate_detached_actor.py
                  path: terminate_detached_actor.py
        restartPolicy: OnFailure # Restart the container on failure instead of reusing the Pod for a different Ray node.
  workerGroupSpecs:
  # the Pod replicas in this group typed worker
  - replicas: 0
    minReplicas: 0
    maxReplicas: 10
    groupName: small-group
    rayStartParams: {}
    # Pod template
    template:
      spec:
        containers:
        - name: ray-worker
          image: us-central1-docker.pkg.dev/ryanaoleary-gke-dev/ryanaoleary-ray/ray-logs:latest
          resources:
            limits:
              cpu: "1"
              memory: "1G"
            requests:
              cpu: "1"
              memory: "1G"
        restartPolicy: OnFailure # Restart the container on failure instead of reusing the Pod for a different Ray node.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ray-example
data:
  detached_actor.py: |
    import ray
    import sys

    @ray.remote(num_cpus=1)
    class Actor:
      pass

    ray.init(namespace="default_namespace")
    Actor.options(name=sys.argv[1], lifetime="detached").remote()

  terminate_detached_actor.py: |
    import ray
    import sys

    ray.init(namespace="default_namespace")
    detached_actor = ray.get_actor(sys.argv[1])
    ray.kill(detached_actor)

@kevin85421 added the "go (add ONLY when ready to merge, run all tests)" label Mar 4, 2025
@kevin85421 (Member)

@ryanaoleary would you mind fixing the CI errors?

@ryanaoleary (Contributor, Author)

The test it failed in CI (`test_ray_status`) passes for me locally; I'm rebasing on master again to see if that fixes the issue:
test_ray_status_output.txt

@ryanaoleary (Contributor, Author)

It's now passing CI after 2084439. cc: @kevin85421

@ryanaoleary requested a review from kevin85421 March 6, 2025 09:56
@edoakes merged commit 4a6fbd9 into ray-project:master Mar 6, 2025
5 checks passed
elimelt pushed a commit to elimelt/ray that referenced this pull request Mar 9, 2025
park12sj pushed a commit to park12sj/ray that referenced this pull request Mar 18, 2025
jaychia pushed a commit to jaychia/ray that referenced this pull request Mar 19, 2025
jaychia pushed a commit to jaychia/ray that referenced this pull request Mar 19, 2025
Drice1999 pushed a commit to Drice1999/ray that referenced this pull request Mar 23, 2025
dhakshin32 pushed a commit to dhakshin32/ray that referenced this pull request Mar 27, 2025
Labels: go (add ONLY when ready to merge, run all tests)

Successfully merging this pull request may close these issues:
[autoscaler] Refactor ray status output code

3 participants