plugin,etc: Rewrite to get state from Pod annotations #1163

Open

sharnoff wants to merge 12 commits into main from sharnoff/stateless-scheduler
Conversation

sharnoff (Member) commented Dec 3, 2024

Note

Hey! This is a big PR — sorry! There are a lot of smaller commits (see below), plus one GIANT commit that does the bulk of the rewrite, building on those smaller commits.

There's an overview of the changes in the commit message for the full rewrite. That might be a useful place to start to get your bearings on the changes.


Commits broken down by theme:

Background work: neonvm
  1. neonvm-controller: Use non-controller owner ref for migration source (786bab3)

    In short, there's currently no reliable way to tell whether a Pod is the source or target in a live migration without checking against the migration object itself.

    This is because the migration object initially has a unique "controller" reference on the target pod (with the source pod having a non-controlling owner reference), and once the migration is complete, the source pod is changed to hold the unique controller reference instead.

    So this change is just to keep using a non-controller reference on the source pod after the migration completes.

    That means that we will have the following guarantees:

    • If a Pod has a "controller" owner reference for a live migration, it's the target pod in an ongoing migration.
    • If a Pod has a non-controller owner reference for a live migration, it's the source pod in a maybe-ongoing migration.
    • If the Pod has a "controller" owner reference for a virtual machine, it's the current source of the VM. So if a migration source pod doesn't have an owner reference for the VM, then the migration has completed and there is no longer a use for the old source pod.
  2. neonvm: Add helpers to get pod ownership (b4f1fca)

    We had some existing, similar helpers in pkg/util, but I figured it makes more sense for those to be defined in neonvm/apis/....

    This change originated while working on the stateless scheduler prototype, where I found myself wanting a reliable way to determine what role a pod has in a live migration from only the pod's metadata. (A rough, hypothetical sketch of this kind of metadata-only check is included after this list.)

  3. neonvm-controller: Update runner pod metadata while pending (396b281)

    We weren't doing this previously, which means that propagation of labels/annotations can be arbitrarily delayed while the VM starts.
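
To make the guarantees from the first commit concrete (and to show the kind of metadata-only check the ownership helpers enable), here is a minimal sketch. The names (`MigrationRole`, `migrationRole`, `ownedByVirtualMachine`) and the kind strings are illustrative assumptions, not necessarily the helpers added in b4f1fca:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MigrationRole is the part a runner Pod plays in a live migration, judged
// only from its owner references.
type MigrationRole string

const (
	MigrationRoleNone   MigrationRole = ""       // not owned by a migration
	MigrationRoleSource MigrationRole = "source" // non-controller owner ref to the migration
	MigrationRoleTarget MigrationRole = "target" // controller owner ref to the migration
)

// migrationRole applies the guarantees above: a controller reference to a
// VirtualMachineMigration means the Pod is the target of an ongoing migration;
// a non-controller reference means it is the (maybe former) source.
// A real helper would also check ref.APIVersion, not just the kind string.
func migrationRole(pod *corev1.Pod) MigrationRole {
	for _, ref := range pod.OwnerReferences {
		if ref.Kind != "VirtualMachineMigration" { // assumed kind string
			continue
		}
		if ref.Controller != nil && *ref.Controller {
			return MigrationRoleTarget
		}
		return MigrationRoleSource
	}
	return MigrationRoleNone
}

// ownedByVirtualMachine reports whether the Pod is the VM's current runner,
// i.e. whether it has a controller owner reference to a VirtualMachine.
func ownedByVirtualMachine(pod *corev1.Pod) (metav1.OwnerReference, bool) {
	for _, ref := range pod.OwnerReferences {
		if ref.Kind == "VirtualMachine" && ref.Controller != nil && *ref.Controller {
			return ref, true
		}
	}
	return metav1.OwnerReference{}, false
}
```

The point is that, with the non-controller reference preserved on the source pod, this classification never has to fetch the migration object itself.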

Related changes in util/watch
  1. util/watch: Store HandlerFuncs[*T] in Store[T] (9f6c320)

    Also, change handleEvent[T any, P ~*T]() from a function to a method on *Store[T], now that we have the handlers translated from P -> *T.

    This opens up a lot of ways to make the code cleaner, and the handlers part in particular is required to implement (*Store[T]).NopUpdate() in a later commit.

  2. util/watch: Add (*Store[T]).NopUpdate() method (18930b9)

    There's some info about this in the added comment.

    tl;dr: We need it for the stateless scheduler work to be able to re-inject items into the reconcile queue while maintaining the watch as the source of truth for what each object is.

  3. util/watch: Override GVK on incoming objects (2ee8c43)

    The K8s API server and client-go together have the behavior that objects returned from List() calls do not have TypeMeta set.

    For one-off List() requests this is fine because you already know the type! But this interacts poorly with the generated implementations of objects' .GetObjectKind().GroupVersionKind(), as those just directly read from the TypeMeta fields (which, again, are not set).

    So this commit works around this behavior by getting the GVK at the start of the Watch() call and explicitly setting it on all incoming objects. (A small sketch of this stamping is included after this list.)

  4. util/watch: Add (*Store[T]).Listen() method (0902770)

    The Listen() method returns a util.BroadcastReceiver that will be updated whenever the object is modified or deleted.

    This is required for now for the stateless scheduler work, so that we can be separately notified when there are changes to an object, without hooking deeper into the internal state. (The sketch after this list illustrates the broadcast/no-op-update idea.)

    We can probably remove this once the scheduler plugin's agent request handler server is removed, at the end of the stateless scheduler work.
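
To illustrate how NopUpdate() and Listen() fit together (this is just the shape of the idea, not the actual pkg/util/watch code): every event for an object, including an artificially injected no-op update, wakes anyone currently waiting on that object's receiver.

```go
package example

import "sync"

// listener is a toy stand-in for util.BroadcastReceiver: notify() is what the
// watch store would call on every event for the object (real updates, deletes,
// and the no-op updates injected by NopUpdate()), and awaitChange() is how a
// caller blocks until the next such event.
type listener struct {
	mu   sync.Mutex
	wake chan struct{}
}

func newListener() *listener {
	return &listener{wake: make(chan struct{})}
}

func (l *listener) notify() {
	l.mu.Lock()
	defer l.mu.Unlock()
	close(l.wake)                // wake all current waiters...
	l.wake = make(chan struct{}) // ...and arm a fresh channel for the next wait
}

func (l *listener) awaitChange() {
	l.mu.Lock()
	ch := l.wake
	l.mu.Unlock()
	<-ch // returns once notify() has been called after we grabbed ch
}
```

This is what lets the agent request handler (and, during startup, the deferred-approval requeueing) treat the watch store as the single source of truth while still getting woken up at the right moments.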
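
And for the GVK override: a small sketch of the stamping described above, assuming only the standard apimachinery Scheme APIs (the real code resolves the GVK once at the start of Watch()):

```go
package example

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
)

// stampGVK resolves the GroupVersionKind for a type once via the scheme and
// sets it on every object, compensating for the API server leaving TypeMeta
// empty on items returned from List().
func stampGVK(scheme *runtime.Scheme, objs []runtime.Object) error {
	if len(objs) == 0 {
		return nil
	}
	gvks, _, err := scheme.ObjectKinds(objs[0])
	if err != nil {
		return err
	}
	if len(gvks) == 0 {
		return fmt.Errorf("no GVK registered for %T", objs[0])
	}
	for _, obj := range objs {
		obj.GetObjectKind().SetGroupVersionKind(gvks[0])
	}
	return nil
}
```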

The one big commit doing the rewrite
  1. plugin: Rewrite to get state from Pod annotations (34f1106)

    a.k.a. "Stateless Scheduler".

    This is effectively a full rewrite of the scheduler plugin. At a high level, the existing external interfaces are preserved:

    • The scheduler plugin still exposes an HTTP server for the autoscaler-agent (for now); and
    • The scheduler plugin is still a plugin.

    However, instead of storing the state for approved resources in-memory, in the scheduler plugin, we now treat annotations on the Pod as the source of truth for requested/approved resources.

    A brief overview of the internal changes to make this happen:

    1. The state of resource reservations can be constructed entirely from Node and Pod objects. We do store that, and update as objects change, but it's only for convenience and not a strict requirement.

      One tricky piece is with scheduling. For that, we store a set of pods that have gone through the plugin methods but haven't actually had the spec.nodeName field set.

      For more info, the pkg/plugin/state package contains all the pure logic for manipulating resources.

    2. Each watch event on Node/Pod objects is now placed into a "reconcile" queue similar to the controller framework. Reconcile operations are a tuple of (object, event type, desired reconcile time) and are retried with backoff on error/panic.

      For a detailed look, the 'pkg/plugin/reconcile' package defines the reconcile queue and all its related machinery. (A rough sketch of the reconcile-operation shape follows this list.)

    3. The handler for autoscaler-agent requests no longer accesses the internal state and instead directly patches the VirtualMachine object to set the annotation for requested resources, and then waits for that object to be updated.

      Once the autoscaler-agent is converted to read and write those annotations directly, we will remove the HTTP server. (A hypothetical sketch of this patch-then-wait flow also follows this list.)

    4. pkg/util/watch was changed to allow asking to be notified when there are changes to an object, via the new (*Store[T]).Listen() API.

      This was required to implement (3), and can be removed once (3) is no longer needed, if it doesn't become used in the autoscaler-agent.

    5. pkg/util/watch was changed to allow triggering no-op update events, which - for our usage - will trigger requeuing the object. This solves two problems:

      1. During initial startup, we need to defer resource approval until all Pods on the Node have been processed -- otherwise, we may end up unintentionally overcommitting resources based on partial information.

        So during startup, we track the set of Pods with deferred approvals, and then requeue them all once startup is over by triggering no-op update events in the watch store.

      2. Whenever we handle changes for some Pod, it's worthwhile to handle certain operations on the Node -- e.g., triggering live migration if the reserved resources are too high.

        While we could do this as part of the Pod reconciling, we get fairer behavior (and better balancing under load) by instead triggering re-reconciling of the Pod's Node.

      Why can't this be done elsewhere? In short, consistency. Fundamentally we need to use a consistent view of the object that we're reconciling (else, it might not be no-op), and the source of truth for the current value of an object within the scheduler plugin is the watch store.
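
To make item 2 above a bit more concrete, here is a rough sketch of the shape of a reconcile operation and its retry behavior. All names here are illustrative assumptions; the real definitions live in 'pkg/plugin/reconcile':

```go
package example

import (
	"fmt"
	"time"
)

// EventKind mirrors the watch event that produced a reconcile operation.
type EventKind string

const (
	EventAdd    EventKind = "Add"
	EventUpdate EventKind = "Update"
	EventDelete EventKind = "Delete"
)

// reconcileOp is the (object, event type, desired reconcile time) tuple from
// the description above.
type reconcileOp struct {
	key         string    // e.g. "Pod default/vm-runner-abc" or "Node node-1"
	event       EventKind
	reconcileAt time.Time // earliest time at which this reconcile should run
	attempts    int       // failed attempts so far, used for backoff
}

// retryDelay doubles on every failed attempt, capped at a maximum: a simplified
// version of "retried with backoff on error/panic".
func retryDelay(attempts int) time.Duration {
	const base = 100 * time.Millisecond
	const maxDelay = 30 * time.Second
	if attempts <= 0 {
		return base
	}
	if attempts > 8 { // 100ms << 8 is already ~25s; cap everything beyond that
		return maxDelay
	}
	d := base << attempts
	if d > maxDelay {
		return maxDelay
	}
	return d
}

// runOnce executes a single reconcile, converting panics into errors so that
// the queue can reschedule the operation instead of crashing the plugin.
func runOnce(op reconcileOp, reconcile func(reconcileOp) error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("reconcile of %s panicked: %v", op.key, r)
		}
	}()
	return reconcile(op)
}
```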
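
And for item 3, a hypothetical sketch of the "patch the annotation, then wait" flow, using the dynamic client purely for brevity. The annotation key, the GroupVersionResource plumbing, and the function name are assumptions; the real handler patches the VirtualMachine through whatever client the plugin already holds, and then blocks on the watch store's Listen() receiver until the approved-resources annotation changes:

```go
package example

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// requestedAnnotation is an assumed key; the real name lives in the neonvm API
// package alongside the other autoscaling annotations.
const requestedAnnotation = "autoscaling.neon.tech/requested-resources"

// requestResources merge-patches the requested-resources annotation onto the
// VirtualMachine object; approval shows up later as a separate annotation that
// the caller waits for via the watch store.
func requestResources(
	ctx context.Context,
	c dynamic.Interface,
	vmGVR schema.GroupVersionResource,
	namespace, name, requestedJSON string,
) error {
	patch, err := json.Marshal(map[string]any{
		"metadata": map[string]any{
			"annotations": map[string]string{
				requestedAnnotation: requestedJSON,
			},
		},
	})
	if err != nil {
		return err
	}
	_, err = c.Resource(vmGVR).Namespace(namespace).Patch(
		ctx, name, types.MergePatchType, patch, metav1.PatchOptions{},
	)
	return err
}
```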


Remaining TODOs:

  • Flesh out the reconcile metrics (3693980)
  • Add metrics for k8s CRUD operations (mainly: rate of VirtualMachine update operations) (3693980)
  • Test node metrics (40b43c1)
  • Test that deferred reconciles on startup work correctly
  • Test that setting VM CPU/memory use fields and enabling autoscaling at the same time works correctly.
  • Load testing on staging

Open questions:

  • Which commits should be squashed, or extracted into separate PRs?
  • We have a single global state mutex (same as before, but different usage pattern). How big a cluster can we support before that falls over? Do we need to have individual state locks per node?


github-actions bot commented Dec 3, 2024

No changes to the coverage.

HTML Report


sharnoff added a commit that referenced this pull request Dec 4, 2024
Probably an inadvertent merge conflict between #1090 and #989 meant that we
accidentally stopped using go-chef for neonvm-daemon.

Noticed this while working on #1163 locally and saw that it was
re-downloading all of the dependencies for neonvm-daemon every time,
even though I was making changes in the scheduler and the dependencies
hadn't changed.
sharnoff added a commit that referenced this pull request Dec 9, 2024
It wasn't correct; the separator is '/', not ':'.

(I think once upon a time, we used to format it with ':', but that's no
longer the case).

Noticed this as part of #1163.
sharnoff force-pushed the sharnoff/stateless-scheduler branch from cf468d8 to 34f1106 on December 10, 2024, 20:43