
Scheduler should be location aware #279

Closed
jbw976 opened this issue Jan 15, 2019 · 3 comments
Labels: enhancement New feature or request

@jbw976
Member

jbw976 commented Jan 15, 2019

The workload scheduler will need to optimize across many different attributes in the long term, but a good first step would be location: cloud provider, cluster, region, etc. For example, when this initial implementation is completed, a workload should be able to be scheduled in the same "location" as its resources. More design to be fleshed out in #278.

@ichekrygin
Member

It appears this task needs to be revisited in more detail.

I stumbled across multiple problems while working on this issue.

Currently a workload defines resources by "Name", and the only property it uses is a "Secret Name" value, which, if not provided, is derived from the resource name; i.e., the workload does not retrieve the actual resource or its properties. Hence, there is really nothing to go by to create or maintain affinity between the KubernetesCluster and the resources.
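The reference behavior described above can be sketched as follows. This is a hypothetical illustration, not the actual Crossplane schema; the `ResourceReference` type and `effective_secret_name` helper are invented names for the name-only reference with the secret-name fallback.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResourceReference:
    """Hypothetical sketch: a workload references a resource only by name."""
    name: str
    secret_name: Optional[str] = None  # optional override

    def effective_secret_name(self) -> str:
        # If no secret name is provided, fall back to the resource name.
        return self.secret_name or self.name

# The workload never sees the resource's provider, region, or other
# properties -- only the name and the derived secret name.
print(ResourceReference(name="my-database").effective_secret_name())  # my-database
```

Because nothing else about the resource is visible here, there is no attribute the scheduler could use to co-locate the workload with its resources.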

It is also by design that we do not expose ResourceClaim details such as Provider or Region to the application developer's scope (separation of concerns). Hence, the application developer creating a workload is not (and should not be) aware of concrete resource details such as cloud provider, region, etc. Thus, having the application developer use a cluster selector or affinity will be problematic.

I think we should revisit this issue when we work on a more advanced (topology-aware) workload scheduler.

@jbw976 jbw976 removed this from the v0.2 milestone Apr 2, 2019
@negz negz added the enhancement New feature or request label Jun 3, 2019
@displague
Member

These older issues that use the term "workload" may refer to the Workload implementation that preceded the KubernetesApplication types. If the term "workload" is taken more generally, we can find more areas that fit the need.

This form of scheduling should extend beyond the KubernetesApplication workload type, into Stacks.

One potential outcome could be that a scaling group (perhaps using the scale subresource, or some new geoscaling subresource) of a Stack or KubernetesApplication resource informs each instance in that group, so that the scaling values (or ordinals) can affect that instance's variables.

For example, if a resource has a scale of five, that could drive a KubernetesApplication to modulus through a list of five potential region values. For a Stack, a similar reflection of scale could influence the region where the Stack's managed resources are provisioned.
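The "modulus through a list of regions" idea above can be sketched as follows. This is a hypothetical illustration of the mapping, not an existing Crossplane API; `region_for_replica` and the region list are assumptions.

```python
from typing import List

# Hypothetical: five candidate regions for a scaling group of five.
REGIONS: List[str] = [
    "us-east-1", "us-west-2", "eu-west-1", "ap-southeast-1", "sa-east-1",
]

def region_for_replica(index: int, regions: List[str]) -> str:
    """Map a replica's ordinal onto a region via modulus, wrapping around."""
    return regions[index % len(regions)]

# Replica 0 and replica 5 land in the same region; the index of the
# current unit in the scale keys the template value, as suggested below.
for i in range(6):
    print(i, region_for_replica(i, REGIONS))
```

A template-value mapping keyed by the replica index, as asked about below, would reduce to the same lookup: the ordinal selects a value from a per-resource list.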

Should this be handled through some form of template value mapping, keyed by the index of the current unit in the scale?

Should a form of Region or Scaling Policy determine how a scaling factor is applied to a particular resource or all matching resources? A Region Policy may fit well with an abstract Region type (#340).

@negz
Member

negz commented Jun 29, 2020

We're deprecating KubernetesApplication (née Workload), so I don't believe this issue will be relevant going forward.

@negz negz closed this as completed Jun 29, 2020
luebken pushed a commit to luebken/crossplane that referenced this issue Aug 3, 2021
aws_session_token key should be used when credentials are parsed
4 participants