
Understanding whether Karmada is the appropriate tool for unifying clusters #5570

Open
KhalilSantana opened this issue Sep 19, 2024 · 4 comments
Labels
kind/question Indicates an issue that is a support question.

Comments

@KhalilSantana

Please provide an in-depth description of the question you have:

Hello, I'm trying to understand if Karmada is the right tool for my situation, described below. In short, I want to be able to unify multiple Kubernetes clusters and operate them as if they were one, using the official Kubernetes API client python library if possible.

Situation: I have two clusters (Alpha and Beta), both running RKE2, Cilium and Multus, and an application that submits Pods to the Kubernetes API server via the official K8s Python library. Currently this application can only manage a single Kubernetes cluster, and re-tooling it to natively support multiple clusters may prove too costly to be viable within my required timeframe. Pod-to-pod networking is mostly done over the Multus interfaces, in such a way that a typical pod doesn't need to address another pod over the default Kubernetes network.

Would Karmada be able to unify these clusters in such a way that my application can:

  • Schedule pods to a specific node of a member cluster (for example, using a nodeSelector with the node's hostname).
  • List pods running in either cluster
  • List and use NetworkAttachmentDefinitions from Multus of each cluster
  • Watch pods for their status (Ready/NotReady)
  • Delete Pods from each cluster
  • Copy data in/out of each pod via a kubectl cp equivalent
  • Other basic pod CRUD
  • Run on AArch64 nodes
  • Use either cluster's PVCs (optional)

Looking at the documentation, I believe Karmada may be able to accomplish at least some of these goals; however, I'm unclear whether that's indeed the case, or how to go about it.

For example, to get a unified view of both clusters, I believe I'd need to set up a Proxy for Global Resources. Is my understanding correct? All of these resources live in a namespace, and this namespace is configured identically (same name) across both clusters.
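
If so, I assume the ResourceRegistry would look roughly like this (the cluster names below are just placeholders for Alpha and Beta as registered with Karmada; I haven't validated this yet):

apiVersion: search.karmada.io/v1alpha1
kind: ResourceRegistry
metadata:
  name: unified-view           # placeholder name
spec:
  targetCluster:
    clusterNames:
    - alpha                    # member cluster names as registered with Karmada
    - beta
  resourceSelectors:
  - apiVersion: v1
    kind: Pod
  - apiVersion: v1
    kind: Node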

Reading the FAQ, I see this disclaimer:

Is creating supported?
For resources not defined in a ResourceRegistry, create requests are redirected to the Karmada control plane, so the resources are created there. For resources defined in a ResourceRegistry, the proxy doesn't know which cluster to create them in, and responds with a MethodNotSupported error.

So, as I understand it, it is not possible to submit Pods for creation on a specific node of a specific cluster using this proxy mode. Is that correct?

Furthermore, what is the recommended or simplest architecture for my goal of unified clusters? Should Karmada run in its own cluster, separate from clusters Alpha and Beta, for a total of three clusters (two for workloads, one for Karmada/management)?

Is Karmada currently capable of dealing with clusters of mixed processor architectures, such as cluster Alpha being AArch64 and cluster Beta being a regular x86_64 environment?

Environment:

  • Karmada version: 1.11 (Karmadactl)
  • Kubernetes version: RKE2 1.27+
  • Others: Multus, Cilium 1.14
@KhalilSantana
Author

Trying to use the Proxy for Global Resources in a three-cluster scenario fails with a warning about missing CRDs; however, the same setup works when using hack/local-up-karmada.sh (kind). See below:

VM Cluster

  • 3 cluster VMs (VM1, VM2, VM3)
  • Each one is running RKE2 and ran karmadactl init
  • VM1 & VM3 were joined to VM2:
ksantana@v2-k8s-vm2:~$ export KUBECONFIG=/etc/karmada/karmada-apiserver.config
ksantana@v2-k8s-vm2:~$ sudo -E kubectl get clusters
NAME   VERSION          MODE   READY   AGE
vm1    v1.27.6+rke2r1   Push   True    115m
vm3    v1.27.6+rke2r1   Push   True    114m
ksantana@v2-k8s-vm2:~$ sudo -E kubectl get crds
NAME                                                         CREATED AT
clusteroverridepolicies.policy.karmada.io                    2024-09-13T01:05:55Z
clusterpropagationpolicies.policy.karmada.io                 2024-09-13T01:05:55Z
clusterresourcebindings.work.karmada.io                      2024-09-13T01:05:56Z
cronfederatedhpas.autoscaling.karmada.io                     2024-09-13T01:05:55Z
federatedhpas.autoscaling.karmada.io                         2024-09-13T01:05:55Z
federatedresourcequotas.policy.karmada.io                    2024-09-13T01:05:55Z
multiclusteringresses.networking.karmada.io                  2024-09-13T01:05:55Z
multiclusterservices.networking.karmada.io                   2024-09-13T01:05:55Z
overridepolicies.policy.karmada.io                           2024-09-13T01:05:55Z
propagationpolicies.policy.karmada.io                        2024-09-13T01:05:55Z
remedies.remedy.karmada.io                                   2024-09-13T01:05:55Z
resourcebindings.work.karmada.io                             2024-09-13T01:05:56Z
resourceinterpretercustomizations.config.karmada.io          2024-09-13T01:05:55Z
resourceinterpreterwebhookconfigurations.config.karmada.io   2024-09-13T01:05:55Z
serviceexports.multicluster.x-k8s.io                         2024-09-13T01:05:55Z
serviceimports.multicluster.x-k8s.io                         2024-09-13T01:05:55Z
workloadrebalancers.apps.karmada.io                          2024-09-13T01:05:54Z
works.work.karmada.io                                        2024-09-13T01:05:56Z
ksantana@v2-k8s-vm2:~$ sudo -E kubectl --context karmada-apiserver apply -f resourceregistry.yaml
error: resource mapping not found for name: "proxy-sample" namespace: "" from "resourceregistry.yaml": no matches for kind "ResourceRegistry" in version "search.karmada.io/v1alpha1"
ensure CRDs are installed first
ksantana@v2-k8s-vm2:~$

By contrast, the cluster generated by hack/local-up-karmada.sh seems to have the same CRDs, leaving me puzzled:

khalil:~ % export KUBECONFIG=~/.kube/karmada.config
khalil:~ % kubectl get crds -A
NAME                                                         CREATED AT
clusteroverridepolicies.policy.karmada.io                    2024-09-19T17:54:44Z
clusterpropagationpolicies.policy.karmada.io                 2024-09-19T17:54:44Z
clusterresourcebindings.work.karmada.io                      2024-09-19T17:54:44Z
cronfederatedhpas.autoscaling.karmada.io                     2024-09-19T17:54:44Z
federatedhpas.autoscaling.karmada.io                         2024-09-19T17:54:44Z
federatedresourcequotas.policy.karmada.io                    2024-09-19T17:54:44Z
multiclusteringresses.networking.karmada.io                  2024-09-19T17:54:44Z
multiclusterservices.networking.karmada.io                   2024-09-19T17:54:44Z
overridepolicies.policy.karmada.io                           2024-09-19T17:54:45Z
propagationpolicies.policy.karmada.io                        2024-09-19T17:54:45Z
remedies.remedy.karmada.io                                   2024-09-19T17:54:45Z
resourcebindings.work.karmada.io                             2024-09-19T17:54:45Z
resourceinterpretercustomizations.config.karmada.io          2024-09-19T17:54:45Z
resourceinterpreterwebhookconfigurations.config.karmada.io   2024-09-19T17:54:45Z
serviceexports.multicluster.x-k8s.io                         2024-09-19T17:54:45Z
serviceimports.multicluster.x-k8s.io                         2024-09-19T17:54:45Z
workloadrebalancers.apps.karmada.io                          2024-09-19T17:54:45Z
works.work.karmada.io                                        2024-09-19T17:54:45Z
khalil:~ % 

But I can apply the global resource proxy's ResourceRegistry in this kind cluster and get a list of pods and nodes:

khalil:~/ %  kubectl --context karmada-apiserver apply -f resourceregistry.yaml
resourceregistry.search.karmada.io/proxy-sample created
khalil:~/ % kubectl get pods -A
No resources found
khalil:~/ % cp ~/.kube/karmada.config karmada-proxy.config
# Edit kubeconfig file to include the proxy endpoint
khalil:~/ % export KUBECONFIG=./karmada-proxy.config
khalil:~/ % kubectl get pods -A
NAMESPACE            NAME                                            CREATED AT
kube-system          coredns-6f6b679f8f-4trfm                        2024-09-19T17:53:40Z
kube-system          coredns-6f6b679f8f-mkfq8                        2024-09-19T17:53:40Z
kube-system          etcd-member1-control-plane                      2024-09-19T17:53:32Z
kube-system          kindnet-z7q8r                                   2024-09-19T17:53:40Z
kube-system          kube-apiserver-member1-control-plane            2024-09-19T17:53:32Z
kube-system          kube-controller-manager-member1-control-plane   2024-09-19T17:53:33Z
kube-system          kube-proxy-cqpfv                                2024-09-19T17:53:40Z
kube-system          kube-scheduler-member1-control-plane            2024-09-19T17:53:32Z
kube-system          metrics-server-7db8654747-xsks5                 2024-09-19T17:55:02Z
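
(For reference, the kubeconfig edit mentioned above just points the cluster's server field at the search proxy path; roughly like this, with the host and port being placeholders for my karmada-apiserver address:)

# excerpt from karmada-proxy.config; host and port are placeholders
clusters:
- cluster:
    certificate-authority-data: <unchanged>
    server: https://<karmada-apiserver-host>:5443/apis/search.karmada.io/v1alpha1/proxying/karmada/proxy
  name: karmada-apiserver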

So am I missing a step to add these CRDs?

@RainbowMango
Member

By contrast, the cluster generated by hack/local-up-karmada.sh seems to have the same CRDs, leaving me puzzled:

The ResourceRegistry API is registered with the Karmada API server by the karmada-search component via API aggregation, not as a CRD.
That's why it doesn't show up in the CRD list.

So, for your case, you might need to install karmada-search, and you can use karmadactl addons to do that.
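
For example (a rough sketch; run it against the cluster where karmadactl init was executed, and adjust flags for your environment):

# enable the karmada-search addon on the host cluster
karmadactl addons enable karmada-search
# then confirm the karmada-search pod comes up
kubectl get pods -n karmada-system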

@KhalilSantana
Author

So, for your case, you might need to install karmada-search, and you can use karmadactl addons to do that.

I just got ResourceRegistry working in my VM cluster, thank you!

@KhalilSantana
Author

From various tests I think I now understand this section of the documentation:

Is creating supported?
For resources not defined in a ResourceRegistry, create requests are redirected to the Karmada control plane, so the resources are created there. For resources defined in a ResourceRegistry, the proxy doesn't know which cluster to create them in, and responds with a MethodNotSupported error.

Basically, I can't use the global resource proxy to create Pods with Karmada, which is rather unfortunate, but I believe using Deployments and PropagationPolicies should work for my goal with some adaptations, along the lines of the sketch below.
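
As a rough sketch of what I have in mind (the names, image, node hostname and member cluster name below are placeholders for my setup, not something I've validated yet):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder workload
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        kubernetes.io/hostname: vm1-node-01   # pin to a specific node by hostname
      containers:
      - name: app
        image: nginx
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: my-app-policy
  namespace: default
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: my-app
  placement:
    clusterAffinity:
      clusterNames:
      - vm1                    # send the Deployment to this member cluster only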

Is it possible to use the global proxy for CRDs such as NetworkAttachmentDefinitions? I've installed Multus+Whereabouts on the member clusters and created a sample NAD in one of them; listing via the member cluster's kubeconfig shows the NAD. I then installed just the NAD CRD into the karmada-apiserver and tried the following ResourceRegistry, but it doesn't show the NAD. Am I doing something wrong?

apiVersion: search.karmada.io/v1alpha1
kind: ResourceRegistry
metadata:
  name: proxy-sample
spec:
  targetCluster:
    clusterNames:
  resourceSelectors:
    - apiVersion: v1
      kind: Pod
    - apiVersion: v1
      kind: Node
    - apiVersion: apiextensions.k8s.io/v1 # also just 'v1' doesn't work
      kind: NetworkAttachmentDefinition

khalil: % kubectl apply -f resourceregistry.yaml
khalil: % export KUBECONFIG=~/.kube/members.config
khalil: % kubectl get pods --context member1
NAME                         READY   STATUS    RESTARTS   AGE
nginx-x86-676b6c5bbc-f77tt   1/1     Running   0          100m
khalil: % kubectl get network-attachment-definition
NAME                       AGE
test-nad-01                6m15s
test-nad-02                6m15s
khalil: % export KUBECONFIG=karmada.config # kubeconfig which uses the global proxy resource
khalil: % kubectl get network-attachment-definition #doesn't find the NADs
No resources found in default namespace.
# But the proxy is correctly proxying Pods
khalil: % kubectl get pods -o wide
NAME                         CREATED AT
nginx-x86-676b6c5bbc-f77tt   2024-09-25T19:49:05Z
nginx-iot-676b6c5bbc-6ms4k   2024-09-25T19:49:24Z
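
One thing I'm unsure about: NetworkAttachmentDefinition is served under the k8s.cni.cncf.io API group rather than apiextensions.k8s.io, so perhaps the selector needs to be something like this instead (untested):

    - apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition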
