cluster-proxy-addon
uses a reverse proxy server (apiserver-network-proxy, ANP) to send requests from the hub to managed clusters.
It also contains the e2e tests for the overall cluster-proxy-addon case.
This feature has 7 relevant repos:
- cluster-proxy-addon
  - Contains the ANP image (currently version 0.0.24).
  - Contains the HTTP-User-Server image, which lets users access the cluster-proxy over HTTP.
- cluster-proxy
  - The main body of the addon.
  - Differs from the upstream cluster-proxy repo in the build and deploy parts, to suit downstream needs.
- backplane-operator
  - The latest chart changes should be made in the backplane-operator templates.
- cluster-proxy-addon-chart
  - Only for release-2.5 and earlier versions.
  - Uses images from cluster-proxy-addon and cluster-proxy to deploy the addon.
  - The repo is leveraged by multiclusterhub-operator and multiclusterhub-repo.
- release
  - Contains the CI/CD steps of the addon.
- apiserver-network-proxy
  - A fork of the upstream repo to support downstream needs.
- grpc-go
  - A fork of the upstream repo to support proxy-in-middle cases.
  - Used in: https://github.com/stolostron/apiserver-network-proxy/blob/d562699c78201daef7dec97cd1847e5abffbe2ab/go.mod#L5C43-L5C72
The cluster-proxy-addon exposes a Route for users in the outside world to access the managed clusters. For internal operators/controllers/services, however, it's recommended to use the Service to access the managed clusters; the Service is also more efficient than the Route on the internal network.

Here is a piece of code showing how to use the cluster-proxy-addon Service to access a managed cluster's pod logs (the full example can be found in multicloud-operators-foundation):
```go
// There must be a managedserviceaccount with a proper rolebinding in the managed cluster.
logTokenSecret, err := c.KubeClient.CoreV1().Secrets(clusterName).Get(ctx, helpers.LogManagedServiceAccountName, v1.GetOptions{})
if err != nil {
	return nil, fmt.Errorf("failed to get log token secret in cluster %s: %v", clusterName, err)
}

// Configure a Kubernetes rest.Config.
clusterProxyCfg := &rest.Config{
	// The `ProxyServiceHost` normally is the service domain name of the cluster-proxy-addon user-server:
	// cluster-proxy-addon-user.<component namespace>.svc:9092
	Host: fmt.Sprintf("https://%s/%s", c.ProxyServiceHost, clusterName),
	TLSClientConfig: rest.TLSClientConfig{
		// The CAFile must be the openshift-service-ca.crt, because the user-server uses the OpenShift
		// service CA to sign its certificate. You can mount openshift-service-ca.crt into the pod from
		// the ConfigMap named `openshift-service-ca.crt` that exists in every namespace.
		CAFile: c.ProxyServiceCAFile,
	},
	BearerToken: string(logTokenSecret.Data["token"]),
}

clusterProxyKubeClient, err := kubernetes.NewForConfig(clusterProxyCfg)
if err != nil {
	return nil, err
}
```
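With the client in place, reading pod logs from the managed cluster is a normal client-go call. A minimal usage sketch, assuming `corev1` is `k8s.io/api/core/v1`; the pod and namespace names below are illustrative, not from the original example:

```go
// Fetch logs for an illustrative pod through the cluster-proxy chain.
podLogs, err := clusterProxyKubeClient.CoreV1().Pods("default").
	GetLogs("example-pod", &corev1.PodLogOptions{}).
	DoRaw(ctx)
if err != nil {
	return nil, fmt.Errorf("failed to get pod logs via cluster-proxy: %v", err)
}
fmt.Println(string(podLogs))
```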
The community version of cluster-proxy supports `grpc` mode; does the cluster-proxy-addon support it?

No, the cluster-proxy-addon doesn't support `grpc` mode; it only supports `http` mode.
This is because, for security reasons, the cluster-proxy-addon sets the flag `--enable-kube-api-proxy` to `false`. With the flag set to `false`, the cluster-proxy won't use the managed cluster name as one of the agent-identifiers. We avoid using the managed cluster name as an agent-identifier because in some customers' environments the managed cluster name begins with numbers, which is not a valid domain name, and the agent identifier is used as the domain name in `grpc` mode.
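The domain-name constraint is easy to check with the Kubernetes apimachinery validation helpers; a minimal sketch (the cluster names below are made up for illustration, and this is not the addon's own validation code):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	// RFC 1035 DNS labels must start with an alphabetic character, so a
	// managed cluster named "123-cluster" can't be used as a host name
	// label, while "cluster-123" can.
	fmt.Println(validation.IsDNS1035Label("123-cluster")) // non-empty slice: invalid
	fmt.Println(validation.IsDNS1035Label("cluster-123")) // empty slice: valid
}
```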
Currently, all requests from the hub to the managed cluster follow this pattern:
client -> user-server -> proxy-server(ANP) -> proxy-agent(ANP) -> service-proxy -> target-service
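As a concrete picture of that chain, here is a hedged, self-contained sketch that plays the client role with a plain HTTPS request to the user-server. The host, namespace, cluster name, CA mount path, and token environment variable are all assumptions for illustration; the path after the cluster name is an ordinary Kubernetes API path, mirroring the rest.Config example above:

```go
package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Illustrative values; replace with your environment's.
	proxyServiceHost := "cluster-proxy-addon-user.multicluster-engine.svc:9092"
	clusterName := "cluster1"

	// Trust the OpenShift service CA that signs the user-server certificate.
	caPEM, err := os.ReadFile("/run/service-ca/service-ca.crt") // assumed mount path
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}

	// client -> user-server: everything after the cluster name is a normal
	// Kubernetes API path, forwarded through proxy-server and proxy-agent
	// to the managed cluster.
	url := fmt.Sprintf("https://%s/%s/api/v1/namespaces/default/pods", proxyServiceHost, clusterName)
	req, err := http.NewRequestWithContext(context.Background(), http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("MANAGED_SA_TOKEN"))

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```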