Allow users to customize how the impersonation proxy is deployed on a cluster. #617

Closed
mattmoyer opened this issue May 13, 2021 · 2 comments
Labels
enhancement New feature or request estimate/L Estimated effort/complexity/risk is large state/accepted All done!

Comments

mattmoyer commented May 13, 2021

We have an internal ConfigMap API today that lets you force the proxy on or off, or deploy it with a Service that's provisioned manually out-of-band. We should expose this level of control (and more) to users installing the Concierge on a cluster.

This is useful if you want to create a Service with particular options (as in #605) or if you want to disable the automatic Service provisioning and provision your own load balancing out-of-band.

Acceptance Criteria

Scenario: when installing the Concierge, I want to force the impersonation proxy to be enabled
Given I have a running cluster with control-plane nodes (e.g., Kind)
When I install the Concierge with a "mode: enabled"
Then I see that the impersonation Service is provisioned despite the autodetection
And the proxy service serves a certificate with the external name of the Service
Scenario: when installing the Concierge, I want to force the impersonation proxy to be disabled
Given I have a running cluster without control-plane nodes (e.g., a cloud provider cluster)
When I install the Concierge with a "mode: disabled"
Then I see that the impersonation Service is not provisioned despite the autodetection
Scenario: when installing the Concierge, I want to use an externally-provisioned load balancer
Given I have a running cluster
When I install the Concierge with a "mode: enabled" and "service.type: None" and "externalEndpoint: example.com:1234"
Then I see that the impersonation Service is not provisioned
And the proxy port is listening with a certificate issued for "example.com"
And the CredentialIssuer status advertises "https://example.com:1234/" as the impersonation proxy endpoint
Scenario: when installing the Concierge, I want to use an automatically provisioned load balancer Service with custom options
Given I have a running cluster
When I install the Concierge with a "mode: enabled" and "service: {annotations: {k: v}, loadBalancerIP: 1.2.3.4}"
Then I see that the impersonation proxy Service is provisioned
And the Service has the intended annotations and load balancer IP
And the CredentialIssuer status advertises the external name of the Service as the impersonation proxy endpoint
And the proxy service serves a certificate with IP "1.2.3.4"
Scenario: when installing the Concierge, I want to use an automatically provisioned Service of type ClusterIP
Given I have a running cluster
When I install the Concierge with a "mode: enabled" and "service: {type: ClusterIP}"
Then I see that the impersonation proxy Service is provisioned
And the Service has the intended type ClusterIP
And the CredentialIssuer status advertises the cluster IP of the Service as the impersonation proxy endpoint
And the proxy service serves a certificate with the cluster IP
Scenario: when installing the Concierge, I want to use an automatically provisioned load balancer with custom options and custom DNS
Given I have a running cluster
When I install the Concierge with a "mode: enabled", "service: {annotations: {k: v}, loadBalancerIP: 1.2.3.4}", and "externalEndpoint: example.com:1234"
Then I see that the impersonation proxy Service is provisioned
And the Service has the intended annotations and load balancer IP
And the CredentialIssuer status advertises "https://example.com:1234/" as the impersonation proxy endpoint
And the proxy service serves a certificate with name "example.com"
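To make the "CredentialIssuer status advertises ..." steps above concrete, the status for the externally-provisioned load balancer scenario might look roughly like the sketch below. The exact field names here are assumptions modeled on the existing strategies list, not a final API:

```yaml
# Hypothetical status sketch; field names are assumptions, not a final API
status:
  strategies:
  - type: ImpersonationProxy
    status: Success
    reason: Listening
    message: impersonation proxy is ready to accept client connections
    frontend:
      type: ImpersonationProxy
      impersonationProxyInfo:
        endpoint: https://example.com:1234/
        certificateAuthorityData: <base64 CA bundle>
```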

Example CredentialIssuer YAML

Default value (API)

By default, the proxy will be disabled. This is safe but non-functional on many clusters:

apiVersion: config.concierge.pinniped.dev/v1alpha1
kind: CredentialIssuer
metadata:
  name: api-defaults
spec: {}

Default value (website)

In the default installation YAML we provide on https://get.pinniped.dev/latest/install-pinniped-concierge.yaml, we'll enable the proxy in fully-automatic mode with some useful default annotations:

apiVersion: config.concierge.pinniped.dev/v1alpha1
kind: CredentialIssuer
metadata:
  name: pinniped-website-default
spec:
  impersonationProxy:
    # Enable the proxy if and only if there are no control plane nodes
    mode: auto

    # Provision a Service of type LoadBalancer with default options to tweak the idle timeout on EKS
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000"

Enable the proxy but with no Service annotations

If you set mode: auto but do not specify any service, Pinniped will provision the Service with no annotations or special settings:

apiVersion: config.concierge.pinniped.dev/v1alpha1
kind: CredentialIssuer
metadata:
  name: provision-service-with-no-options
spec:
  impersonationProxy:
    # Enable the proxy iff there are no control plane nodes
    mode: auto

    # Provision a Service of type LoadBalancer with default options (no annotations)

Automatically create a Service, but default to an internal load balancer

This is somewhat safer than the "pinniped-website-default" configuration, but does not work out-of-the-box on many cloud provider clusters. This could be useful as a default for unattended installations:

apiVersion: config.concierge.pinniped.dev/v1alpha1
kind: CredentialIssuer
metadata:
  name: internal-service
spec:
  impersonationProxy:
    # Enable the proxy iff there are no control plane nodes
    mode: auto

    # Provision a Service of type LoadBalancer with options to make it private/internal on major cloud providers
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        networking.gke.io/load-balancer-type: "Internal"
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000"

Bring your own Service

This configuration assumes that you will configure some kind of Service or other load balancing out of band. Pinniped should only issue an appropriate certificate and serve the proxy on the appropriate port:

apiVersion: config.concierge.pinniped.dev/v1alpha1
kind: CredentialIssuer
metadata:
  name: provision-out-of-band
spec:
  impersonationProxy:
    # Force-enable the proxy 
    mode: enabled

    # Do not provision a Service
    service:
      type: None
    
    # Advertise a public endpoint for the proxy at some arbitrary host/port
    externalEndpoint: "impersonation-proxy.example.com"
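As one sketch of the out-of-band half of this configuration, you might pair it with a manually managed Service that selects the Concierge pods. The namespace, pod label, and target port below are assumptions about a typical install, not fixed names:

```yaml
# Hypothetical out-of-band Service; namespace, selector, and ports are assumptions
apiVersion: v1
kind: Service
metadata:
  name: my-impersonation-proxy
  namespace: pinniped-concierge  # assumed install namespace
spec:
  type: LoadBalancer
  selector:
    app: pinniped-concierge  # assumed pod label
  ports:
  - port: 443
    targetPort: 8444  # assumed impersonation proxy container port
```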

ClusterIP Service for cluster-local clients

This configuration is useful if all the clients (e.g., Kubeapps) are running within the cluster, so the proxy does not need to be exposed outside the cluster:

apiVersion: config.concierge.pinniped.dev/v1alpha1
kind: CredentialIssuer
metadata:
  name: provision-cluster-ip
spec:
  impersonationProxy:
    # Force-enable the proxy 
    mode: enabled

    # Provision a service of type ClusterIP
    service:
      type: ClusterIP

Pre-provisioned external IP address with DNS

This configuration assumes that you have pre-reserved an external IP address that is valid for your cluster's load balancer implementation. You also have a DNS name pointed at this IP, which is what you would like your users to reference in their kubeconfigs:

apiVersion: config.concierge.pinniped.dev/v1alpha1
kind: CredentialIssuer
metadata:
  name: pre-provisioned-external-ip-with-dns
spec:
  impersonationProxy:
    # Enable the proxy iff there are no control plane nodes
    mode: auto

    # Provision a Service with a particular load balancer IP
    service:
      type: LoadBalancer # (default, could be omitted)
      loadBalancerIP: 30.0.0.x

    # Advertise an external DNS name that points at the 30.0.0.x IP
    externalEndpoint: impersonation-proxy.example.com

Integration with external-dns via annotation

A similar configuration would be to use kubernetes-sigs/external-dns to manage the DNS entry, based on another annotation:

apiVersion: config.concierge.pinniped.dev/v1alpha1
kind: CredentialIssuer
metadata:
  name: integration-with-external-dns
spec:
  impersonationProxy:
    # Enable the proxy iff there are no control plane nodes
    mode: auto

    # Provision a Service with options that tell external-dns to map an external DNS name to the IP we get assigned
    service:
      type: LoadBalancer # (default, could be omitted)
      annotations:
        external-dns.alpha.kubernetes.io/hostname: impersonation-proxy.example.com.
        external-dns.alpha.kubernetes.io/ttl: "300"

    # Advertise an external DNS name that gets maintained by kubernetes-sigs/external-dns
    externalEndpoint: impersonation-proxy.example.com

Custom TLS (not in scope for this issue)

These options are not meant to be implemented as part of this issue, but are designed as examples of how we can extend this API design to future use cases:

apiVersion: config.concierge.pinniped.dev/v1alpha1
kind: CredentialIssuer
metadata:
  name: using-custom-certificates
spec:
  impersonationProxy:
    # Enable the proxy iff there are no control plane nodes
    mode: auto

    # Provision a service with a particular external IP
    service:
      type: LoadBalancer # (default, could be omitted)
      loadBalancerIP: 30.0.0.x

    # Advertise an external DNS name that points at the 30.0.0.x IP
    externalEndpoint: impersonation-proxy.example.com

    tls:
      # Advertise this CA as the bundle to trust in the CredentialIssuer
      # If empty, advertise it as empty and let clients use a system CA bundle
      certificateAuthorityData: "<my-corp-ca>"

      # Point to some certificate/key to serve with, this would be a cert for "impersonation-proxy.example.com"
      secretName: my-tls-cert
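The referenced Secret would presumably be a standard kubernetes.io/tls Secret in the Concierge's namespace; the namespace and data values below are placeholders:

```yaml
# Placeholder TLS Secret; namespace and data values are assumptions
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-cert
  namespace: pinniped-concierge  # assumed install namespace
type: kubernetes.io/tls
data:
  tls.crt: <base64 PEM certificate for impersonation-proxy.example.com>
  tls.key: <base64 PEM private key>
```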

Edited 05/17 to switch from boolean "disable" to enum "mode" and add ClusterIP support (diff)

@pinniped-ci-bot pinniped-ci-bot added enhancement New feature or request priority/backlog Prioritized for an upcoming iteration labels May 13, 2021
@pinniped-ci-bot pinniped-ci-bot added the estimate/L Estimated effort/complexity/risk is large label May 13, 2021
@pinniped-ci-bot pinniped-ci-bot added the state/started Someone is working on it currently label May 13, 2021
jeuniii commented May 14, 2021

@mattmoyer

Thanks for reporting this issue in such detail, and it looks like someone is now working on it, which is great! I personally believe that handing the configuration of the impersonation proxy over to the user is the right way to go, given the different use cases.

Our use case is quite simple, as I mentioned in #605: Pinniped and Kubeapps (the client) live on the same cluster, so a Service of type LoadBalancer is not needed and communication can be established internally using a ClusterIP alone. The resulting certificate can then contain the ClusterIP address.

mattmoyer (Author) commented

Thanks for the feedback @jeuniii (here and in Slack). I updated the issue to allow for Services of type: ClusterIP as well.
