Verify status.loadBalancer when deploying a service of type LoadBalancer #547

Merged: 1 commit, Sep 10, 2019

Conversation

@dirceu (Contributor) commented Sep 6, 2019

What are you trying to accomplish with this PR?

Fixes part of #543.

Make sure that we only declare a service of type LoadBalancer as deployed after its IP address is published (which is an asynchronous process in the case of most cloud providers).

According to the documentation:

The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service’s .status.loadBalancer field

Once the load balancer is fully provisioned, the Service's status looks something like this:

status:
  loadBalancer:
    ingress:
      - ip: 146.148.47.155

How is this accomplished?

I added an extra condition that will only run if the type of the Service is LoadBalancer; if that's the case, it will check for the presence of status.loadBalancer.ingress.ip to determine if the Service has been fully provisioned.
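As a rough sketch of that condition (this is not the actual kubernetes-deploy code, which lives in lib/kubernetes-deploy/kubernetes_resource/service.rb; the method name and the assumption that `service` is the parsed resource hash, as returned by `kubectl get -o json`, are hypothetical):

```ruby
# Hypothetical sketch of the readiness check described above.
# `service` is assumed to be the parsed Service resource as a Ruby hash.
def load_balancer_provisioned?(service)
  # Services that are not of type LoadBalancer are unaffected by this check.
  return true unless service.dig("spec", "type") == "LoadBalancer"

  # Cloud providers publish the address asynchronously; until they do,
  # status.loadBalancer.ingress is absent or empty.
  ingress = service.dig("status", "loadBalancer", "ingress") || []
  ingress.any? { |entry| entry["ip"] }
end
```

With this in place, a LoadBalancer Service is only reported as deployed once an IP appears under status.loadBalancer.ingress, matching the documented asynchronous provisioning behavior.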

What could go wrong?

It seems that testing this particular condition is not possible with minikube, as (specifically in that environment) the IP address is never published to the resource data and you need a special command to retrieve it.

Live test

$ kubernetes-deploy dirceu-test tierstaging-us-east1-2 --template-dir ~/Desktop
[INFO][2019-09-09 08:28:54 -0400]
[INFO][2019-09-09 08:28:54 -0400]	------------------------------------Phase 1: Initializing deploy------------------------------------
[INFO][2019-09-09 08:28:55 -0400]	Context tierstaging-us-east1-2 found
[INFO][2019-09-09 08:28:55 -0400]	Namespace dirceu-test found
[INFO][2019-09-09 08:28:55 -0400]	All required parameters and files are present
[INFO][2019-09-09 08:28:55 -0400]	Discovering resources:
[INFO][2019-09-09 08:28:56 -0400]	  - Service/test-lb
[INFO][2019-09-09 08:28:56 -0400]
[INFO][2019-09-09 08:28:56 -0400]	----------------------------Phase 2: Checking initial resource statuses-----------------------------
[INFO][2019-09-09 08:28:56 -0400]	Service/test-lb                                   Not found
[INFO][2019-09-09 08:28:56 -0400]
[INFO][2019-09-09 08:28:56 -0400]	----------------------------------Phase 3: Deploying all resources----------------------------------
[INFO][2019-09-09 08:28:56 -0400]	Deploying Service/test-lb (timeout: 420s)
[INFO][2019-09-09 08:29:30 -0400]	Still waiting for: Service/test-lb
[INFO][2019-09-09 08:30:00 -0400]	Still waiting for: Service/test-lb
[INFO][2019-09-09 08:30:30 -0400]	Still waiting for: Service/test-lb
[INFO][2019-09-09 08:30:54 -0400]	Successfully deployed in 118.0s: Service/test-lb
[INFO][2019-09-09 08:30:54 -0400]
[INFO][2019-09-09 08:30:54 -0400]	------------------------------------------Result: SUCCESS-------------------------------------------
[INFO][2019-09-09 08:30:54 -0400]	Pruned 1 resource and successfully deployed 1 resource
[INFO][2019-09-09 08:30:54 -0400]
[INFO][2019-09-09 08:30:54 -0400]	Successful resources
[INFO][2019-09-09 08:30:54 -0400]	Service/test-lb                                   Doesn't require any endpoints

$ kubectl --context tierstaging-us-east1-2 -n dirceu-test get service test-lb -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"test-lb","name":"test-lb"},"name":"test-lb","namespace":"dirceu-test"},"spec":{"ports":[{"name":"http","port":80,"targetPort":"http"}],"type":"LoadBalancer"}}
  creationTimestamp: "2019-09-09T12:28:57Z"
  labels:
    app: test-lb
    name: test-lb
  name: test-lb
  namespace: dirceu-test
  resourceVersion: "39073112"
  selfLink: /api/v1/namespaces/dirceu-test/services/test-lb
  uid: 64fe58ff-d2fd-11e9-8940-58010a8e017b
spec:
  clusterIP: <REDACTED>
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 32731
    port: 80
    protocol: TCP
    targetPort: http
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: <REDACTED>

@dirceu dirceu requested a review from a team September 6, 2019 18:56
@dturn (Contributor) left a comment


Can you 🎩 and post the output? I'd like to make sure that reality matches the docs.

Review comment on lib/kubernetes-deploy/kubernetes_resource/service.rb (outdated, resolved)
@dirceu dirceu force-pushed the add-loadbalancer-condition branch 2 times, most recently from 7891ac5 to 7552dc1, September 9, 2019 12:38
@dirceu dirceu requested a review from dturn September 9, 2019 12:44
@dirceu dirceu requested a review from a team September 10, 2019 12:18
@RyanBrushett (Contributor) left a comment


lgtm

@dturn (Contributor) commented Sep 10, 2019

You can merge this PR as is, but ❤️ if you'd add an entry to CHANGELOG.md for this enhancement. (We're trying to do this in each PR, since doing it all at once in the PR that bumps the gem version is a bit painful.)

@dirceu dirceu merged commit 3d5cf59 into master Sep 10, 2019
@dirceu dirceu deleted the add-loadbalancer-condition branch September 10, 2019 18:07
@douglas douglas mentioned this pull request Sep 20, 2019
@douglas douglas temporarily deployed to rubygems September 20, 2019 20:14