
TODO:

migrate all the things to helm: https://github.com/newtonjose/hasura-k8s-stack/tree/master/hasura-chart

works on kind in Docker for Windows if you are using MetalLB; see the sketch below for a rough idea of how to set that up
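
A rough sketch of what that MetalLB setup could look like, assuming the v0.9.x ConfigMap-style install; the version and the address range are assumptions, so pick free IPs from the kind Docker network (`docker network inspect kind`):

```sh
# assumption: MetalLB v0.9.x style install with a ConfigMap-based config (not part of this repo)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)"

# Layer 2 address pool; the range below is a placeholder for free IPs on the kind Docker network
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.200-172.18.255.250
EOF
```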

```sh
kubectl apply -f postgres/secret.yaml
mkdir /tmp/k8s-hasura-test
kubectl apply -f postgres/persistvol.yaml  # this MUST FINISH before you can go forth: kubernetes/kubernetes#44370 (comment)
kubectl apply -f postgres/pvc.yaml
kubectl apply -f postgres/deployment-service.yaml
```
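
For orientation, a hypothetical stand-in for the kind of secret and hostPath PersistentVolume those postgres manifests would define; the names, password, and size below are illustrative, not the repo's actual files:

```sh
# hypothetical stand-in for postgres/secret.yaml and postgres/persistvol.yaml; names/values are assumptions
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
stringData:
  POSTGRES_PASSWORD: change-me
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/k8s-hasura-test   # matches the mkdir above
EOF
```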

```sh
# Deployments report an Available condition (there is no Running condition)
kubectl wait --for=condition=Available --timeout=180s deployments/postgres
```
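
If the wait gives you trouble, `kubectl rollout status` is another way to block until the Postgres deployment's pods are ready:

```sh
# blocks until the rollout completes or the timeout is hit
kubectl rollout status deployment/postgres --timeout=180s
```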

```sh
kubectl apply -f hasura/secret.yaml
kubectl apply -f hasura/deployment-service.yaml
```
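
As a hypothetical sketch of the values a secret like hasura/secret.yaml would carry: the env var names are Hasura's standard ones, the admin secret matches the "accessKey" used with `hasura console` further down, and the database URL is a guess at the in-cluster Postgres service:

```sh
# hypothetical stand-in for hasura/secret.yaml; values are assumptions, keys are standard Hasura env vars
kubectl create secret generic hasura-secret \
  --from-literal=HASURA_GRAPHQL_ADMIN_SECRET=accessKey \
  --from-literal=HASURA_GRAPHQL_DATABASE_URL='postgres://postgres:change-me@postgres:5432/postgres' \
  --dry-run=client -o yaml | kubectl apply -f -
```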

```sh
kubectl apply -f nginx-ingress/mandatory.yaml      # creates the ingress-nginx namespace, RBAC, and controller, so apply it first
kubectl apply -f nginx-ingress/cloud-generic.yaml  # LoadBalancer service that lands in that namespace
```
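
It can help to wait for the controller before creating the Ingress; the deployment name below matches the upstream mandatory.yaml and is an assumption about this repo's copy:

```sh
# assumes the controller Deployment is named nginx-ingress-controller, as in upstream mandatory.yaml
kubectl wait -n ingress-nginx --for=condition=Available --timeout=180s deployment/nginx-ingress-controller
```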

```sh
kubectl apply -f hasura/ingress-insecure.yaml
```
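
Roughly, an insecure Ingress like hasura/ingress-insecure.yaml would route the kubernetes.docker.internal host (used by the console below) to the Hasura service; the service name, port, and apiVersion here are assumptions (older clusters need networking.k8s.io/v1beta1):

```sh
# hypothetical stand-in for hasura/ingress-insecure.yaml; service name/port are assumptions
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hasura-insecure
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: kubernetes.docker.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hasura
                port:
                  number: 80
EOF
```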

```sh
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v0.16.0 \
  --set installCRDs=true
```

```sh
kubectl wait --for=condition=Available --timeout=180s deployment/cert-manager-webhook -n cert-manager
kubectl apply -f cert-manager/dev-self-signed-cert-and-issuer.yaml
```
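
For reference, a hypothetical sketch of the sort of self-signed issuer plus certificate that dev-self-signed-cert-and-issuer.yaml would hold; the names and DNS entry are assumptions, and the apiVersion is the older one that cert-manager v0.16 serves:

```sh
# hypothetical stand-in for cert-manager/dev-self-signed-cert-and-issuer.yaml; names are assumptions
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: dev-tls
spec:
  secretName: dev-tls
  dnsNames:
    - kubernetes.docker.internal
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
EOF
```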

create the Let's Encrypt staging and prod issuers

only do this on non-local clusters... if your cluster is on localhost, use the self-signed stuff

otherwise the cluster stops responding to kubectl and you have to delete it and restart; I think cert-manager enters an infinite loop of failing to get certs from Let's Encrypt

```sh
kubectl apply -f cert-manager/le-staging-issuer.yaml
kubectl apply -f cert-manager/le-prod-issuer.yaml
```
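
A hypothetical sketch of the ACME staging issuer le-staging-issuer.yaml would contain, using the HTTP-01 solver through the nginx ingress class; the issuer name and email are placeholders, and the prod issuer would differ mainly in the server URL:

```sh
# hypothetical stand-in for cert-manager/le-staging-issuer.yaml; name/email are placeholders
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
EOF
```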

```sh
# each port-forward blocks, so run these in separate terminals (binding 443 may need elevated privileges)
kubectl -n ingress-nginx port-forward --address localhost,0.0.0.0 service/ingress-nginx 8080:80
kubectl -n ingress-nginx port-forward --address localhost,0.0.0.0 service/ingress-nginx 443:443
kubectl port-forward --address localhost,0.0.0.0 deployment/postgres 5432:5432
```

```sh
hasura console --endpoint https://kubernetes.docker.internal/ --insecure-skip-tls-verify --admin-secret "accessKey"
```

visit http://kubernetes.docker.internal/console/

note: I think the redirect URL at the root / route is broken if you don't have HTTPS set up... still working on a solution for that; just go straight to /console as above

useful commands

```sh
kubectl get ingress

kubectl get events  # super useful for debugging weirdness with PersistentVolume stuff
```

scratch

sources:

https://hasura.io/docs/1.0/graphql/manual/deployment/kubernetes/updating.html#kubernetes-update
https://stackoverflow.com/questions/59255445/how-can-i-access-nginx-ingress-on-my-local