cmd/clusterctl/examples/openstack/cluster.yaml: 5 changes (4 additions, 1 deletion)
@@ -15,4 +15,7 @@ spec:
kind: "OpenstackProviderSpec"
tags:
- a_cluster_wide_tag
masterIP: <master ip>
clusterConfiguration:
controlPlaneEndpoint: <apiServerLoadBalancer or master IP>:6443
kubernetesVersion: 1.15.0

@@ -7,45 +7,22 @@ NAMESPACE={{ .Machine.ObjectMeta.Namespace }}
MACHINE=$NAMESPACE
MACHINE+="/"
MACHINE+={{ .Machine.ObjectMeta.Name }}
CONTROL_PLANE_VERSION={{ .Machine.Spec.Versions.ControlPlane }}
CLUSTER_DNS_DOMAIN={{ .Cluster.Spec.ClusterNetwork.ServiceDomain }}
POD_CIDR={{ .PodCIDR }}
SERVICE_CIDR={{ .ServiceCIDR }}
ARCH=amd64

swapoff -a
# disable swap in fstab
sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab
# Getting master ip from the metadata of the node. By default we try the public-ipv4
# If we don't get any, we fall back to local-ipv4 and in the worst case to localhost
MASTER=""

for i in $(seq 60); do
echo "trying to get public-ipv4 $i / 60"
MASTER=$(curl --fail -s http://169.254.169.254/2009-04-04/meta-data/public-ipv4)
if [[ $? == 0 ]] && [[ -n "$MASTER" ]]; then
break
fi
sleep 1
done

# Getting local ip from the metadata of the node.
echo "Getting local ip from metadata"
for i in $(seq 60); do
echo "trying to get local-ipv4 $i / 60"
OPENSTACK_IPV4_LOCAL=$(curl --fail -s http://169.254.169.254/latest/meta-data/local-ipv4)
if [[ $? == 0 ]] && [[ -n "$OPENSTACK_IPV4_LOCAL" ]]; then
break
fi
sleep 1
done

if [[ -z "$MASTER" ]]; then
echo "falling back to local-ipv4"
for i in $(seq 60); do
echo "trying to get local-ipv4 $i / 60"
MASTER=$(curl --fail -s http://169.254.169.254/2009-04-04/meta-data/local-ipv4)
if [[ $? == 0 ]] && [[ -n "$MASTER" ]]; then
break
fi
sleep 1
done
fi

if [[ -z "$MASTER" ]]; then
echo "falling back to localhost"
MASTER="localhost"
fi
MASTER="${MASTER}:443"

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
@@ -124,81 +101,17 @@ EOF

# Set up kubeadm config file to pass parameters to kubeadm init.
cat > /etc/kubernetes/kubeadm_config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: ${TOKEN}
ttl: 24h0m0s
usages:
- signing
- authentication
localAPIEndpoint:
bindPort: 443
nodeRegistration:
criSocket: /var/run/dockershim.sock
kubeletExtraArgs:
cloud-config: /etc/kubernetes/cloud.conf
cloud-provider: openstack
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v${CONTROL_PLANE_VERSION}
apiServer:
extraArgs:
cloud-config: /etc/kubernetes/cloud.conf
cloud-provider: openstack
extraVolumes:
- hostPath: /etc/kubernetes/cloud.conf
mountPath: /etc/kubernetes/cloud.conf
name: cloud
readOnly: true
- hostPath: "/etc/certs/cacert"
Contributor:
this was missed in userdata/kubeadm.go

@sbueringer (Member, Author), Aug 13, 2019:
Just out of curiosity, do you know why this is needed? We don't need this on-premises, but our registry has a regular certificate.

Contributor:
Are you using https or http? For me it's an https environment, and you connect to the OpenStack cloud through the cacert. Unfortunately my env just broke, so I can't show the error I had; it was the kube-controller container failing to start because it couldn't find this file.

So if we don't want the machines we create to be able to talk to the OpenStack cloud, we can avoid this, but then we need a fix somewhere else (I need to find it later) to make the kube-controller able to start. Or we add the cacert here; after all, it was there before this PR.

@sbueringer (Member, Author), Aug 13, 2019:
No, it's okay, I'll add it back. I just wanted to understand what it's used for. I'll try to find out how it's done in our environments.

@sbueringer (Member, Author), Aug 13, 2019:
Okay, I overlooked it in our on-prem installation because we're using the OpenStack cloud controller manager there instead.

But I'm not sure how my cluster installation on CoreOS currently works. I'm:

  • using https
  • not configuring a cacert and not ignoring TLS verification
  • the CoreOS example user data uses the mount point but never actually writes the cacert file

(Maybe it's not working on CoreOS right now, but I'll get ready nodes; I never tried Cinder, though.)
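For comparison, the CentOS node template in this PR does write that file before kubeadm runs (see the hunk further down):

echo $OPENSTACK_CLOUD_CACERT_CONFIG | base64 -d > /etc/certs/cacert

so presumably the CoreOS templates would need an equivalent storage entry that actually writes /etc/certs/cacert before mounting it; the line above is taken from the CentOS script, not from the CoreOS config.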

Contributor:
@jichenjc Unfortunately I don't have a self-signed Keystone environment right now.
@sbueringer This cacert is for using self-signed Keystone; it becomes ca-file in cloud.conf.
https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#global
I only tested Ubuntu, though.
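To make that concrete, a cloud.conf written the same way the scripts above write their other files might look roughly like this; it is only a sketch, the [Global] auth values are placeholders, and the one line that matters for this thread is ca-file pointing at the mounted /etc/certs/cacert:

cat <<EOF > /etc/kubernetes/cloud.conf
[Global]
auth-url=https://keystone.example.com:5000/v3
username=<user>
password=<password>
tenant-name=<project>
domain-name=<domain>
region=<region>
ca-file=/etc/certs/cacert
EOF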

Contributor:
I can paste my test env later. I definitely need the cacert; otherwise the whole OpenStack API can't be called.

@sbueringer (Member, Author):
I added the mount point to the controller-manager config.

mountPath: "/etc/certs/cacert"
name: cacert
readOnly: true
timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ${MASTER}
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
cloud-config: /etc/kubernetes/cloud.conf
cloud-provider: openstack
cluster-cidr: ${POD_CIDR}
service-cluster-ip-range: ${SERVICE_CIDR}
extraVolumes:
- hostPath: /etc/kubernetes/cloud.conf
mountPath: /etc/kubernetes/cloud.conf
name: cloud
readOnly: true
- hostPath: "/etc/certs/cacert"
mountPath: "/etc/certs/cacert"
name: cacert
readOnly: true
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: ${SERVICE_CIDR}
{{ .KubeadmConfig }}
EOF

echo "Replacing OPENSTACK_IPV4_LOCAL in kubeadm_config through ${OPENSTACK_IPV4_LOCAL}"
/usr/bin/sed -i "s#\${OPENSTACK_IPV4_LOCAL}#${OPENSTACK_IPV4_LOCAL}#" /etc/kubernetes/kubeadm_config.yaml

kubeadm init -v 10 --config /etc/kubernetes/kubeadm_config.yaml
for tries in $(seq 1 60); do
kubectl --kubeconfig /etc/kubernetes/kubelet.conf annotate --overwrite node $(hostname -s) machine=${MACHINE} && break
sleep 1
done

# By default, use calico for container network plugin, should make this configurable.
kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

@@ -3,14 +3,11 @@ set -e
set -x
(
KUBELET_VERSION={{ .Machine.Spec.Versions.Kubelet }}
TOKEN={{ .Token }}
MASTER={{ call .GetMasterEndpoint }}
NAMESPACE={{ .Machine.ObjectMeta.Namespace }}
MACHINE=$NAMESPACE
MACHINE+="/"
MACHINE+={{ .Machine.ObjectMeta.Name }}
CLUSTER_DNS_DOMAIN={{ .Cluster.Spec.ClusterNetwork.ServiceDomain }}
POD_CIDR={{ .PodCIDR }}
SERVICE_CIDR={{ .ServiceCIDR }}
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
@@ -54,21 +51,7 @@ echo $OPENSTACK_CLOUD_CACERT_CONFIG | base64 -d > /etc/certs/cacert

# Set up kubeadm config file to pass to kubeadm join.
cat > /etc/kubernetes/kubeadm_config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
bootstrapToken:
apiServerEndpoint: ${MASTER}
token: ${TOKEN}
unsafeSkipCAVerification: true
timeout: 5m0s
tlsBootstrapToken: ${TOKEN}
nodeRegistration:
criSocket: /var/run/dockershim.sock
kubeletExtraArgs:
cloud-config: /etc/kubernetes/cloud.conf
cloud-provider: openstack
{{ .KubeadmConfig }}
EOF

systemctl enable kubelet.service
@@ -82,6 +65,5 @@ for tries in $(seq 1 60); do
kubectl --kubeconfig /etc/kubernetes/kubelet.conf annotate --overwrite node $(hostname -s) machine=${MACHINE} && break
sleep 1
done

echo done.
) 2>&1 | tee /var/log/startup.log
@@ -85,51 +85,7 @@ storage:
- path: /etc/kubernetes/kubeadm_config.yaml
filesystem: root
contents:
inline: |
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
bindPort: 443
nodeRegistration:
kubeletExtraArgs:
cloud-provider: "openstack"
cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v{{ .Machine.Spec.Versions.ControlPlane }}
networking:
serviceSubnet: {{ .ServiceCIDR }}
podSubnet: {{ .PodCIDR }}
dns:
type: CoreDNS
clusterName: kubernetes
controlPlaneEndpoint: ${MASTER}
apiServer:
extraArgs:
cloud-provider: "openstack"
cloud-config: "/etc/kubernetes/cloud.conf"
extraVolumes:
- name: cloud
hostPath: "/etc/kubernetes/cloud.conf"
mountPath: "/etc/kubernetes/cloud.conf"
- name: cacert
hostPath: "/etc/certs/cacert"
mountPath: "/etc/certs/cacert"
controllerManager:
extraArgs:
cluster-cidr: {{ .PodCIDR }}
service-cluster-ip-range: {{ .ServiceCIDR }}
allocate-node-cidrs: "true"
cloud-provider: "openstack"
cloud-config: "/etc/kubernetes/cloud.conf"
extraVolumes:
- name: cloud
hostPath: "/etc/kubernetes/cloud.conf"
mountPath: "/etc/kubernetes/cloud.conf"
- name: cacert
hostPath: "/etc/certs/cacert"
mountPath: "/etc/certs/cacert"
inline: '{{ .KubeadmConfig | EscapeNewLines }}'
user:
id: 0
group:
@@ -143,28 +99,8 @@ storage:

. /run/metadata/coreos

MASTER=""

echo "Trying to get the public IPv4 address."
if [[ -n "$COREOS_OPENSTACK_IPV4_PUBLIC" ]]; then
MASTER="$COREOS_OPENSTACK_IPV4_PUBLIC"
fi

if [[ -z "$MASTER" ]]; then
echo "Trying to get the local IPv4 address. (Try $i/60)"
if [[ -n "$COREOS_OPENSTACK_IPV4_LOCAL" ]]; then
MASTER="$COREOS_OPENSTACK_IPV4_LOCAL"
fi
fi

if [[ -z "$MASTER" ]]; then
echo "Falling back to localhost."
MASTER="localhost"
fi

MASTER="${MASTER}:443"

/usr/bin/sed -i "s#\${MASTER}#$MASTER#" /etc/kubernetes/kubeadm_config.yaml
echo "Replacing OPENSTACK_IPV4_LOCAL in kubeadm_config through ${COREOS_OPENSTACK_IPV4_LOCAL}"
/usr/bin/sed -i "s#\${OPENSTACK_IPV4_LOCAL}#${COREOS_OPENSTACK_IPV4_LOCAL}#" /etc/kubernetes/kubeadm_config.yaml
user:
id: 0
group:
@@ -3,19 +3,7 @@ storage:
- path: /etc/kubernetes/kubeadm_config.yaml
filesystem: root
contents:
inline: |
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
nodeRegistration:
kubeletExtraArgs:
cloud-provider: "openstack"
cloud-config: "/etc/kubernetes/cloud.conf"
discovery:
bootstrapToken:
apiServerEndpoint: {{ call .GetMasterEndpoint }}
token: {{ .Token }}
unsafeSkipCAVerification: true
tlsBootstrapToken: {{ .Token }}
inline: '{{ .KubeadmConfig | EscapeNewLines }}'
user:
id: 0
group: