
Chore/service certs #113

Merged: 10 commits from chore/service_certs into master on Dec 13, 2017

Conversation

frickjack (Contributor)

resolve #105

terraform launches the kube provisioner with an AWS instance profile that maps to a role with less-than-admin permissions. kube-aws and the AWS CLI acquire temporary credentials from the AWS metadata service associated with the kube provisioner's EC2 instance, so we no longer copy ~/.aws/credentials to the kube provisioner.
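
A hedged sketch of how to see this in action on the provisioner box (the role name below is a placeholder, not the name terraform actually assigns):

# list the instance-profile role attached to this EC2 instance via the metadata service
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# fetch the temporary credentials themselves (AccessKeyId, SecretAccessKey, Token, Expiration)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/kube-provisioner-role

# the AWS CLI and kube-aws resolve these automatically, so this reports the assumed
# role rather than a static IAM user from ~/.aws/credentials
aws sts get-caller-identity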

resolve #110

the tf_files/configs/kube-certs.sh script runs with kube-services.sh, and can also be run independently at any time to automatically create certificates (via the k8s CA configs in ~/VPC_NAME/credentials) and k8s secrets for k8s services discovered via grep'ing into the services/ folder.
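
A rough sketch of the per-service loop, assuming the CA material sits under ~/VPC_NAME/credentials as ca.pem / ca-key.pem and that secrets are named cert-<service>; the real logic (including the grep-based discovery) lives in tf_files/configs/kube-certs.sh:

#!/bin/bash
# hypothetical sketch - walk the services/ folder and issue a cert plus k8s secret for each
# (the real script discovers services by grep'ing; a plain ls is used here for brevity)
set -e
credDir="$HOME/$vpc_name/credentials"   # $vpc_name is a placeholder for the VPC name

for service in $(ls services/); do
  # key and CSR for the service
  openssl genrsa -out "$credDir/$service.key" 2048
  openssl req -new -key "$credDir/$service.key" -subj "/CN=$service" -out "$credDir/$service.csr"
  # sign the CSR with the cluster CA
  openssl x509 -req -in "$credDir/$service.csr" -CA "$credDir/ca.pem" -CAkey "$credDir/ca-key.pem" \
      -CAcreateserial -out "$credDir/$service.crt" -days 365
  # stash the pair in a secret the deployments can mount
  kubectl create secret generic "cert-$service" \
      --from-file=service.crt="$credDir/$service.crt" \
      --from-file=service.key="$credDir/$service.key"
done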

This patch also updates the various kube/*/*-deployment.yaml files to mount the appropriate SSL secrets under /mnt/ssl. The kube/README.md has more details.
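
For illustration, the shape of that change looks something like the patch below; the deployment, container, and secret names are placeholders rather than the exact values in kube/:

# hypothetical example - mount the cert-fence secret into a deployment at /mnt/ssl
kubectl patch deployment fence-deployment --patch "$(cat <<'EOF'
spec:
  template:
    spec:
      volumes:
        - name: cert-volume
          secret:
            secretName: cert-fence
      containers:
        - name: fence
          volumeMounts:
            - name: cert-volume
              readOnly: true
              mountPath: /mnt/ssl
EOF
)"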

resolve planx-misc-issue-tracker issue 15 - https://github.com/uc-cdis/planx-misc-issue-tracker/issues/15

the services/workspace/deploy_workspace.sh script creates:

  • a k8s namespace (named 'workspace')
  • a k8s user (named 'worker') that authenticates via a certificate signed by the k8s CA (under ~/VPC_NAME/credentials)
  • a role and rolebinding that grant the user permission to deploy services under the namespace
  • a kubeconfig_worker with a context configured to access the API as the new user
  • a kubeconfig_worker.tar.xz suitcase that can be dropped onto any VM with a route to the k8s API server and used to issue kubectl commands

There's a workspace/README.md with an overview.
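
Roughly, those steps look like the sketch below; file names, verbs, and flags are illustrative stand-ins, and the script itself is the source of truth:

#!/bin/bash
# hypothetical sketch of deploy_workspace.sh
set -e
credDir="$HOME/$vpc_name/credentials"   # $vpc_name and $KUBE_API_SERVER are placeholders

# namespace for the workspace
kubectl create namespace workspace

# client certificate for the 'worker' user, signed by the cluster CA
openssl genrsa -out "$credDir/worker.key" 2048
openssl req -new -key "$credDir/worker.key" -subj "/CN=worker" -out "$credDir/worker.csr"
openssl x509 -req -in "$credDir/worker.csr" -CA "$credDir/ca.pem" -CAkey "$credDir/ca-key.pem" \
    -CAcreateserial -out "$credDir/worker.crt" -days 365

# role + rolebinding so 'worker' can deploy under the namespace
kubectl create role worker-role --namespace=workspace \
    --verb=get,list,watch,create,update,patch,delete \
    --resource=deployments,services,pods,configmaps,secrets
kubectl create rolebinding worker-binding --namespace=workspace \
    --role=worker-role --user=worker

# kubeconfig_worker with a context that talks to the API as 'worker'
kubectl config --kubeconfig=kubeconfig_worker set-cluster workspace-cluster \
    --server="$KUBE_API_SERVER" --certificate-authority="$credDir/ca.pem" --embed-certs=true
kubectl config --kubeconfig=kubeconfig_worker set-credentials worker \
    --client-certificate="$credDir/worker.crt" --client-key="$credDir/worker.key" --embed-certs=true
kubectl config --kubeconfig=kubeconfig_worker set-context workspace \
    --cluster=workspace-cluster --namespace=workspace --user=worker
kubectl config --kubeconfig=kubeconfig_worker use-context workspace

# suitcase that can be copied to a VM with a route to the API server
tar -cJf kubeconfig_worker.tar.xz kubeconfig_worker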

Commit messages on this branch:

also tweak kube*.sh generation to avoid confusion between terraform template vars and bash vars

give limited access to k8s cluster to a 'worker' user
* use the kube-aws CA
* deployments mount the appropriate cert and CA
* need to wire up each service to use the cert, listen on https, connect over https, and register the CA with the trust store
* still need to test and document more
* tweak permissions in role attached to kube provisioner
* no_proxy for AWS metadata service at 169.254.169.254 (see the sketch after this list)
* encrypt backup
* no quotes around "~/" in .sh scripts

... that kind of thing
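
A hedged example of the no_proxy item above: when the provisioner sits behind an HTTP proxy, calls to the metadata service have to bypass it or the AWS CLI cannot fetch its temporary credentials (the proxy address is a placeholder):

# illustrative proxy setup - the metadata service must never go through the proxy
export http_proxy=http://cloud-proxy.internal:3128
export https_proxy=$http_proxy
export no_proxy=localhost,127.0.0.1,169.254.169.254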

Review comment (Contributor), on these lines:

    }

    statement {
      actions = [ "ec2:*" ]

Is this thing above redundant then?

Review comment (Contributor), on these lines:

    statement {
      actions = [
        "rds:*",
        "cloudwatch:DescribeAlarms",

We already allowed cloudwatch:*?

Review comment (Contributor), on these lines:

    }

    statement {
      effect = "Allow"

Can we get rid of this? Or at least reduce the scope of it? This will allow you to do everything...?

@zflamig (Contributor) commented Dec 11, 2017

Can you add port 443 to the *-service.yaml files in kube/services/ too please?

@frickjack (Contributor, Author)

Thanks, Zac!

provisioner permissions are still too broad - kube-aws has an open issue to more clearly specify the permissions needed for kube-aws: kubernetes-retired/kube-aws#90

services and deployments .yaml ready for TLS listeners
@frickjack (Contributor, Author)

Hey Zac,

I just pushed a patch:

  • added port 443 to the deployments and services (the sketch below shows the shape of the service change)
  • cleanup of the provisioner permissions
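
The shape of the port change on a service looks roughly like this; the service name and targetPort values are placeholders, not the exact values in kube/services/:

# hypothetical example - expose 443 alongside 80 on a k8s service
kubectl patch service fence-service --patch "$(cat <<'EOF'
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
EOF
)"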

The permissions on the provisioner are still very broad - basically 'admin', since it grants iam:* - so it's not really any better than what we were doing before (copying up the admin credentials.json), except that it gives us a path forward to trim down the permissions on the role (if we update the permissions, terraform apply updates the inline policy in place). Getting the right set of permissions for kube-aws is a project in itself - the kube-aws project actually has an open issue:
kubernetes-retired/kube-aws#90

Anyway - I'd like to just leave the provisioner permissions the way they are, since it's no worse than what we had been doing, and create a separate issue to narrow down the permissions. Another option is to wire up 'kube-up.sh' to run 'aws iam delete-role-policy ...' after kube-aws up finishes - which would either strip the provisioner's permissions down to nothing or put a more limited set of permissions in place after kube-aws has done its thing. What do you think?
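
A sketch of that second option, with placeholder role and policy names (the real names come from the terraform config, and the kube-aws invocation is abbreviated):

#!/bin/bash
set -e

kube-aws up --s3-uri "$S3_URI"   # provision the cluster as usual

# option A: strip the broad inline policy off the provisioner role entirely
aws iam delete-role-policy \
    --role-name kube-provisioner-role \
    --policy-name kube-provisioner-policy

# option B: swap in a narrower policy instead of deleting it outright
# aws iam put-role-policy --role-name kube-provisioner-role \
#     --policy-name kube-provisioner-policy --policy-document file://narrow-policy.json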

rename to kube-vars.sh.tpl to clarify it is a terraform template - not a .sh script - which should make codacy style-check happier :-)
frickjack merged commit a1107bf into master on Dec 13, 2017
frickjack deleted the chore/service_certs branch on December 13, 2017 at 23:15

Successfully merging this pull request may close these issues:

  • Create Code/Function to Create SSL Certs
  • Have terraform setup K8sProvider role