Improve EKS Onboarding Experience #44

Closed
mrichman opened this issue Dec 12, 2018 · 15 comments
Labels: EKS (Amazon Elastic Kubernetes Service), Proposed (Community submitted issue)

Comments

@mrichman

First off, I love the idea of opening up this roadmap!

If you compare EKS's first-time user experience with that of ECS, or even with competing managed Kubernetes offerings (e.g. GKE), it becomes quite obvious that EKS has a lot of room for improvement.

The first issue that hit me as a new EKS user (and EKS instructor) is the onboarding experience. In other words, a newcomer's path through creating a new cluster, and knowing what to do once that's complete, is laborious, error-prone, and fraught with confusion.

Creating an EKS cluster should be at least as easy as in ECS. I shouldn't necessarily have to have pre-planned my VPC topology, IAM roles, etc. I also shouldn't have to know CloudFormation in order to hit the ground running.

CLI tools like eksctl are a great step forward in simplifying and demystifying the EKS experience. It would be wonderful to see AWS take EKS more seriously and put some effort into the Management Console experience for new users.

I'd love to see a one-click install option for new users, including a "Download kubeconfig" button that would spare the newcomer from having to run aws eks update-kubeconfig.
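
For context, this is the manual step such a button would replace (a minimal sketch; the cluster name and region below are placeholders):

    # Merge credentials for the new cluster into ~/.kube/config
    # ("my-cluster" and "us-east-1" are placeholder values)
    aws eks update-kubeconfig --name my-cluster --region us-east-1

    # Verify connectivity
    kubectl get nodes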

In addition to prompting to add a worker node group in one step at cluster creation time, EKS should also offer the option to install the Kubernetes dashboard.

Bonus points for optionally creating a new VPC and subnets at cluster creation time too.

I'd be happy to split up the above suggestions into discrete issues if that makes more sense.

Thanks for listening!

@abby-fuller added the Proposed (Community submitted issue) label Dec 12, 2018
@philoserf

philoserf commented Dec 14, 2018

@weaveworks eksctl is an example of a good experience.
gcloud container clusters create also does the job.
aws eks create-cluster is a surprise to some, as it delivers an incomplete vision of a cluster (it creates only the control plane).

ref: https://github.com/weaveworks/eksctl
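
For comparison, here is what cluster creation looks like with each tool (a sketch; names, regions, ARNs, and subnet IDs are placeholder values):

    # eksctl: one command creates the VPC, IAM roles, control plane, and a node group
    eksctl create cluster --name demo --region us-east-1

    # gcloud: likewise a single command on GKE
    gcloud container clusters create demo --zone us-central1-a

    # aws CLI: creates only the control plane; the IAM role and subnets
    # must already exist (ARN and subnet IDs below are placeholders)
    aws eks create-cluster \
      --name demo \
      --role-arn arn:aws:iam::123456789012:role/eksClusterRole \
      --resources-vpc-config subnetIds=subnet-0abc1234,subnet-0def5678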

@tabern
Contributor

tabern commented Nov 15, 2020

Merged in #421. A quick note: this functionality will be delivered in stages, so we will open the requisite linked issues to track it as we go.

@Maxwell2022

Maxwell2022 commented Feb 11, 2022

I just want to give feedback on my experience working with EKS. I was new to Kubernetes at the time and had originally built a POC with Cloud Run on GKE.

To summarise, everything is much more complicated with EKS (Fargate): what you need out of the box is not there, you have to install and configure everything yourself, and on top of that there are a lot of limitations when using Fargate.

There is a long list of things that come out of the box with GKE (or are natively supported by Kubernetes) that you have to install and configure yourself in EKS (Fargate).

All in all, for small companies that do not have a dedicated infrastructure team, it's overwhelming to manage all of this. I can't even imagine what it will be like when we have to update the Kubernetes cluster to a newer version. We stuck with AWS to run our Kubernetes cluster because we already had all our applications and datastore running there, but looking back, it would probably have been simpler to move the datastore to Google Cloud.

My feeling is that EKS is very far behind other managed Kubernetes solutions such as those offered by Azure and GCP. These other solutions seem to stay closer to the open Kubernetes standard without introducing extra complexity or proprietary components around it.

@mreferre

Thanks for the feedback @Maxwell2022. I think "a more curated out of the box experience" and "Fargate limitations" are two orthogonal topics (equally important).

If we focus on the former, I am wondering what you think about this (we released it a few days ago): https://aws.amazon.com/blogs/containers/bootstrapping-clusters-with-eks-blueprints/

Yes, this is not yet a "one click" solution in the console (we are exploring that), but I personally think that if you are using Kubernetes for more than kicking the tires, a proper IaC setup is more than desirable/required (to use an understatement).
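
As a rough sketch of what bootstrapping a cluster from the Blueprints looks like (the repository URL is the real aws-ia project; the specific pattern directory chosen here is illustrative and may differ between releases):

    # Clone the Terraform EKS Blueprints patterns and apply one of them
    git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
    cd terraform-aws-eks-blueprints/patterns/karpenter

    # Review the plan, then create the cluster and its addons
    terraform init
    terraform plan
    terraform apply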

@andrewmclagan

andrewmclagan commented Apr 27, 2022

Thanks @mreferre, these Terraform blueprints go a long way toward solving many of the issues with managing K8s clusters on AWS. I strongly agree: a solid IaC setup is essential when managing complex deployments such as K8s. Other providers such as GCP and Azure have had such capabilities for years. Thank you for all your hard work! But I agree with @Maxwell2022 that the K8s experience on AWS is far below that of competitors. There seems to be a strong push at AWS for proprietary models over open source.

I will also mention that one of the most significant struggles with K8s on AWS is the documentation. Currently it is fragmented, hard to find, sometimes out of date, and, frustratingly, the correct docs are often embedded in blog posts. Even for seasoned K8s engineers, finding up-to-date information is at best difficult and time consuming, which simply creates a bad user experience for the product. The comparison to competitors is jarring.

@mreferre

Thanks @andrewmclagan for the candid feedback. We surely can do better and we will strive to do better.

I just want to point out that there isn't really an organic strong push for cloud-native services vs open-source-based services. While there are definitely people who have their own opinions regarding what's best for our customers (and we have them in both camps), our stance is that the customer is king and we want to serve whatever they need. This is not an engineering-dense blog post, but it lays out at a high level how we think about this exact matter.

Also, if I can make a joke, based on your theory the ECS docs would be top-notch... and I wish we were there (we perhaps receive more "candid feedback" for ECS than for EKS) ;)

Anyway... rant over. In the end, competition is good for everyone! Thanks again for the feedback, keep it coming.

@stevehipwell

@andrewmclagan I'm going to have to assume you haven't actually used Terraform with Azure, especially for AKS? The two aren't even close enough in capability and reliability to compare; Azure's fundamental architecture and API design make IaC a fight against the tide. It's also worth pointing out that EKS is a managed Kubernetes control plane while AKS is a managed Kubernetes cluster (I don't work with GKE so I won't attempt to categorise it); that might sound like semantics, but it makes a comparison of the two apples and oranges. Fundamentally, because of the great power of EKS, you do have to do a bit more work to get it going, but the result is worth it.

I will completely agree with you that the documentation and component provenance don't match the high quality of the EKS product. Hopefully this is an area that will see significant investment in the next few months.

@sukoneck

@Maxwell2022 nailed it. I ran into an issue with the secrets driver that highlights the overall experience for me:

  1. The secrets controller had to be installed outside of Terraform (like the load balancer, logging, etc.); a sketch of that manual install is below
  2. The only recommended pattern (in the EKS and Secrets Manager docs) is for a driver that's incompatible with Fargate
  3. This footnote is the only place I've found this documented, and it's disproportionately small for how important it is

The managed addons are a good start.
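
For reference, the manual installation in point 1 typically looks something like this (a sketch based on the Secrets Store CSI Driver docs; the release name and namespace are illustrative):

    # Install the Kubernetes Secrets Store CSI Driver via Helm
    helm repo add secrets-store-csi-driver \
      https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
    helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
      --namespace kube-system

    # Install the AWS provider for the driver
    kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml

    # Note: the driver runs as a DaemonSet, which is why it is
    # incompatible with Fargate (Fargate does not support DaemonSets)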

@andrewmclagan

I'm going to have to assume you haven't actually used Terraform with Azure, especially for AKS?

You got me ;-) totally haven't

@johnkeates

johnkeates commented Dec 12, 2022

I'd like to add that some users don't need any pre-added resources and would rather have as bare a managed control plane as possible. Please make sure that whatever is added to the EKS offering doesn't force users like me to do extra work removing "default" things we really don't want.

Perhaps the happy medium would be a preconfigured CloudFormation stack for those who have a pet cluster, leaving the base EKS offering as cattle for the rest of us :)

Along the same lines as Steve and Andrew: we really love the API-driven way we can compose our infrastructure, and having Terraform to construct massive systems is a major benefit to us. AWS (and to a lesser extent GCP) is one of the very few options that doesn't pull "a special API for me, but not for thee" on us like Azure does. It's almost as if they don't trust their own software: they hide half of their stuff behind a GUI and a "secret" API, and then offer some sort of "second hand, slightly used" public API that doesn't really do what it needs to do.

@mikestef9
Contributor

mikestef9 commented Dec 12, 2022

@johnkeates we hear you. This is always a delicate balance to strike in a product like EKS, where the majority of our existing customer base prefers full control and customization, whereas newer and potential customers aren't Kubernetes experts at the same level and need simpler controls. Any newer features centered on simplifying the getting-started experience will always be API-driven first and, as much as possible, optional and opt-in.

@stevehipwell

@mikestef9 this is where #923 (bare EKS cluster) comes in; is there any progress on this?

@mikestef9
Contributor

We will get to that next year, as it's blocking a few other features we need to launch.

@stevehipwell

That's great news @mikestef9, especially if it comes with a solution to the RBAC issue that allows RBAC (or at least cluster admin) to be controlled at the AWS API level.

@mikestef9
Contributor

mikestef9 commented Dec 10, 2024

The launch of EKS Auto Mode capped off a number of improvements we've made to the EKS onboarding experience, and we believe it's time to close this issue out.

With Auto Mode and the built-in managed NodePools, you can create an application-ready cluster (with nothing actually running inside the cluster!) and simply deploy your applications, with compute and node autoscaling, block storage, load balancing, pod networking, service networking, and cluster DNS all included and managed by default.
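
As a minimal sketch of that workflow, assuming a recent eksctl release with Auto Mode support (the flag name and defaults may vary by version; the cluster name and manifest are placeholders):

    # Create an EKS Auto Mode cluster; compute, storage, and load
    # balancing capabilities are managed by EKS
    eksctl create cluster --name my-auto-cluster --enable-auto-mode

    # Deploy applications directly; the built-in managed NodePools
    # provision nodes on demand
    kubectl apply -f my-app.yaml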

Along with EKS Auto Mode, we've introduced a new quick create experience in the Console. The only inputs are:

  1. Cluster name (pre-generated for you if you want to use our suggestion)
  2. K8s version (defaulted to latest)
  3. Cluster role and node role (both with deep links to the IAM Console quick create experience with pre-selected recommended defaults)
  4. VPC and subnets (EKS will automatically select a compatible VPC in the account and its private subnets following best practices; if no VPC is present, you can use the deep link to the new VPC Console quick create experience)

The only change to the VPC Console's pre-selected defaults recommended for EKS compatibility is to add at least one NAT Gateway so nodes in private subnets can communicate with public endpoints.

This new workflow is documented in the AWS News launch blog for Auto Mode, but of course we encourage you to go and try things out for yourself!

Some future improvements we are tracking:

  • EKS API to optionally install default Ingress and Storage classes for Auto Mode. This will further simplify the getting started experience, so you don't need to manually apply those resources before deploying workloads that require block storage and/or load balancers (a sketch of those manifests follows this list).
  • Terminal/shell in the EKS Console. An alternative (although not as user friendly) improvement would be a "download kubeconfig file" button in the Console.
  • [EKS]: Reduction in EKS cluster creation time #1227
  • Pod Identity support for the CloudWatch observability agent (coming soon!)
  • EKS [request]: Support metrics-server in EKS add-ons #261 (coming soon!)
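
For reference, the resources the first bullet would make optional look roughly like this today (a sketch following the Auto Mode documentation's provisioner and controller names; the class names themselves are illustrative):

    # Default StorageClass and IngressClass for an EKS Auto Mode cluster
    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: auto-ebs
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: ebs.csi.eks.amazonaws.com   # Auto Mode's managed EBS provisioner
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: networking.k8s.io/v1
    kind: IngressClass
    metadata:
      name: alb
    spec:
      controller: eks.amazonaws.com/alb      # Auto Mode's managed load balancing controller
    EOF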

github-project-automation (bot) moved this to Researching in containers-roadmap Dec 10, 2024
@mikestef9 moved this from Researching to Shipped in containers-roadmap Dec 10, 2024