
jx boot failing creating IRSA managed Service Accounts #5786

Closed
1 of 2 tasks
andrewhertog opened this issue Oct 11, 2019 · 14 comments
Labels
area/eks support for AWS EKS kind/bug Issue is a bug lifecycle/rotten priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@andrewhertog

andrewhertog commented Oct 11, 2019

Summary

Running jx boot on a pre-built EKS cluster fails with the following output:

attempting to lazily create the IAM Role for Service Accounts permissions
Enabling IRSA for cluster andrew-test-dev associating the IAM Open ID Connect provider
[ℹ]  eksctl version 0.7.0
[ℹ]  using region us-east-1
[ℹ]  (plan) would create IAM Open ID Connect provider for cluster "andrew-test-dev" in "us-east-1"
[!]  no changes were applied, run again with '--approve' to apply the changes
error: error creating the IRSA managed Service Accounts: there was a problem creating the policies stack and returning the template values: open eks/templates/jenkinsx-policies.yml: no such file or directory
error: failed to interpret pipeline file jenkins-x.yml: failed to run '/bin/sh -c jx step verify preinstall' command in directory 'env', output: ''

Steps to reproduce the behavior

NAME               VERSION
jx                 2.0.856
Kubernetes cluster v1.14.7-eks-e9b1d0
kubectl            v1.16.1
helm client        v2.13.1+g618447c
git                2.23.0

eksctl 0.7.0

Command run: jx boot

Expected behavior

Successfully run jx boot

Actual behavior

Exit code 1 running jx boot

Jx version

The output of jx version is:

jx version
WARNING: Failed to retrieve team settings: failed to setup the dev environment for namespace 'jx': the server could not find the requested resource (post environments.jenkins.io) - falling back to default settings...
NAME               VERSION
jx                 2.0.856
Kubernetes cluster v1.14.7-eks-e9b1d0
kubectl            v1.16.1
helm client        v2.13.1+g618447c
helm server        v2.14.1+g5270352
git                2.23.0
Operating System   Mac OS X 10.14.5 build 18F132


verifying packages
error: failed to load TeamSettings: failed to setup the dev environment for namespace 'jx': the server could not find the requested resource (post environments.jenkins.io)

Jenkins type

  • Serverless Jenkins X Pipelines (Tekton + Prow)
  • Classic Jenkins

Kubernetes cluster

EKS 1.14 created via Terraform (see https://github.com/terraform-aws-modules/terraform-aws-eks)

Operating system / Environment

Mac OS X 10.14.5

Edit: fix formatting

Update: Tested with eksctl 0.6.0 and had the same results

@ww-daniel-mora

Running into the same issue. It looks like there is a missing file, but I cannot find documentation on what that file should look like.

@gazal-k

gazal-k commented Oct 11, 2019

Ran into the same problem after updating to a newer jx release. Looks like it's related to this: #5704. Using v2.0.838 seems to work, at least past that point.

@gazal-k

gazal-k commented Oct 12, 2019

Looks like we may just have to fetch the new changes for the boot config repo: jenkins-x/jenkins-x-boot-config#74. Getting the same error with v1.0.26 of the boot config.

@RicoToothless

Same issue here, and I think IRSA could be an optional config.

@ww-daniel-mora

Okay, I've done a bit of digging into this and there is definitely a bug. Getting started took a while because I did not realize that jx shells out to jx, so if your PATH does not put your local build of the executable first, you will never see your changes reflected. Here is my current understanding of the problem:

  1. jx boot will execute verify-preinstall as a step in the env directory:
STEP: verify-preinstall command: /bin/sh -c jx step verify preinstall in dir: env
  2. This call will hit jx/pkg/cmd/step/verify/step_verify_preinstall.go, which creates a StepVerifyPreInstallOptions struct. The struct has a ProviderValuesDir value which is empty.
  3. This call will hit createPoliciesStack in jx/pkg/cloud/amazon/permissions.go, which uses the ProviderValuesDir to construct a path to the policy template like so:
eksKubeProviderDir := filepath.Join(kubeProvidersDir, cloud.EKS, ConfigTemplatesFolder)
...
policiesFilePath := filepath.Join(eksKubeProviderDir, PoliciesTemplateName)

Since kubeProvidersDir is empty, this path resolves to env/eks/templates/jenkinsx-policies.yml. Note that the execution started in the env directory and the path is resolved relative to it.


Potential solutions

Set the providers dir when we call the step

jx makes a shell call into itself, so we could just add flags to that call to set the correct providers dir. This option feels less invasive and more in keeping with the current design, unless the maintainers have a different opinion.

Default to the correct providers dir

Since the kube providers dir is optional, we could select a more reasonable default, like the kubeProviders dir from the jx boot template repository.

@ww-daniel-mora

Okay, I failed to understand what jx boot was fundamentally doing. As I understand it now, jx boot looks for a file called jenkins-x.yml which defines the steps it takes. So this is actually a bug in the boot repository and potentially an enhancement to jx.

  1. The yml file in the boot repo should pass the correct kubeProviders directory and run from the correct directory.
  2. jx should either require --provider-values-dir to be set or provide a reasonable default.

I can address the first issue shortly.

@ww-daniel-mora

Have a look here, folks, and verify that this change addresses your issue: jenkins-x/jenkins-x-boot-config#81

@abayer
Contributor

abayer commented Oct 15, 2019

cc @dgozalo

@daveconde daveconde added area/eks support for AWS EKS kind/bug Issue is a bug priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Oct 15, 2019
@Larzack

Larzack commented Oct 19, 2019

Same issue for me

@babadofar

Managing IRSA works for me now, I believe the fix for this has been merged.

@jenkins-x-bot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle stale

@jenkins-x-bot
Contributor

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle rotten
