"Failed to get existing workspaces" #791
Comments
cc: @lkysow
Hmm, can you check that when you exec in you're running as the atlantis user?
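For anyone else debugging this, a minimal way to check is sketched below. The pod name `atlantis-0` and the shell commands are assumptions based on the reference StatefulSet, not something from this thread:

```sh
# Assumes the StatefulSet pod is named atlantis-0; adjust to your deployment.
# Prints which user an interactive shell runs as, its home directory, and
# whether a .aws directory is visible under that home.
kubectl exec -it atlantis-0 -- sh -c 'whoami; echo "$HOME"; ls -la "$HOME/.aws"'
```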
@tomesco's coworker here. Note that we also have EC2 roles applied to the AWS nodes, but we are attempting to use ~/.aws/credentials instead, since we're running Kubernetes on AWS and have no method to maintain EC2 roles at the pod level just yet. We found another issue we think could be related: hashicorp/aws-sdk-go-base#7
TF_LOG trace
Resolved, thank you @lkysow! It seems that when we mounted the credentials to {$HOME}/.aws, it was as root, which was a different location than the atlantis user's {$HOME}.
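For reference, that kind of mismatch can be confirmed with something like the sketch below. The pod name `atlantis-0` and the path /home/atlantis are assumptions based on the default image layout (an atlantis user with home under /home/atlantis), not details confirmed in this thread:

```sh
# Assumed pod name atlantis-0 and default image layout (atlantis user with
# home /home/atlantis). Compares where the credentials were actually mounted
# against where the atlantis user's AWS SDK will look for them.
kubectl exec -it atlantis-0 -- sh -c 'ls -la /root/.aws /home/atlantis/.aws'
```

If the credentials show up under /root/.aws but not under the atlantis user's home, the volume mount needs to target the atlantis user's home directory instead.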
Thanks for the follow-up, Thomas. Is this a bug with the helm chart?
Can't comment on the helm chart, as we manually configured and deployed the manifest.
Ahh, I see, I missed your comment that you were using the raw StatefulSet.
Running Atlantis on Kubernetes, Terraform version 0.12.9. We're using explicit AWS credentials in ~/.aws/credentials instead of an AWS instance role, and the Kubernetes StatefulSet from here.
When we run `atlantis plan` on a PR we get the "Failed to get existing workspaces" error. When we cd to `/atlantis/repos/AdaSupport/infrastructure/25/default/terraform/pre-production/_global` and run `/usr/local/bin/terraform init -input=false -no-color -upgrade` on the pod itself, we see no errors. Any idea what could be causing this?