CAAPH reports log 'context "default-context" does not exist' while preparing to patch HelmReleaseProxy #199
Hi,
I'm a newbie with the Cluster API operator and I'm facing an error log that appears continuously in CAAPH (addon-helm).
I'm testing Cluster API with the OpenStack infrastructure provider, which can provision workload clusters, and I'm using HelmChartProxy to deploy a CNI (Calico and Cilium) to the target workload clusters.
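For reference, a HelmChartProxy that deploys Calico in a setup like this looks roughly as follows. This is a minimal sketch: the names, repo URL, and the `cni: calico` selector label are illustrative assumptions, not the exact manifest from this report.

```yaml
# Minimal HelmChartProxy sketch for installing Calico as the CNI.
# All names and the clusterSelector label here are illustrative.
apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: HelmChartProxy
metadata:
  name: calico
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: calico             # workload Clusters carrying this label get the chart
  repoURL: https://docs.tigera.io/calico/charts
  chartName: tigera-operator
  releaseName: calico
  namespace: tigera-operator  # target namespace on the workload cluster
```

CAAPH then creates one HelmReleaseProxy per matching workload cluster and installs the release through that cluster's kubeconfig.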
However, the CAAPH pod continuously reports an error log about getting the kubeconfig for the cluster.
Below are the logs I collected:
These error logs started appearing right after the first successful deployment of the HelmChartProxy to the target workload cluster. They seem to come from func KubeconfigGetter.GetClusterKubeconfig, but I'm not sure.
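I'm not certain this is the cause, but the context name in the error looks suspicious: the kubeconfig that Cluster API stores in the `<cluster-name>-kubeconfig` secret typically names its context `<cluster>-admin@<cluster>`, not `default-context`. A sketch of the decoded secret's contexts section, with `my-cluster` as an illustrative cluster name:

```yaml
# Decoded from the CAPI-generated secret (illustrative cluster name "my-cluster"):
#   kubectl get secret my-cluster-kubeconfig -o jsonpath='{.data.value}' | base64 -d
# A lookup for a context literally named "default-context" would fail against this.
contexts:
- context:
    cluster: my-cluster
    user: my-cluster-admin
  name: my-cluster-admin@my-cluster
current-context: my-cluster-admin@my-cluster
```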
Although I tested updating Helm values via the HelmChartProxy of the target workload cluster, and checked the revision of the corresponding HelmReleaseProxy (in the matching namespace) and the ConfigMap on the target workload cluster, everything still updates successfully. I'm still not sure whether the logs above could affect the Helm chart's lifecycle, or the CNI values on the target workload cluster, in the future.
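The value updates I tested were edits to the HelmChartProxy's spec.valuesTemplate, which CAAPH re-renders and rolls out as a new revision of the matching HelmReleaseProxy. A minimal illustrative fragment (the Calico values are adapted from the kind of example CAAPH's docs use, not my exact values):

```yaml
# Illustrative spec fragment to merge into the Calico HelmChartProxy above.
# Editing valuesTemplate triggers a new Helm release revision on the
# workload cluster via the matching HelmReleaseProxy.
spec:
  valuesTemplate: |
    installation:
      cni:
        type: Calico
      calicoNetwork:
        ipPools:
        - cidr: {{ first .Cluster.spec.clusterNetwork.pods.cidrBlocks }}
          encapsulation: VXLAN
```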
Am I missing some configuration, or can these logs be ignored for now? Thanks
Environment:
- Kubernetes version (use `kubectl version`): v1.28.4
- OS (e.g. from `/etc/os-release`): Ubuntu 20.04.4

/kind bug
/area logging
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@phuongvtn I believe the kubeconfig logic was changed in #248. Could you try it and see if you're still having the same issue?