vCluster connect fails silently when connecting with --server flag #2383
Comments
@colinjlacy Thanks for raising the issue! I tried reproducing it as I understand it. I do see that we are able to connect to the vcluster even when it does not exist at the endpoint, but I am not seeing the kube-context still pointing to the host cluster context as you mentioned. Instead, I am seeing the context set to the vcluster we are trying to connect to; it's when we hit one of the …
What happened?
When I run
vcluster connect <vcluster-name> --server=<any-url>
I get a positive response, regardless of the actual result and whether or not a vCluster is even running at the URL that was entered. This happens whenever a matching vCluster is found in the current context, even if the --server value points to the vCluster control plane. I used Google just to prove a point, but the real problem comes in when trying to connect through a load balancer as explained here. I may be fooled into thinking that I'm working in a vCluster when I'm actually working in the host cluster. That could very easily result in resource conflicts or in deleting host resources entirely.
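For illustration, a quick way to double-check which cluster a kubectl session is really talking to:

# Show the active kubeconfig context and the API server URL it points at
kubectl config current-context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'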
What did you expect to happen?
Some error message indicating that the vCluster could not be reached at the specified server endpoint.
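For illustration, roughly the kind of reachability check I would expect before a success message is printed (not necessarily vcluster's actual code path; the URL is a placeholder):

# On a standard Kubernetes API server, /version is readable without auth,
# so even a simple probe like this would show that nothing is answering there
curl -ks https://<any-url>/version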
How can we reproduce it (as minimally and precisely as possible)?
Set up a Kind cluster and install a vCluster in the default namespace. I used Helm:
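Something along these lines (release name and chart version are illustrative; the version matches the vcluster version reported below):

# Install the vcluster chart into the default namespace
helm upgrade --install my-vcluster vcluster \
  --repo https://charts.loft.sh \
  --namespace default \
  --version 0.22.1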
Once everything is up and running, run:
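For example (vcluster name is a placeholder; any URL that is not a vCluster control plane, such as Google, demonstrates the point):

vcluster connect my-vcluster --server=https://www.google.com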
You should see output reporting a successful connection, even though nothing is running at that URL.
Now run kubectl config get-contexts, and you'll see that you're still using the host cluster context. It's worth noting that if you delete the vCluster:
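(again with a placeholder release name:)

vcluster delete my-vcluster --namespace default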
and then try to connect again, the attempt fails as expected.
Anything else we need to know?
No response
Host cluster Kubernetes version
Client Version: v1.32.0
Kustomize Version: v5.5.0
Server Version: v1.31.2
vcluster version
vcluster version 0.22.1
VCluster Config
controlPlane:
advanced:
defaultImageRegistry: ""
globalMetadata:
annotations: {}
headlessService:
annotations: {}
labels: {}
serviceAccount:
annotations: {}
enabled: true
imagePullSecrets: []
labels: {}
name: ""
virtualScheduler:
enabled: false
workloadServiceAccount:
annotations: {}
enabled: true
imagePullSecrets: []
labels: {}
name: ""
backingStore:
database:
embedded:
enabled: false
external:
caFile: ""
certFile: ""
connector: ""
dataSource: ""
enabled: false
keyFile: ""
etcd:
deploy:
enabled: false
headlessService:
annotations: {}
service:
annotations: {}
enabled: true
statefulSet:
annotations: {}
enableServiceLinks: true
enabled: true
env: []
extraArgs: []
highAvailability:
replicas: 1
image:
registry: registry.k8s.io
repository: etcd
tag: 3.5.15-0
imagePullPolicy: ""
labels: {}
persistence:
addVolumeMounts: []
addVolumes: []
volumeClaim:
accessModes:
- ReadWriteOnce
enabled: true
retentionPolicy: Retain
size: 5Gi
storageClass: ""
volumeClaimTemplates: []
pods:
annotations: {}
labels: {}
resources:
requests:
cpu: 20m
memory: 150Mi
scheduling:
affinity: {}
nodeSelector: {}
podManagementPolicy: Parallel
priorityClassName: ""
tolerations: []
topologySpreadConstraints: []
security:
containerSecurityContext: {}
podSecurityContext: {}
embedded:
enabled: false
migrateFromDeployedEtcd: false
coredns:
deployment:
affinity: {}
annotations: {}
image: ""
labels: {}
nodeSelector: {}
pods:
annotations: {}
labels: {}
replicas: 1
resources:
limits:
cpu: 1000m
memory: 170Mi
requests:
cpu: 20m
memory: 64Mi
tolerations: []
topologySpreadConstraints:
- labelSelector:
matchLabels:
k8s-app: vcluster-kube-dns
maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
embedded: false
enabled: true
overwriteConfig: ""
overwriteManifests: ""
priorityClassName: ""
service:
annotations: {}
labels: {}
spec:
type: ClusterIP
distro:
k0s:
command: []
config: ""
enabled: false
extraArgs: []
image:
registry: ""
repository: k0sproject/k0s
tag: v1.30.2-k0s.0
imagePullPolicy: ""
resources:
limits:
cpu: 100m
memory: 256Mi
requests:
cpu: 40m
memory: 64Mi
securityContext: {}
k3s:
command: []
enabled: false
extraArgs: []
image:
registry: ""
repository: rancher/k3s
tag: v1.31.1-k3s1
imagePullPolicy: ""
resources:
limits:
cpu: 100m
memory: 256Mi
requests:
cpu: 40m
memory: 64Mi
securityContext: {}
k8s:
apiServer:
command: []
enabled: true
extraArgs: []
image:
registry: registry.k8s.io
repository: kube-apiserver
tag: v1.31.1
imagePullPolicy: ""
controllerManager:
command: []
enabled: true
extraArgs: []
image:
registry: registry.k8s.io
repository: kube-controller-manager
tag: v1.31.1
imagePullPolicy: ""
enabled: false
env: []
resources:
limits:
cpu: 100m
memory: 256Mi
requests:
cpu: 40m
memory: 64Mi
scheduler:
command: []
extraArgs: []
image:
registry: registry.k8s.io
repository: kube-scheduler
tag: v1.31.1
imagePullPolicy: ""
securityContext: {}
version: ""
ingress:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
enabled: false
host: my-host.com
labels: {}
pathType: ImplementationSpecific
spec:
tls: []
proxy:
bindAddress: 0.0.0.0
extraSANs: []
port: 8443
service:
annotations: {}
enabled: true
httpsNodePort: 0
kubeletNodePort: 0
labels: {}
spec:
type: ClusterIP
serviceMonitor:
annotations: {}
enabled: false
labels: {}
statefulSet:
annotations: {}
args: []
command: []
enableServiceLinks: true
env: []
highAvailability:
leaseDuration: 60
renewDeadline: 40
replicas: 1
retryPeriod: 15
image:
registry: ghcr.io
repository: loft-sh/vcluster-pro
tag: ""
imagePullPolicy: ""
labels: {}
persistence:
addVolumeMounts: []
addVolumes: []
binariesVolume:
- emptyDir: {}
name: binaries
dataVolume: []
volumeClaim:
accessModes:
- ReadWriteOnce
enabled: auto
retentionPolicy: Retain
size: 5Gi
storageClass: ""
volumeClaimTemplates: []
pods:
annotations: {}
labels: {}
probes:
livenessProbe:
enabled: true
readinessProbe:
enabled: true
startupProbe:
enabled: true
resources:
limits:
ephemeral-storage: 8Gi
memory: 2Gi
requests:
cpu: 200m
ephemeral-storage: 400Mi
memory: 256Mi
scheduling:
affinity: {}
nodeSelector: {}
podManagementPolicy: Parallel
priorityClassName: ""
tolerations: []
topologySpreadConstraints: []
security:
containerSecurityContext:
allowPrivilegeEscalation: false
runAsGroup: 0
runAsUser: 0
podSecurityContext: {}
workingDir: ""
experimental:
deploy:
host:
manifests: ""
manifestsTemplate: ""
vcluster:
helm: []
manifests: ""
manifestsTemplate: ""
genericSync:
clusterRole:
extraRules: []
role:
extraRules: []
isolatedControlPlane:
headless: false
multiNamespaceMode:
enabled: false
syncSettings:
disableSync: false
rewriteKubernetesService: false
setOwner: true
targetNamespace: ""
exportKubeConfig:
context: ""
insecure: false
secret:
name: ""
namespace: ""
server: ""
serviceAccount:
clusterRole: ""
name: ""
namespace: ""
external: {}
integrations:
certManager:
enabled: false
sync:
fromHost:
clusterIssuers:
enabled: true
selector:
labels: {}
toHost:
certificates:
enabled: true
issuers:
enabled: true
externalSecrets:
enabled: false
sync:
clusterStores:
enabled: false
selector:
labels: {}
externalSecrets:
enabled: true
stores:
enabled: false
webhook:
enabled: false
kubeVirt:
enabled: false
sync:
dataVolumes:
enabled: false
virtualMachineClones:
enabled: true
virtualMachineInstanceMigrations:
enabled: true
virtualMachineInstances:
enabled: true
virtualMachinePools:
enabled: true
virtualMachines:
enabled: true
webhook:
enabled: true
metricsServer:
enabled: false
nodes: true
pods: true
networking:
advanced:
clusterDomain: cluster.local
fallbackHostCluster: false
proxyKubelets:
byHostname: true
byIP: true
replicateServices:
fromHost: []
toHost: []
resolveDNS: []
plugins: {}
policies:
centralAdmission:
mutatingWebhooks: []
validatingWebhooks: []
limitRange:
annotations: {}
default:
cpu: "1"
ephemeral-storage: 8Gi
memory: 512Mi
defaultRequest:
cpu: 100m
ephemeral-storage: 3Gi
memory: 128Mi
enabled: auto
labels: {}
max: {}
min: {}
networkPolicy:
annotations: {}
enabled: false
fallbackDns: 8.8.8.8
labels: {}
outgoingConnections:
ipBlock:
cidr: 0.0.0.0/0
except:
- 100.64.0.0/10
- 127.0.0.0/8
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
platform: true
resourceQuota:
annotations: {}
enabled: auto
labels: {}
quota:
count/configmaps: 100
count/endpoints: 40
count/persistentvolumeclaims: 20
count/pods: 20
count/secrets: 100
count/services: 20
limits.cpu: 20
limits.ephemeral-storage: 160Gi
limits.memory: 40Gi
requests.cpu: 10
requests.ephemeral-storage: 60Gi
requests.memory: 20Gi
requests.storage: 100Gi
services.loadbalancers: 1
services.nodeports: 0
scopeSelector:
matchExpressions: []
scopes: []
rbac:
clusterRole:
enabled: auto
extraRules: []
overwriteRules: []
role:
enabled: true
extraRules: []
overwriteRules: []
sync:
fromHost:
csiDrivers:
enabled: auto
csiNodes:
enabled: auto
csiStorageCapacities:
enabled: auto
events:
enabled: true
ingressClasses:
enabled: false
nodes:
clearImageStatus: false
enabled: false
selector:
all: false
labels: {}
syncBackChanges: false
priorityClasses:
enabled: false
runtimeClasses:
enabled: false
storageClasses:
enabled: auto
volumeSnapshotClasses:
enabled: false
toHost:
configMaps:
all: false
enabled: true
endpoints:
enabled: true
ingresses:
enabled: false
networkPolicies:
enabled: false
persistentVolumeClaims:
enabled: true
persistentVolumes:
enabled: false
podDisruptionBudgets:
enabled: false
pods:
enabled: true
enforceTolerations: []
rewriteHosts:
enabled: true
initContainer:
image: library/alpine:3.20
resources:
limits:
cpu: 30m
memory: 64Mi
requests:
cpu: 30m
memory: 64Mi
translateImage: {}
useSecretsForSATokens: false
priorityClasses:
enabled: false
secrets:
all: false
enabled: true
serviceAccounts:
enabled: false
services:
enabled: true
storageClasses:
enabled: false
volumeSnapshotContents:
enabled: false
volumeSnapshots:
enabled: false
telemetry:
enabled: true