Make it possible to use a pre-created private network #762
Comments
I just came across this comment. Are you using Cilium Host Firewall to lock down etcd and other services on your control-plane nodes from external access?
I would also be interested in using a pre-created hcloud network, as I would like to create a NAT gateway in the same network beforehand, so that the newly created k8s nodes can use this NAT gateway to reach the internet.
Do you think this could be done by just implementing this in the network.create and network.delete functions?
It would be useful for me as well to be able to specify an existing network at the HetznerCluster level.
I also had a use case for exactly this. Before creating the workload cluster with CAPH, I pre-created a network (having the CAPH
I think the best solution would be something like the API load balancer, where you can customize the natGateway and the provider will create a small server (or multiple in an HA configuration) that is used as the NAT gateway. Example:
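A sketch of what such an option could look like on the HetznerCluster resource. Note that the `natGateway` block and all of its sub-fields are hypothetical and do not exist in the current CAPH API; the `hcloudNetwork` fields are shown as commonly documented but should be checked against the actual API version in use:

```yaml
# Hypothetical extension of the HetznerCluster spec; the natGateway
# block below is only a sketch of the proposed option, not a real field.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: HetznerCluster
metadata:
  name: my-cluster
spec:
  hcloudNetwork:
    enabled: true
    cidrBlock: 10.0.0.0/16
  natGateway:          # hypothetical field
    enabled: true
    type: cpx11        # server type for the gateway machine
    replicas: 2        # multiple servers for an HA configuration
```

Modeling it this way would keep the gateway tied to the cluster lifecycle, analogous to how the provider already manages the API load balancer.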
I know that this is outside of the CAPI provider scope, but I think this is the "best" and "cleanest" solution until Hetzner offers NAT gateways as a service (see current job offers suggesting that something is coming).
Yes, this seems to be a much cleaner solution, especially the fact that the network and the gateway will then be tied to the lifecycle of the cluster. (But, IIRC, at least CAPA also offers the option to "bring your own network" by specifying the ID of an existing one.) As you said, it is also a matter of scope.

The only advantage of the naive solution, which just picks up an existing network, is that the code base of CAPH remains small (no extra server that must be configured for IP forwarding, no network routes, etc.), with the disadvantage that the network can then be disjoint from the lifecycle of the cluster.

Another yet undefined aspect is what a NAT gateway that suits all needs would look like. To avoid simply assuming a type that probably won't fit every use case, this might require making the NAT gateway configurable via cloud-init. For example, in my personal use case the NAT gateway also runs WireGuard, which allows the management cluster (running somewhere else) to connect to the internal IP of the API server without the need for a load balancer. Or we simply don't make the NAT gateway configurable and assume a minimal "best fit". If there is a need for additional things (like the WireGuard setup I just mentioned), users would be responsible for them themselves, outside of the scope of CAPH. Which brings us back to the initial "problem" of whether this server should be in the same network as the CAPH machines 🙂

Would be really interesting to hear what others think and to see if there is an actual need for this.
@johannesfrey Maybe one compromise could be that a NatGateway spec is added in the provider to keep everything together, but the cloud-init for that server is read (or can be read, with a small default) from a ConfigMap, so that custom user configurations are possible.
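To illustrate the compromise, here is a minimal sketch of such a user-supplied ConfigMap. The ConfigMap name and data key the controller would read are assumptions, not an existing CAPH contract; the cloud-config itself just shows the minimum a NAT gateway needs (IP forwarding plus masquerading):

```yaml
# Hypothetical ConfigMap holding the cloud-init for the NAT gateway server.
# Name and key are assumed conventions; CAPH does not currently read this.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cluster-natgateway-cloudinit
data:
  cloud-init: |
    #cloud-config
    write_files:
      - path: /etc/sysctl.d/99-ip-forward.conf
        content: |
          net.ipv4.ip_forward = 1
    runcmd:
      - sysctl --system
      # Masquerade traffic from the private network out the public interface
      - iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -o eth0 -j MASQUERADE
```

Users who need extras (such as the WireGuard setup mentioned above) could then extend this cloud-config themselves, while the provider ships only the small default.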
I could try to work on it, but it would be my first "real" kubebuilder project and I don't have much time over the next few weeks, especially for writing all the unit and e2e tests (the API changes and the reconcile loop could be done very fast, I guess). So it would be interesting to hear from one of the code owners here whether such PRs would be merged or if they are "out of scope". @batistein, you have interacted with other PRs discussing networking in the past; what do you think? (Sorry for pinging one of you, I just want to know if this discussion is in scope and can be worked on, or if there are better solutions.)
@simonostendorf thanks for driving this forward. I would be more than happy to support on this, if you want. The only thing is that I am also gone for the next two and a half weeks, and afterwards I'm only able to support in my spare time. But yeah, let's see if there are some other perspectives on this subject before diving in 😊.
@johannesfrey Would be happy if we could solve the NAT topic. I forked this repo and extended the API and controller logic. There are some things missing or marked with a TODO for now, but I will implement them in a few days (if I find enough time). After that, only the Go unit tests (for the natgateway service and for the changed hcloud-controller) and the e2e tests (to create a cluster with a NAT gateway) are missing, but I am not familiar enough with Go and the CAPH logic to know what I need to change there. Maybe you could inspect it and commit something on top of my changes? I couldn't find the time to test the changes yet, but I will edit this comment once I have tested them. You can find my changes inside my fork, but I could also create a draft pull request to discuss the topic with others.
Hi :) We are evaluating whether the Hetzner provider would help us in our setup, and we can see the benefit in having an option for bringing your own network, for example when you need private connectivity to existing VMs in a private network. Other Cluster API providers like Azure also support using pre-existing networks, so it's not uncommon.
@simonostendorf, integrating a NAT gateway or similar network solutions into Hetzner would indeed be a substantial feature. However, based on our experience, it would require a significant amount of effort, potentially spanning several person-months, especially considering the testing phase. We've encountered issues with Hetzner's private networks in the past, which adds to the complexity of such an undertaking.

In our managed Kubernetes services, we've generally avoided using private networks, not just in Hetzner but across other providers as well. We find that, in most cases, a perimeter architecture is no longer a necessity. Instead, we prefer leveraging robust CNI solutions like Cilium. It not only meets our networking challenges effectively but also simplifies the network topology. This simplicity is a considerable advantage, making debugging more straightforward and faster. Additionally, Cilium's feature for external workloads seamlessly accommodates existing VMs. @lkt82

Given these points, while we understand the interest in this feature, I personally don't see enough benefit to offset the considerable effort it would demand for development and integration, especially when current tools have been adept at handling challenges in more lightweight ways. However, if there's community interest and someone is willing to contribute to such a feature, we are open to providing support where we can. We believe in collaborative solutions, and if there's a clear demand and willingness from contributors, it's something we could explore together, despite the economic and technical challenges it presents. Our focus at Syself, though, continues to be on investing resources in areas like Zero-Trust security, which we see as a more beneficial strategy.
Thanks for the detailed answer, @batistein.
This is one of the most requested features from the community. We have it on the radar, but I can't promise anything.
Hello, is there any update on this topic? This feature is also very important for us.
I reviewed @johannesfrey's PR again. @johannesfrey do you want to finish this up? It's just some cleaning up and implementing slight usability improvements.
Thx @janiskemper. Sure, I hope I addressed your suggestions correctly.
/kind feature
Describe the solution you'd like
I'm attempting to spin up a cluster that is entirely on a private network that has already been created. Unfortunately, I'm met with a uniqueness error:
Anything else you would like to add:
I have created a network with a single VM inside it. From this VM, I'm attempting to bootstrap a cluster with a private-only network. Does this make sense? Do you foresee any issues with doing this?
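For illustration, this is roughly what a "bring your own network" configuration could look like. The `id` field referencing a pre-created hcloud network is hypothetical (it is exactly the feature requested in this issue, not something the current HetznerCluster API supports):

```yaml
# Sketch of the requested "bring your own network" variant.
# The `id` field is hypothetical and not part of the current CAPH API.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: HetznerCluster
metadata:
  name: my-cluster
spec:
  hcloudNetwork:
    enabled: true
    id: 1234567   # ID of the pre-created hcloud network to attach nodes to
```

With such a field, the controller would skip network creation and deletion and treat the network's lifecycle as external to the cluster.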
Environment:
OS (e.g. from /etc/os-release): Fedora 37