
Korifi installation on single namespace #3137

Open
doddisam opened this issue Feb 22, 2024 · 7 comments

@doddisam

Background

Can Korifi be installed in a single namespace?

I see that the requirement is to have a cluster.

Dev Notes

No response

@doddisam doddisam added the chore label Feb 22, 2024
@danail-branekov
Member

@doddisam Could you share more details on your use case?

Currently, Korifi maps orgs and spaces to Kubernetes namespaces, so creating all resources in a single namespace is not supported. We also have the concept of a "root" namespace where some Korifi metadata is stored. However, if we have a concrete scenario, we could think about how feasible it would be to support it.
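For context, an org in Korifi is represented by a CFOrg custom resource in the root namespace, and the controller creates a Kubernetes namespace named after the org's GUID. A sketch (the GUID below is illustrative):

```yaml
# Illustrative sketch: a Korifi org is a CFOrg resource created in the
# root namespace ("cf" by default). Korifi then creates a Kubernetes
# namespace whose name equals the CFOrg's metadata.name (the org GUID),
# which is why the current model assumes one namespace per org/space.
apiVersion: korifi.cloudfoundry.org/v1alpha1
kind: CFOrg
metadata:
  name: 0f8899c0-1111-2222-3333-444455556666   # org GUID == namespace name
  namespace: cf                                 # the Korifi root namespace
spec:
  displayName: my-org
```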

cc @georgethebeatle

@doddisam
Author

Hi George,

Thanks for the reply.

We run Cloud Foundry extensively on vSphere for our production-grade applications, and we thought of testing Korifi on Kubernetes as part of a POC. We have Kubernetes on a bare-metal setup, and we are only provided with namespaces. I know Korifi creates separate namespaces per org/space, and that we also need the root (cf) namespace plus other namespaces for installing dependencies. We wanted to understand whether Korifi and its dependencies can be set up in a single namespace for a POC. Can this be achieved as of today?

@danail-branekov
Member

danail-branekov commented Feb 23, 2024

Can this be achieved as of today?

TL;DR - unfortunately not.

FWIW, I understand that a single-namespace setup might have its merits. Leaving dependencies aside (as they are their own universe), a single namespace would have quite a few implications:

  • Korifi assumes that orgs/spaces are backed by Kubernetes namespaces, and the code assumes that an org/space GUID is the name of the underlying namespace. In a single-namespace setup this assumption does not hold; one would need to think about how multiple orgs and spaces can be squashed into a single namespace and adapt the code accordingly.
  • The user authorisation model relies on Kubernetes RBAC and namespace-scoped roles. I am not sure how that would work out in a single namespace.
  • The propagation of role bindings, secrets and service accounts across org/space namespaces would have to be adjusted for a single-namespace scenario.
  • I am pretty sure there are a lot more things I cannot think of off the top of my head.
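To illustrate the RBAC point: Korifi grants a user permissions by creating a RoleBinding inside the org or space namespace, so the namespace boundary is what scopes the grant. A sketch with hypothetical role, user and namespace names:

```yaml
# Illustrative sketch of the authorisation model: a user becomes a
# "space developer" because a RoleBinding exists *in that space's
# namespace*. Collapsing all spaces into one namespace would remove the
# boundary this binding relies on. Names below are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: space-developer-alice
  namespace: 0f8899c0-1111-2222-3333-444455556666   # the space's namespace
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: space-developer        # hypothetical role name
  apiGroup: rbac.authorization.k8s.io
```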

If you are willing to explore this topic on your own, we would be very happy to hear how it went, what challenges you faced, whether such a setup is feasible for a "productive" deployment, etc.

@gowrisankar22

@danail-branekov Have you looked into HNC? It gives you the same hierarchy levels on Kubernetes as CF has. Worth looking into.
https://github.com/kubernetes-sigs/hierarchical-namespaces
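For readers unfamiliar with HNC: the hierarchy is expressed by creating a SubnamespaceAnchor in a parent namespace, which makes HNC create a child namespace and propagate selected objects (e.g. RoleBindings, Secrets) down the tree. A sketch with illustrative names:

```yaml
# Sketch of how HNC models hierarchy: this anchor, created in the
# "my-org" parent namespace, causes HNC to create a "my-space" child
# namespace and propagate configured resource kinds into it.
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: my-space        # child namespace to create
  namespace: my-org     # parent namespace
```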

@danail-branekov
Member

@gowrisankar22

Yes, initially we did make use of HNC to propagate secrets, bindings, etc. into the namespace hierarchy. Unfortunately, we found out that HNC comes with significant performance implications. We therefore abandoned it and implemented our own mechanism that only replicates the resources we actually need.
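The idea of replicating only what is needed, rather than everything HNC would propagate, can be sketched as follows. This is a toy model, not Korifi's actual implementation: the `replicate` annotation and the data shapes are hypothetical.

```python
# Toy sketch (NOT Korifi's actual code) of selective propagation:
# instead of copying every resource down a namespace tree as HNC does,
# copy only resources explicitly marked for replication.
def replicate(root_resources, target_namespaces):
    """Copy opted-in resources from the root namespace into each target.

    root_resources: list of dicts with "name", "annotations", "data".
    Returns {namespace: [resource, ...]} containing only resources
    carrying the (hypothetical) annotation replicate="true".
    """
    wanted = [r for r in root_resources
              if r.get("annotations", {}).get("replicate") == "true"]
    return {ns: [dict(r, namespace=ns) for r in wanted]
            for ns in target_namespaces}
```

The payoff of the opt-in approach is that adding a new org/space namespace costs work proportional only to the handful of shared resources, not to everything in the parent.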

@gowrisankar22

@danail-branekov If this is a general HNC issue, did you raise it with the HNC project?

@danail-branekov
Member

@gowrisankar22

Well, it was not only performance; we also had some security considerations. More details here and here. The proposal doc referenced in the second link is unfortunately no longer accessible: we lost it during Broadcom's acquisition of VMware.

But to answer your question directly: no, we did not raise it with HNC.

Status: 🧊 Icebox