Basic OVS CNI plugin implementation #2
@SchSeba @yuvalif @dankenigsberg could you check this? Thanks
There is also an option not to use veth, but rather OVS internal interfaces. AFAIK they are faster, but they don't support all traffic-shaping features. The problem is that such an interface would be created by OVS in the host network namespace, so we would need to give it a random name, move it to the container netns and only then rename it.
Looking for an OVSDB client with a nice abstraction in Go. I found several libraries that expose OVSDB directly; that means communicating directly with the OVS database and making sure all references between objects (ports, interfaces, bridges) stay consistent, which is a lot of extra work. The DigitalOcean library has a nice OVS abstraction (a single command to add a bridge, add a port, set an attribute etc.), but it is implemented on top of the ovs-vsctl CLI.
If there is no high-level Go binding for OvS, how about implementing this CNI in Python, using the OVS Python bindings?
@dankenigsberg it would have other problems:
There are not many calls to ovs-vsctl in the code. It should not be that hard to rewrite them as pure OVSDB calls if we decide to.
Yeah, yeah. That's why I was /mostly/ joking.
* make ovs socket file path as configurable property

  Signed-off-by: Periyasamy Palanisamy <[email protected]>
* address review comments

  Signed-off-by: Periyasamy Palanisamy <[email protected]>
* address review comments #2

  Signed-off-by: Periyasamy Palanisamy <[email protected]>
ds: Add CI Dockerfile
With this initial implementation, ovs-cni would just connect a container to an OVS bridge available on the host and optionally assign the connection port to a VLAN. Support for IPAM and other advanced features will be added later and described in separate issues. A pure L2 plugin sounds limiting, but it should be enough for the KubeVirt VM use case.
This basic support should reflect the bridge plugin from containernetworking/plugins. The code must have unit test coverage, reasonable logging, standalone usage examples and a usage example for a local cluster.
Open vSwitch CNI Plugin
Overview
With the ovs plugin, containers (on the same host) are plugged into an Open vSwitch bridge (virtual switch) that resides in the host network namespace. It is the host administrator's responsibility to create such a bridge and optionally connect it to a broader network, be it using L2 directly, NAT or an overlay. The containers receive one end of a veth pair, with the other end connected to the bridge.
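The wiring the plugin automates can be sketched with standard ovs-vsctl and iproute2 commands. This is an illustrative, manual equivalent only; the bridge, interface names, VLAN tag and netns path are placeholders, and the commands require root plus a running Open vSwitch:

```shell
# Create the OVS bridge (normally done by the host administrator beforehand).
ovs-vsctl add-br br1

# Create a veth pair: one end stays on the host, the other goes to the container.
ip link add veth-host type veth peer name veth-cont

# Move the container end into the container's network namespace (placeholder path).
ip link set veth-cont netns /var/run/netns/container1

# Attach the host end to the bridge, optionally tagging the port with a VLAN.
ovs-vsctl add-port br1 veth-host tag=100
ip link set veth-host up
```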
Example configuration
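A minimal network configuration for this plugin might look like the following sketch; the `cniVersion`, network name, bridge name and VLAN ID are placeholder values, chosen to match the reference fields below:

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "ovs",
  "bridge": "br1",
  "vlan": 100
}
```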
Network configuration reference
* `name` (string, required): the name of the network.
* `type` (string, required): "ovs".
* `bridge` (string, required): name of the bridge to use.
* `vlan` (integer, optional): VLAN ID of the attached port. Open to all VLANs if not specified.