enable hairpin mode on virtual interface bridge port #62
Conversation
netlink/netlink_linux.go
nit: ioutil.WriteFile

I stayed away from any function that tries to create the file. If the file doesn't exist, it shouldn't be created. Granted, the creation should fail anyway, but I thought this was cleaner.
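For context, a minimal sketch of that approach, assuming the conventional brport sysfs layout (names here are illustrative, not necessarily the PR's exact code):

```go
package netlink

import (
	"os"
	"path/filepath"
)

// setHairpinSketch writes "1" or "0" to the bridge port's hairpin_mode
// sysfs file. os.OpenFile is used without O_CREATE, so the write fails
// if the file does not already exist, instead of creating it the way
// ioutil.WriteFile (O_WRONLY|O_CREATE|O_TRUNC) would.
func setHairpinSketch(ifaceName string, enabled bool) error {
	path := filepath.Join("/sys/class/net", ifaceName, "brport/hairpin_mode")
	f, err := os.OpenFile(path, os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	defer f.Close()
	val := []byte{'0'}
	if enabled {
		val[0] = '1'
	}
	_, err = f.Write(val)
	return err
}
```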
@phemmer does this make your other iptables change on docker not needed?

No, still needed. This is a prerequisite to moby/moby#6810

@phemmer is there any performance impact from enabling hairpin mode? Particularly if it is not used.

No. The kernel performs the hairpin check on every packet whether it's enabled or not. In fact, enabling hairpin would make it slightly faster, since it's the first item in the decision list :-) (probably not really; you just save a few function calls)

Thanks for the info @phemmer! Change LGTM.

Ping @crosbymichael since you'd looked into this before.
Do we want to fail the creation if hairpin mode isn't enabled or fails to be set up? Can we log and forgive?
It would be possible to limp along without it. I see two potential avenues:
1. If we remove the proxy, hairpin connections would simply fail.
2. If we keep the proxy, we could skip adding the iptables rule to DNAT traffic; the connection would then hit the proxy, which would route it back in.

I think #2 is dangerous. The proxy causes all traffic to appear to come from a single source address. If an app has some sort of automatic blacklist capability, falling back to this method might cause the app to blacklist all traffic.
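As an illustration of the "log and forgive" option, a minimal sketch; the SetHairpinMode package and signature here are assumptions, not necessarily what this PR ends up with:

```go
package network

import (
	"log"
	"net"

	"github.com/docker/libcontainer/netlink" // import path assumed
)

// configureBridgePort is a hypothetical call site: a hairpin-mode
// failure is logged and setup continues, at the cost of hairpin
// (container-to-itself) connections simply failing (avenue 1 above).
func configureBridgePort(iface *net.Interface) error {
	if err := netlink.SetHairpinMode(iface, true); err != nil {
		log.Printf("warning: could not enable hairpin mode on %s: %v",
			iface.Name, err)
	}
	return nil
}
```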
Wouldn't it be easier to add MACVLAN support and enable creation of MACVLAN interfaces in VEPA mode? You could then send this interface straight into the container. I can open a PR with this.
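For reference, a sketch of what creating a macvlan interface in VEPA mode looks like with the external github.com/vishvananda/netlink bindings (not this repo's netlink package, and not part of this PR):

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// The parent interface name is illustrative.
	parent, err := netlink.LinkByName("eth0")
	if err != nil {
		log.Fatal(err)
	}
	// In VEPA mode all frames, including those between local macvlan
	// interfaces, are sent out to the adjacent switch, which must
	// hairpin them back.
	mv := &netlink.Macvlan{
		LinkAttrs: netlink.LinkAttrs{
			Name:        "macvlan0",
			ParentIndex: parent.Attrs().Index,
		},
		Mode: netlink.MACVLAN_MODE_VEPA,
	}
	if err := netlink.LinkAdd(mv); err != nil {
		log.Fatal(err)
	}
}
```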
@milosgajdos83 I think such a topic is far beyond the scope of this change. You're talking about ripping out the bridge interface and replacing it with VEPA. That is a huge change (which I have significant arguments against) and should be covered in its own issue/PR, and probably discussed on the mailing list and/or IRC as well.
Nah, I'm not talking about ripping out the bridge; the bridge has its place in Docker. I'm talking about one particular piece of functionality. I agree, this would be a significant change. The question is, would it bring more benefits than overloading one simple bridge on the host with a million options and adding some iptables kung fu on top of it? I would quite like to hear the significant arguments you have :-) Happy to take this to the mailing lists, indeed.
The two features are mutually exclusive. The purpose of VEPA is to replace bridge routing.
ping @phemmer @crosbymichael
This is to support being able to DNAT/MASQ traffic from a container back into itself (moby/moby#4442)

Docker-DCO-1.1-Signed-off-by: Patrick Hemmer <[email protected]> (github: phemmer)
rebased

@crosbymichael feel free to review and merge

Tested. LGTM.

LGTM
enable hairpin mode on virtual interface bridge port
This is to support being able to DNAT/MASQ traffic from a container back into itself (moby/moby#4442)
See subsequent pull request on docker project for details.