Docker bypasses ufw firewall rules #690
Comments
The problem is ufw does its own thing here. The best thing to do here would be to insert a jump rule into the ufw chain. Note that in your example, docker is not doing anything with iptables OR networking since it's using --net=host. |
@cpuguy83 Thanks for the quick reply. With --net=host specified docker (latest version) is still opening the port via iptables, at least on my ubuntu 18.04 fresh install. If that's not supposed to be happening, maybe it's a bug? I agree it definitely shouldn't be doing anything with iptables or networking if --net=host is specified. I'll see if I can find the thread you mentioned. Perhaps the docker install process could automatically add DOCKER_OPTS="--iptables=false" if ufw is enabled? |
I just came across the same article myself, and I am very surprised by this behaviour. I don't know all the details about Linux networking, but is there any reason to be doing it? I never heard of any other program that goes around the firewall. If this is some kind of feature, it should definitely be disabled by default, because it opens up the entire server. |
@Nutomic I agree. I don't think looking at this as a "UFW problem" is the right approach. The way UFW manages the firewall is quite elegant, which is why the majority of Ubuntu users are using it over firewalld. I really think the docker devs need to add UFW compatibility ASAP as it's a serious security issue. Or include a clear warning on install letting users know their UFW rules will be ignored and instructions on a workaround. |
+100 All of us are using ufw. This is an exploit waiting to happen and should be considered a security risk. Why is it not given priority? |
Does someone know how to report security risks? In other projects there is usually an email address (or some other mechanism) to report exploits directly to management. If a server is compromised and docker can be blamed somehow, it would be a major PR headache (at best) for docker management, so I'm sure they'd want to know about this immediately. And if it cannot be fixed, I'm also sure they'd quickly update the docs with major warnings about this problem to let users know about it and legally pass the blame onto us. We should not discover this problem by accident; it should be made clear in the docs, along with how best to work around it. |
There is a SECURITY.md in the moby repo explaining it. I don't think the desired outcome should be to default to iptables=false but rather have a way to have docker insert its jump rule into ufw. |
@cpuguy83 IMHO the fact that it's so easy to misconfigure makes it a pretty serious security risk. Other than third party firewall software, I don't know of any other packages that bypass UFW rules. It's such a low-level OS function that most users, even advanced ones, wouldn't think to check. Definitely agree re: implementation though. Getting Docker to insert its jump rule into UFW would be great. |
Thanks to @cpuguy83, it's: security@docker.com. I recommend we all send a message there. |
I just had a look at the multiple UFW-related issues for moby and they've all been closed... :/ Does anyone know if podman has the same problem? If not, might be a viable alternative |
That is very surprising! More reason for us to message the security team. This is crazy - how many users don't know they have gaping holes in their security because of this??? For anyone arriving here from google in the future, please do two things:
|
btw, I think all that's needed is to insert a jump rule between Docker's chain and ufw's.
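A minimal sketch of that idea (the chain names are assumptions based on what Docker and ufw typically create, not a vetted ruleset):

```sh
# Send forwarded container traffic through ufw's user-defined forward rules
# before Docker's own ACCEPT rules apply.
# DOCKER-USER is created by Docker; ufw-user-forward is created by ufw.
sudo iptables -I DOCKER-USER -j ufw-user-forward
```

This is only the core idea; projects like chaifeng/ufw-docker build a fuller ruleset around it.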
|
Adding some info per @menathor's request. See:
Most of these recommend disabling iptables manipulation with --iptables=false.
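For completeness, a sketch of what that looks like via the daemon config (note that later comments in this thread advise against this approach, since it disables all of Docker's iptables management, including outbound NAT and published-port plumbing):

```sh
# Assumption: systemd-based install; this overwrites any existing daemon.json.
echo '{ "iptables": false }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```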
More recently, two other workarounds have surfaced which do not use this flag and seem to be more robust: |
Could someone from the docker team please tell us the official and secure way to deal with this problem? I don't want to become an iptables expert just to use docker. |
@lonix1 Did you try |
There are about a dozen approaches starting from 2013/2014, and changing with each major version of docker. At this point I have no understanding of which to use, and why. I know ufw well, but not iptables. I'm hesitant to use the mindless copy-paste approach as I'm afraid to blow up my server's security. That's why I would be grateful for official guidance, and explanation. The code you posted seems good, but I have no idea what it does 😃 Would you mind telling us in your opinion which is the best way (I assume it's what you posted above), and why/how it works? (The simplest approach I've found is not to change anything, but use 127.0.0.1:8080:80 style port bindings with nginx in front.) Thanks for helping us out! |
TMK nothing has changed in a very long time. Docker creates an iptables chain called DOCKER-USER specifically for user-defined rules; anything you put there is evaluated before Docker's own forwarding rules.
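If it helps, a small sketch of using that chain (the interface and port are assumptions for illustration):

```sh
# Drop traffic arriving on the public interface for a published container port.
# In DOCKER-USER the packet is already DNAT'ed, so match the *original*
# destination port via conntrack rather than --dport.
sudo iptables -I DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 8080 -j DROP
```
|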
@cpuguy83 So if I understand correctly, if I implement your approach, thereafter I can continue to use ufw to open/close ports, and never need to mess around with iptables at all? That sounds perfect. On another note, since you are obviously an expert in this matter, how do you feel this approach compares to the one I posted above (127.0.0.1:8080:80 + nginx) - do you feel one is better/safer/whatever than the other? |
It's always best to be explicit about what you want... e.g. if you don't want nginx to be available on all interfaces then specify the interface (such as 127.0.0.1).
The other thing you can do is change the default bind address to 127.0.0.1 in the daemon config; then you need to be explicit about what should have public access rather than what should be private.
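A quick sketch of both options (the image and ports are just examples):

```sh
# Option 1: publish explicitly on loopback only -- reachable from the host
# itself, invisible to other machines.
docker run -d -p 127.0.0.1:8080:80 nginx

# Option 2: make loopback the daemon-wide default bind address, so any port
# published without an explicit IP is loopback-only.
# (Assumption: no existing /etc/docker/daemon.json to preserve.)
echo '{ "ip": "127.0.0.1" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```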
|
@cpuguy83 I think it's less about a change in behavior and more about a documented best practice to make things work securely. I say this realizing that you can be explicit about which interface/IP to listen on. But if the expected behavior of UFW is that it blocks all incoming traffic except where specified, then to avoid mistakes, it should do so for Docker as well, even if it takes some extra configuration. Also your jump rule |
@kaysond What workaround are you using at the moment? |
EDIT: just fixed my docker / ufw example - the first one didn't make any sense. It's late over here :P 💯 x this:
Any solution that requires sysadmin knowledge / reconfiguration every time new ports are involved isn't a viable solution. The whole idea behind ufw and docker is ease of use. So taking a step back and breaking it down into super simplistic (i.e. developer) terms:
I then proxy port 9000 to be behind an oauth gateway on port 443
The only reasonable solutions for users like me are:
I can't speak for all Ubuntu users, but I'm a developer, not a sysadmin. I have reasonable linux skills and could implement either of the above solutions no probs. But I no longer feel comfortable using docker on ubuntu in production if my network security is based on hacks from third party sites that may or may not work in certain situations, or may break with future updates. How can I run things like keycloak, rundeck or other sensitive apps that I'm proxying to 443 and putting behind an oauth proxy if there's any chance docker is going to completely ignore my deny rules and happily expose the original port? Sorry if this sounds a bit harsh, and peace and love to everyone here, but this has already turned into another "try workaround x" and "workaround x doesn't work when x+y=z" thread. Every one of those on the moby repo has been closed, dating back years. Developers using Ubuntu (the linux distro with the biggest market share) need an official, supported and documented way of setting docker up so that we can happily run our containers without silently bypassing the firewall. Or alternatively, official clarification from Docker that "docker does not work with Ubuntu's default firewall". That would be better than silence. TLDR: https://tenor.com/view/developers-gif-13292051 ☮️❤️ |
@menathor I'm in the same boat as you exactly. And I'm also wary of the copy-pasta approach. So I decided to dump ufw and use iptables. Which annoys me, as I have to waste time learning it. I'm still going through various tutorials. But what surprises me so far is that, despite the widely-held belief that iptables is "low-level" and complicated... it's not. It's simple. I assume there are edge cases though that you wouldn't typically think of - icmp, specific ports, etc. But if you use a whitelist (rather than a blacklist) then at the very worst, you'll make a mistake that locks you out of your own server. That's not a problem, as you can log in via your hosting provider's web-based console and fix it, and importantly - the bad guys won't be able to get in either. This is the one and only downside to using iptables, and it's manageable. Here are some tutorials I'm going through:
More theoretical: I'll post my final config when I'm done in a few days; if others do the same then we can compare.
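In the meantime, here is roughly what the whitelist idea looks like as a minimal sketch (the ports are assumptions; adapt before using, and keep an out-of-band console handy in case you lock yourself out):

```sh
# Allow what you need first, then default-deny everything else inbound.
# Note: this covers host-addressed (INPUT) traffic only; published Docker
# ports are handled in FORWARD / DOCKER-USER, not here.
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT                     # SSH
sudo iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT   # web
sudo iptables -P INPUT DROP
```
|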
@lonix1 Right now, nothing. But I'm working on a larger scale deployment that needs a solution. For now I would suggest reading this: https://github.com/chaifeng/ufw-docker It has a very descriptive readme, and once you have an idea of what it's doing, I'd just run the script. This is a temporary solution until the Docker team comes up with something. |
I would say that is the "fix". |
Well, yes. I think it's reasonable to have a set of iptables rules be the "fix," and it's easy enough to implement via UFW's config files. But what the community is looking for is something that has been thoughtfully considered by the Docker team and published in the documentation. For example, the script I linked will allow all traffic from RFC 1918 ranges to reach the Docker network. This is not how UFW behaves by design. If I turn on a service on my Ubuntu host, but don't explicitly allow it in UFW, not even local traffic should reach it. And maybe that difference is fine. But I, and probably most others, would be more comfortable taking the recommendation from the Docker documentation, and not some guy's github. |
Bump. Anyone have any more input? Maybe we need to open an issue on the documentation repo... |
Hey @kaysond Sorry I forgot about this thread. I've been using iptables for a week now and couldn't be happier. For anyone who arrives here, these two links will help you integrate iptables and docker very simply:
If you still want to integrate ufw and docker, then I think what @cpuguy83 wrote above is the way to go (or a variation on that idea): any rules you add in the DOCKER-USER chain take effect before Docker's own. Of course the ideal is for the docker team to provide clear guidance and a tested/supported rule, because this isn't something most ufw users know how to do. |
I've just had my database stolen. The database port was supposed to be closed by ufw but it wasn't. Luckily, the database was not very important. For those who use docker compose, the following is the solution that seems to work for me. In docker-compose.yml, when mapping host ports to container ports, also specify the IP address part, IPADDR:HOSTPORT:CONTAINERPORT (see the docs). E.g.:
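(The service and ports below are made-up examples; the same 127.0.0.1:HOSTPORT:CONTAINERPORT string works as a compose ports: entry or with docker run -p.)

```sh
# Publish the database only on loopback: reachable from the host itself,
# not from other machines. In docker-compose.yml this would be the entry
#   - "127.0.0.1:5432:5432"
docker run -d -p 127.0.0.1:5432:5432 postgres:16
```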
This will hide the host port from the outside world. |
Good learning opportunity. 😉
There's a more general point too: if you are running a service and don't want it to be available to other networks, check that it actually isn't reachable from outside!
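For example (hypothetical host and port), a quick sanity check:

```sh
# On the Docker host: what is actually listening, and on which addresses?
sudo ss -tlnp | grep 8080
# From a *different* machine: is the port reachable from outside?
nmap -Pn -p 8080 your-server.example.com
```
|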
This is 2023, about to cross over to 2024, and I am still experiencing this issue. Please @docker do something |
While I 100% agree with the points you made, this is still an admission of failure by docker, and it amounts to waiting for disasters to happen. |
I ran into this problem with FirewallD as well; to my surprise, all containers that had ports published were directly accessible from the LAN, even though I have a zone with supposedly strict rules controlling what inbound connections are allowed to the server. My workaround: for the containers that had published ports whose access from the LAN I wanted to control, I switched to host networking (network_mode: host in docker compose). This gives control back to firewalld. I assume the same workaround applies to ufw.
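For reference, a minimal sketch of that workaround (the image is just an example):

```sh
# With host networking there is no port publishing at all (-p is ignored):
# the container shares the host's network stack, so firewalld/ufw rules
# apply to it like to any other host service.
docker run -d --network host nginx
```
|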
Yes, it does! |
So what's the current best way to fix this, as Docker in their wisdom refuse to fix this clearly broken design? https://github.com/capnspacehook/whalewall or https://github.com/chaifeng/ufw-docker? |
The best way to fix this is not to publish container ports to your LAN at all and put a reverse proxy in front instead,
or, if you only need local access, to bind the published port to loopback, e.g. 127.0.0.1:8080:80
|
How does that compare to the other two solutions? I think one of the problems with all this is that not everyone is a Linux networking/firewall specialist or wants to be one just to use containers. So this makes it difficult to understand when to use one of those two solutions, or this third solution of simply not publishing ports. This is all compounded by seemingly dozens of solutions or variations of solutions in this issue, the linked issues, and the internet. |
Binding to localhost (127.0.0.1) - regardless of port - should not directly expose the container to anything outside the system upon which the container is running. You’d need to specifically set up something like a reverse proxy to route traffic to/from the container. Otherwise, the container would only be accessible from the local system. |
expose does nothing. It is documentation in your compose file of what port your container operates on. It does not actually change anything. The suggestion is simply not to map ports to your LAN and to use a reverse proxy to access services from outside the local host. If you want to access a service on the local host only, you can specify the local IP. Absent the above, it needs custom firewall rules or a change of logic from Docker |
So whalewall is a valid solution then, possibly? |
No idea, I've not read it. A valid solution for me, which I've been using for years (see my original comment above from 2020 #690 (comment)), is what I and others mentioned earlier. Bind the container to localhost and it cannot be accessed from outwith the local system in the absence of a reverse proxy (or similar mechanism). You can define that in your compose file (if you use that), or even as an argument in the command to fire up a container. You want to ensure you have the localhost IP (127.0.0.1) specified in the port binding. You can also do things like restricting access to the container to your local network of course, or a specific IP. But that's outwith the scope of this thread. Specifically: binding to localhost will ensure your container is as secure as the rest of the system upon which it's running; it will be inaccessible to anything else unless you specifically do something to make it available. (Naturally you'd also need to ensure any reverse proxy etc is properly configured and that your firewalls are configured to drop any incoming packets with a source and destination defined as 127.0.0.1; by default they should be). Disclaimer: I'm not a network security expert, but I was running containers like this for years and didn't notice any issues. If you're running anything critical, then you should employ a network/cyber security professional to ensure your systems are secure in any case. |
@UplandsDynamic I just came across this issue this week while trying to migrate between reverse proxy managers, and I just read the thread to refresh myself. The problem is that one of the most popular proxy managers, nginx proxy manager, uses a docker container that relies on 3 ports being forwarded in order to work. It's a catch-22: if you want to use ufw to manage firewall rules, the best answer is to locally bind docker containers and use a reverse proxy to access them; but in order to use a reverse proxy, you need a docker container to be accessible from outside localhost. Since it is just the 2 ports, you could just forward those manually in iptables and go ahead with this workaround. But this essentially disables the use of streams in nginx proxy manager, where it's like a reverse proxy but for ports instead of webservers. In these cases, I think the best workaround is to just edit /etc/docker/daemon.json and set iptables to false. This supposedly risks internet connection for the containers, but I have tested it and they all seem to work fine. I think if they get updated though, or a new container gets added, I might have to temporarily enable iptables in daemon.json. It would really be nice though if the docker team could provide an official workaround with docker. This has been an issue for almost 5 years now with no real solution. |
Disabling iptables will lead to other serious problems though… https://stackoverflow.com/questions/30383845/what-is-the-best-practice-of-docker-ufw-under-ubuntu |
@Masong19hippows If you're using a Docker container for your reverse proxy, why can't you just allow that container to bind to 0.0.0.0 (or a specific IP for your requirements)? You'd then configure your NGINX container to forward requests to the correct connected containers (presumably via the docker internal network), in the usual way, using the NGINX config files. I've not done that myself, however, so perhaps you're hitting issues I'm not aware of. I have no need nor desire for containerisation of 'all the things'. I just run the proxy servers (NGINX) on the OS, or on other machines connected by overlay networks. I do not allow Docker containers to bind to anything other than the machine they're running on (localhost), period. I just don't trust it & never really have. Allowing direct external connections in principle just needlessly broadens the potential attack surface. I prefer to route incoming connections to any service running on any of my servers (virtual & physical) through dedicated gateways (reverse proxies). That works for me, but I realise that might not be suitable for everyone's needs. |
@UplandsDynamic I have bound it to 0.0.0.0. This bypasses ufw rules though and renders ufw useless, for the reasons above. You are right that setting iptables to false might introduce other issues. I was originally using fail2ban with ufw to automagically block IPs from accessing the server at all. I just gave up and switched the action to use iptables rules instead of ufw, and that seems to work. I didn't want to do this originally though, because ufw makes it easier for management purposes. I still think an official solution from docker would be better though. For compatibility between docker and ufw, I had to stop using ufw lol. |
Although this is obviously a design flaw of Docker, I agree with @UplandsDynamic and don't recommend any of the "workarounds" suggested above that involve disabling iptables integration or ditching UFW. For the above mentioned reverse proxy scenario it's pretty simple to bind all ports of the services to 127.0.0.1 and the proxy itself to 0.0.0.0.
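A rough sketch of that layout (the names, network and ports are assumptions):

```sh
# Backends join an internal Docker network and publish only on loopback;
# only the proxy publishes 80/443 on all interfaces.
docker network create web
docker run -d --name app   --network web -p 127.0.0.1:9000:9000 my-app   # hypothetical image
docker run -d --name proxy --network web -p 80:80 -p 443:443 nginx
```
|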
@Masong19hippows I don't really understand what you're trying to do. If you have bound your NGINX container to 0.0.0.0, and only exposed the relevant ports on that container, then what's the issue exactly? You'd want connections to that container (on its exposed ports) to get through your firewall in any case, to receive incoming connections. None of your other containers would be exposed outside the system they're hosted on, unless, for example, you explicitly exposed their ports and connected them to your NGINX container over the docker internal network. As I said, personally I wouldn't use a container for the reverse proxy in any case. I'd bind everything to 127.0.0.1 and then have NGINX (or whatever else) forward traffic where you need it. |
The problem is with fail2ban and what it does to firewall rules. What fail2ban does is provide protection against brute force attacks. It provides an extra layer of security by editing firewall rules to deny traffic from specific IP addresses. What you are talking about is just using the firewall for ports and not for denying traffic from certain IPs. If you are hosting a webpage that needs auth at ex.domain.com and a specific IP address accessing it fails the authentication a certain number of times within a given time window, then fail2ban sets a firewall rule that denies traffic from that specific IP address to *domain.com. This prevents another unauthorized login attempt from that IP to ex.domain.com. So before I switched to npm, I had fail2ban set up with ufw so that it would insert a deny rule for that specific IP address, and that's how it would block it. I can't do that though with NPM, because it's in a docker container and doesn't follow ufw rules, including the deny rules inserted by fail2ban. I wanted ufw to be the master firewall rule table where nothing could bypass it. Does this make sense? I also didn't uninstall ufw. I just switched fail2ban to use iptables for inserting these deny rules instead of ufw, so that they would actually be followed. I am following exactly what you said and have been binding docker containers to localhost and only exposing the proxy manager to outside networks.
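For anyone in the same spot, a hedged sketch of pointing fail2ban at iptables instead of ufw (the DOCKER-USER chain choice is an assumption to verify against your fail2ban version; bans placed there apply to forwarded container traffic, which plain INPUT bans would miss):

```sh
# Use the stock iptables-multiport ban action and insert bans into DOCKER-USER,
# so banned IPs are dropped before Docker's own ACCEPT rules.
sudo tee /etc/fail2ban/jail.d/docker-banaction.local <<'EOF'
[DEFAULT]
banaction = iptables-multiport
chain = DOCKER-USER
EOF
sudo systemctl restart fail2ban
```
|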
As some have mentioned previously here, it's not that Docker or the moby project are completely unaware of or ignoring the issue:
|
What the hell... I'm currently dealing with immense brute force attacks on my simple blog, and I was relying on my ufw rate limit until I realised it wasn't working and that's why my server keeps crashing. The topic seems to be a rabbit hole when you look at this thread. As a developer, I'm a bit shocked by this default behaviour of Docker and don't really know what to do. Does anyone have a quick fix for rate limiting with 80/tcp and 443/tcp on my (Docker) reverse proxy? Or is there no easy answer here? P.S. This ufw-docker project and its unanswered issues are too fishy for me. |
Docker seems to take an approach that views a firewall as only allowing/denying ports. I think the reason this thread has stayed open so long with no real solution is that the team doesn't understand the capabilities of a firewall, and that replacing one very specific part of the firewall doesn't solve the problem of the firewall as a whole not working. I think it just comes from a lack of understanding rather than from a "fishy" stance. My solution was to just use the host network instead of a NATed docker network. This seems to make docker respect the ufw rules. I believe this is just "network_mode: 'host'" in a docker compose file. |
Again, the reason this thread is still open is because it is no longer being actively monitored by the Docker / Moby team. This is a LEGACY issue tracker. This repo readme states:
It further advises:
As @kernstock relates in his comment of 7 May, there are related issues open on the Moby project GitHub account. |
The linked issue has been open for a year with someone asking for an update 6 months ago with no answer. I understand this might not be the correct place, but you aren't suggesting anything better if you actually want a response from people. |
This issue and the one described in the following moby issue are different. Unfortunately, ufw is a bit too simplistic to be compatible with Docker. It assumes all traffic is addressed to the host, whereas containers are really independent routed endpoints. iptables, and the underlying kernel subsystem, distinguishes between these two types of destination: the current host (e.g. the INPUT chain) and routed/forwarded traffic (the FORWARD chain, which is where published container ports are handled). We won't implement the IPVS port mapper I once proposed, as it'd be too much work with uncertain outcomes. However, we recently talked about introducing a 'new' port-mapper. We can't solve this issue on our side only. I get that for ufw, simplicity is one of its core values, and running it on a router is probably out of scope, but at the end of the day a docker host really is a router. We'd need the ufw maintainer(s) to support that use-case. IIRC there's an open issue on the Ubuntu bug tracker for that, with a low number of comments. I encourage you to ask there.
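To see that distinction on a Docker host (purely illustrative commands):

```sh
# Published container ports are DNAT'ed and then filtered in the FORWARD chain
# (via the DOCKER / DOCKER-USER chains), not in INPUT, which is where ufw
# applies most of its policy for host-addressed traffic.
sudo iptables -t nat -L DOCKER -n                  # DNAT rules created by -p/--publish
sudo iptables -L FORWARD -n --line-numbers | head -n 20
```
|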
That is sad. For a long time I thought Docker was the epitome of simplicity and ease of use. When I found out on Monday/Tuesday that something as mundane as the interaction between two applications, which is what simplicity is all about, didn't work, I was very disappointed.
How long should I let my Docker reverse proxy get hammered on my basic VPS without a UFW rate limit? It's been dying every few minutes for a week. It will probably be another five years before there is a user-friendly, simple solution between UFW and Docker. I'm going to remove Docker from my stack again, that's my solution, and unfortunately it has lost the simplicity argument for me, so it won't be back on my stack any time soon. Thanks for the answer though, even if it doesn't help anyone. |
@Irotermund If you bind your containers to localhost, you can rate limit your reverse proxy as much as you like. You just need to constrain Docker container connections to the local system upon which they're hosted (bind to localhost), then configure the rest of your proxies, firewalls and networking appropriately.
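For instance (a sketch, assuming the reverse proxy runs directly on the host rather than as a container with Docker-published ports), ufw's built-in rate limiting then applies as you'd expect:

```sh
# Rate-limit new connections to the web ports (ufw's stock "limit" action
# allows at most 6 connection attempts per 30 seconds per source IP).
sudo ufw limit 80/tcp
sudo ufw limit 443/tcp
```
|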
Expected behavior
Hi all!
ufw in Ubuntu should be treated as the "master" when it comes to low-level firewall rules (like firewalld in RHEL). However docker bypasses ufw completely and does its own thing with iptables. It was only by chance (luckily!) that we discovered this. Example:
ufw deny 8080 (blocks all external access to port 8080)
docker run -p 8080:8080 jboss/keycloak
Expected behaviour: the Keycloak container should be available at port 8080 on localhost/127.0.0.1, but not from the outside world.
Actual behavior
UFW reports port 8080 as blocked but the keycloak docker container is still accessible externally on port 8080.
There is a workaround (https://www.techrepublic.com/article/how-to-fix-the-docker-and-ufw-security-flaw/), however I think techrepublic are correct when they describe it as a "security flaw", and it's a pretty serious one. Most people using Ubuntu use ufw. I imagine a large number of them are unaware their UFW rules are being bypassed and all their containers are exposed.
Is this something that can be addressed in the next update? That article was published in Jan 2018.