
Publishing one randomized host port will lead to all host ports being randomized #8650

Closed
znerol opened this issue Dec 8, 2020 · 8 comments · Fixed by #8652

Comments


znerol commented Dec 8, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Using the --publish flag it is possible to either specify the host port explicitly or omit it. In the latter case the container runtime will allocate a random port. If multiple --publish flags are used, and at least one of them omits the host port, then all host ports will be allocated randomly.

Steps to reproduce the issue:

  1. Run a container and specify one port with explicit host port mapping and one without.
    $ podman run --rm --publish 3000:3000 --publish 1234 k8s.gcr.io/pause
    
  2. Examine the port mapping:
    $ podman port -l
    

Describe the results you received:

$ podman port -l
3000/tcp -> 0.0.0.0:33205
1234/tcp -> 0.0.0.0:33273

Describe the results you expected:

$ podman port -l
3000/tcp -> 0.0.0.0:3000
1234/tcp -> 0.0.0.0:NNNNN

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

$ podman version
Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.14
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

Not relevant (except if the bug isn't reproducible on less exotic systems).

Package info (e.g. output of rpm -q podman or apt list podman):

Installed from kubic

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical machine.

Running with --log-level debug, the following relevant lines appear:

DEBU[0000] Adding port mapping from 3000 to 3000 length 1 protocol "" 
DEBU[0000] Adding port mapping from 0 to 1234 length 1 protocol "" 
DEBU[0000] parsed reference into "[vfs@/home/lo/.local/share/containers/storage+/run/user/1000/containers]k8s.gcr.io/pause:latest" 
[...]
DEBU[0000] using systemd mode: false                    
DEBU[0000] Successfully assigned container port 3000 to host port 45799 (IP  Protocol tcp) 
DEBU[0000] Successfully assigned container port 1234 to host port 34203 (IP  Protocol tcp) 

Thus it looks like the command-line parsing works correctly in parseSplitPort, but something fails between there and the subsequent specgen stage.

If only explicit port mappings are specified (i.e., a single `--publish 3000:3000` flag), then the log lines read:

DEBU[0000] Adding port mapping from 3000 to 3000 length 1 protocol "" 

No lines are logged from specgen/generate/parsePortMapping at all. So maybe the problem is in the code block within `if postAssignPort`. When that branch is hit, it looks like all host ports are being randomized.
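
A minimal, self-contained Go sketch of the suspected behaviour (the PortMapping type, the postAssignPort handling, and randomFreePort below are simplified stand-ins, not the actual podman source): once a single mapping omits its host port, the assignment pass re-randomizes every mapping, which matches the output seen above.

```go
package main

import (
	"fmt"
	"math/rand"
)

// PortMapping is a simplified stand-in for podman's port-mapping type.
type PortMapping struct {
	ContainerPort uint16
	HostPort      uint16 // 0 means "assign a random host port"
}

// randomFreePort is a hypothetical stand-in for the runtime's free-port lookup.
func randomFreePort() uint16 {
	return uint16(32768 + rand.Intn(28232))
}

func main() {
	ports := []PortMapping{
		{ContainerPort: 3000, HostPort: 3000}, // explicit host port requested
		{ContainerPort: 1234, HostPort: 0},    // random host port requested
	}

	// Suspected buggy shape: one missing host port flips a flag, and the
	// assignment pass then re-randomizes every mapping, including 3000:3000.
	postAssignPort := false
	for _, p := range ports {
		if p.HostPort == 0 {
			postAssignPort = true
		}
	}
	if postAssignPort {
		for i := range ports {
			ports[i].HostPort = randomFreePort() // clobbers explicit ports too
		}
	}

	for _, p := range ports {
		fmt.Printf("%d/tcp -> 0.0.0.0:%d\n", p.ContainerPort, p.HostPort)
	}
}
```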

openshift-ci-robot added the kind/bug label Dec 8, 2020

mheon commented Dec 8, 2020

I'll take this.

mheon self-assigned this Dec 8, 2020
mheon added the In Progress label Dec 8, 2020

mheon commented Dec 8, 2020

Confirmed, reproduces locally.


mheon commented Dec 8, 2020

We definitely want a backport of this one for RHEL, I think.


mheon commented Dec 8, 2020

Got it. Writing a test now to make sure we don't regress.


znerol commented Dec 8, 2020

@mheon wow! Does it work with port ranges as well? I thought I found a solution by removing the ! in the condition on line 186, but that triggered problems with port ranges.

			if p.HostPort != 0 && !tmp.isInRange {


znerol commented Dec 8, 2020

Also filed #8651 which might be related.


mheon commented Dec 8, 2020

Fix in #8652 - you found part of the problem with tmp.isInRange, though the issue was it shouldn't have been there at all.

mheon added a commit to mheon/libpod that referenced this issue Dec 8, 2020
The existing logic (Range > 0) always triggered, because range is
guaranteed to be at least 1 (a single port has a range of 1, a
two port range (e.g. 80-81) has a range of 2, and so on). As such
this could cause ports that had a host port assigned to them by
the user to randomly assign one instead.

Fixes containers#8650
Fixes containers#8651

Signed-off-by: Matthew Heon <[email protected]>
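
To make the failure mode described in the commit message concrete, here is a small illustrative Go snippet (the type and function names are simplified assumptions, not the real podman code): because Range is never less than 1, a check of the form Range > 0 holds for every port, so even explicitly requested host ports were sent through random assignment.

```go
package main

import "fmt"

// portSpec is a simplified illustration of the port type discussed in the
// commit message; the field names are assumptions, not podman's actual ones.
type portSpec struct {
	HostPort uint16 // 0 means the user did not request a specific host port
	Range    uint16 // always >= 1: a single port is Range 1, "80-81" is Range 2
}

// buggyCheck has the shape described above: Range > 0 is true for every port,
// so every mapping was treated as needing a random host port.
func buggyCheck(p portSpec) bool {
	return p.Range > 0
}

// fixedCheck shows the intent: only randomize when no host port was requested.
// (The actual change in containers#8652 may differ in detail.)
func fixedCheck(p portSpec) bool {
	return p.HostPort == 0
}

func main() {
	explicit := portSpec{HostPort: 3000, Range: 1}
	fmt.Println(buggyCheck(explicit)) // true  -> the explicit port gets randomized
	fmt.Println(fixedCheck(explicit)) // false -> the explicit port is preserved
}
```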

znerol commented Dec 8, 2020

Yes, #8652 fixes both issues. Thanks!

mheon added a commit to mheon/libpod that referenced this issue Dec 9, 2020
github-actions bot added the locked - please file new issue/PR label Sep 22, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023