URL encoded characters are being parsed by ingress #590
@polycaster If that is the case, then NGINX will normalize the URI before sending the request to the backend. As a workaround, I can suggest taking a look at this example: https://stackoverflow.com/questions/28684300/nginx-pass-proxy-subdirectory-without-url-decoding/37584637#37584637 To achieve the expected behavior, you can customize the rewrite annotation according to the example, which will require creating a custom template. Or you can create a new custom annotation for rewrites.
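For reference, the Stack Overflow workaround linked above boils down to a location block along these lines. This is only a sketch: the "/app/" prefix and the upstream name "backend" are placeholders, and in the Ingress Controller this configuration would have to go into a custom template or a custom annotation rather than be written by hand.

```nginx
location /app/ {
    # $request_uri still holds the raw, un-decoded URI as the client
    # sent it, so switch the internal $uri to it before matching.
    rewrite ^ $request_uri;
    # Strip the location prefix; "break" stops rewrite processing.
    rewrite ^/app/(.*) /$1 break;
    # Safety net if the prefix rewrite did not match.
    return 400;
    # Because proxy_pass ends with a variable, NGINX forwards $uri
    # verbatim instead of re-normalizing/re-encoding it.
    proxy_pass http://backend$uri;
}
```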
These are the only annotations regarding NGINX in the Ingress object, so I don't think that rewrites are enabled. Unless it always normalizes the URI, in which case, is that the desired behaviour?
@polycaster Could you possibly share the Ingress resource? Could you also share the NGINX Ingress Controller access logs for the problematic requests?
That is not the behaviour I am observing. Here's a portion of the logs after sending a whole bunch of those requests:
Ingress configuration:
As you can see, no mention of rewrites.
@polycaster thx for the additional details. Unfortunately, I'm not able to reproduce this behavior. Have you made any additional customizations of the IC that affect the generated configuration? For example, a customization of the ConfigMap resource. Additionally, could you possibly double-check if there is any problem with the backend application?
The configmap contains the following entries:
Not sure if any of them are relevant to the issue we are seeing. Successful request: If it is not possible to find a satisfactory solution to this issue for my case, I might attempt to fix it with the Stack Overflow post you mentioned above, but I'd rather that be a last-resort situation. Not sure if it helps, but here's the generated configuration from within the NGINX container:
@polycaster unfortunately, even with the additional configuration from the configmap, I am still not able to reproduce the issue. Could you possibly confirm that the problematic requests don't work when the client connects directly to the backend, bypassing the Ingress Controller? Note that for testing I used this image. Could you possibly test the requests with a different backend, such as the
Hi @pleshakov, I'd like to thank you for your effort on this. After you suggested that you couldn't reproduce the behaviour, I dug a bit more and sent the requests directly to the service pods from within the cluster. Apparently, the culprit is the gRPC gateway downstream, and we seem to have been bitten by this long-running issue: grpc-ecosystem/grpc-gateway#224 The solution is upgrading gRPC-Gateway to a release where this issue has been fixed. Thank you once again.
@polycaster np!
Describe the bug
During infrastructure auditing it came up that URL encoded strings in the URI didn't match expected behaviour.
So, for example, the following request
DELETE example.endpoint/v1/workflows/%3c%2ffoo%3e
would return a 404 from NGINX, while
DELETE example.endpoint/v1/workflows/%3C)foo%3E%20
would result in the expected behaviour, where the request gets forwarded to the backend and handled there. If this is done with a URL encoded ":", it attempts the protocol redirect. These were the only two offending characters that I found, presumably because they have special meanings.
I'd appreciate if you could tell me if this is a bug, a feature or a misconfiguration.
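For context on why exactly these two characters misbehave, here is a small Python sketch (not part of the original report; it assumes standard percent-decoding, which is what NGINX applies during URI normalization) showing what the problematic path segment decodes to:

```python
# Demonstrate what the URL-encoded path segment from the report
# decodes to under standard percent-decoding.
from urllib.parse import unquote

encoded = "/v1/workflows/%3c%2ffoo%3e"
decoded = unquote(encoded)
print(decoded)  # -> /v1/workflows/</foo>
# The decoded %2f is a literal "/", which changes the path structure
# that location matching sees; %3a (":") can similarly be read as a
# scheme separator, which would explain the protocol redirect.
```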
To Reproduce
Steps to reproduce the behavior: send requests whose URIs contain URL encoded "/" or ":" characters.
Expected behavior
I would expect that these URL encoded characters would get passed to the backend to deal with.
Your environment
nginx/nginx-ingress:1.4.3-alpine (although this can be seen on clusters with K8s server versions v1.12.7-gke.10 and v1.9.2+coreos.0)
Kubernetes platforms: GKE, Managed and EKS
Using NGINX
Thank you very much for your time.