
Upstream ExternalName services - proxy not working #1600

Closed
mooperd opened this issue Oct 26, 2017 · 16 comments · Fixed by #1605

Comments


mooperd commented Oct 26, 2017

I think I also have the same problem as #1332: I am seeing a 503 error in the nginx-controller log.

Here is the service which points to an AWS Elasticsearch available over https.

$ kubectl get service/external-elasticsearch-service -n dev-andrew-0 -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-10-26T13:45:34Z
  name: external-elasticsearch-service
  namespace: dev-andrew-0
  resourceVersion: "11253256"
  selfLink: /api/v1/namespaces/dev-andrew-0/services/external-elasticsearch-service
  uid: f0a89a6c-ba53-11e7-9ff5-024b8c1b2a04
spec:
  externalName: search-es.eu-west-1.es.amazonaws.com
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}

Here is my ingress config dumped from kubectl. I trimmed out some fields for readability:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  name: platform-frontend-dev-andrew-0
  namespace: dev-andrew-0
spec:
  rules:
  - host: dev-andrew.brickblock-dev.io
    http:
      paths:
      - backend:
          serviceName: external-elasticsearch-service
          servicePort: 443
        path: /

The nginx config from the ingress pod.

    server {
        server_name dev-andrew.foobar-dev.io;
        listen 80;
        listen [::]:80;
        set $proxy_upstream_name "-";

        vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;

        location / {
            set $proxy_upstream_name "dev-andrew-0-external-elasticsearch-service-443";

            port_in_redirect off;

            client_max_body_size                    "1m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   20s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_redirect                          off;
            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://dev-andrew-0-external-elasticsearch-service-443;
        }

    }

From the nginx pod the service does indeed seem to be available.

# curl -I http://dev-andrew-0-external-elasticsearch-service-443
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 340
Content-Type: application/json; charset=UTF-8
Connection: keep-alive

Here is an access log entry.

{
	"time": "2017-10-26T15:26:36+00:00",
	"status": "503",
	"remote_user": "-",
	"request_method": "GET",
	"server_protocol": "HTTP/1.1",
	"host": "dev-andrew.foobar-dev.io",
	"uri": "/",
	"request": "GET / HTTP/1.1",
	"request_id": "b0c5f68ae7d59f881d95211f57cbc05d",
	"args": "-",
	"request_length": "92",
	"request_time": "0.000",
	"bytes_sent": "390",
	"body_bytes_sent": "213",
	"http_referrer": "-",
	"http_user_agent": "curl/7.49.0",
	"upstream_addr": "127.0.0.1:8181",
	"upstream_response_length": "213",
	"upstream_response_time": "0.000",
	"upstream_status": "503",
	"proxy_upstream_name": "dev-andrew-0-external-elasticsearch-service-443",
	"proxy_protocol_addr": "",
	"proxy_add_x_forwarded_for": "100.96.22.0"
}

aledbf commented Oct 26, 2017

@mooperd two things:

  1. The service needs a port:

     - protocol: TCP
       port: 80
       targetPort: 80

  2. Please check that the external name can be resolved. Your host search-es.eu-west-1.es.amazonaws.com does not return a valid IP.


mooperd commented Oct 26, 2017

Hey Manuel,

The service has been updated to:

apiVersion: v1
kind: Service
metadata:
  name: external-elasticsearch-service
  namespace: dev-andrew-0
spec:
  externalName: search-es-floopy-loop.eu-west-1.es.amazonaws.com #redacted
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}

I'm able to curl the upstream stated in the nginx conf (proxy_pass http://dev-andrew-0-external-elasticsearch-service-443;) from the controller:

# curl -I http://dev-andrew-0-external-elasticsearch-service-443
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 340
Content-Type: application/json; charset=UTF-8
Connection: keep-alive

Still getting 503s.

Cheers,

Andrew


aledbf commented Oct 26, 2017

@mooperd please update the image to quay.io/aledbf/nginx-ingress-controller:0.267.
Please post the logs if the issue persists.


mooperd commented Oct 26, 2017

@aledbf I'm afraid it persists. The only relevant logs that I can find are in the access log.

{
	"time": "2017-10-26T22:40:11+00:00",
	"status": "503",
	"remote_user": "-",
	"request_method": "GET",
	"server_protocol": "HTTP/1.1",
	"host": "dev-andrew.foobar-dev.io",
	"uri": "/",
	"request": "GET / HTTP/1.1",
	"request_id": "a73998b53945595975d269f12bff2bf6",
	"args": "-",
	"request_length": "92",
	"request_time": "0.000",
	"bytes_sent": "390",
	"body_bytes_sent": "213",
	"http_referrer": "-",
	"http_user_agent": "curl/7.49.0",
	"upstream_addr": "-",
	"upstream_response_length": "-",
	"upstream_response_time": "-",
	"upstream_status": "-",
	"proxy_upstream_name": "",
	"proxy_protocol_addr": "",
	"proxy_add_x_forwarded_for": "100.96.30.0"
}


aledbf commented Oct 26, 2017

@mooperd please check the DNS. This is my test and it works as expected:

$ kubectl get svc,ingress -n dev-andrew-0 -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: 2017-10-26T19:41:13Z
    name: external-elasticsearch-service
    namespace: dev-andrew-0
    resourceVersion: "15510"
    selfLink: /api/v1/namespaces/dev-andrew-0/services/external-elasticsearch-service
    uid: 9feef931-ba85-11e7-b39f-08002744e01c
  spec:
    externalName: www.intercambiojuegos.cl
    sessionAffinity: None
    type: ExternalName
  status:
    loadBalancer: {}
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
    creationTimestamp: 2017-10-26T19:42:13Z
    generation: 2
    name: platform-frontend-dev-andrew-0
    namespace: dev-andrew-0
    resourceVersion: "15563"
    selfLink: /apis/extensions/v1beta1/namespaces/dev-andrew-0/ingresses/platform-frontend-dev-andrew-0
    uid: c3a7ed95-ba85-11e7-b39f-08002744e01c
  spec:
    rules:
    - host: dev-andrew.brickblock-dev.io
      http:
        paths:
        - backend:
            serviceName: external-elasticsearch-service
            servicePort: 80
          path: /
  status:
    loadBalancer:
      ingress:
      - {}
kind: List
metadata: {}
resourceVersion: ""
selfLink: ""


mooperd commented Oct 27, 2017

Hi @aledbf. Upon switching the image, I saw that the proxy_pass directive in the nginx conf was replaced with:

            # No endpoints available for the request
            return 503;

Although this remained in the config:

            set $service_name   "external-elasticsearch-service";

The previously available upstream at http://dev-andrew-0-external-elasticsearch-service-443 is no longer resolvable. I have been swapping the images around, redeployed the controller and recreated the ExternalName service, but I am unable to bring it back.

With some poking I got the proxy_pass http://dev-andrew-0-external-elasticsearch-service-443; directive back. Things feel quite inconsistent here, as if something is misbehaving.

I'm unsure what I should be looking for in DNS, other than checking whether http://dev-andrew-0-external-elasticsearch-service-443 resolves.

ta


aledbf commented Oct 27, 2017

@mooperd are you connected to the kubernetes Slack channel?

chrismoos (Contributor) commented:

@aledbf Is this because services with externalName don't actually have any pods that are targeted (thus no endpoints)? If so, maybe annotating with service-upstream will allow it to work if there is a clusterIP assigned.

chrismoos (Contributor) commented:

@mooperd Try setting the ingress.kubernetes.io/service-upstream annotation with a true value on the ingress resource.
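For reference, the suggested key goes under the ingress annotations roughly like this (a sketch using the names from the configs earlier in this thread; note the ingress.kubernetes.io/ prefix on the annotation):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # annotation suggested above: route to the service's cluster IP
    # instead of the (empty) endpoint list
    ingress.kubernetes.io/service-upstream: "true"
    kubernetes.io/ingress.class: nginx
  name: platform-frontend-dev-andrew-0
  namespace: dev-andrew-0
```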


mooperd commented Oct 27, 2017

@chrismoos - That annotation did not seem to change anything.
I tried both ingress.kubernetes.io/service-upstream and kubernetes.io/service-upstream; I'm not sure whether there is any difference.

In the nginx conf I noticed the following, which looks a bit suspicious:

    upstream dev-andrew-0-external-elasticsearch-service-443 {
        # Load balance algorithm; empty for round robin, which is the default
        least_conn;
        server 127.0.0.1:8181 max_fails=0 fail_timeout=0;
    }
    # default server for services without endpoints
    server {
        listen 8181;
        set $proxy_upstream_name "-";

        location / {
            return 503;
        }
    }

For the record, the ingress now looks like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/service-upstream: "true"
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  name: platform-frontend-dev-andrew-0
  namespace: dev-andrew-0
spec:
  rules:
  - host: dev-andrew.brickblock-dev.io
    http:
      paths:
      - backend:
          serviceName: external-elasticsearch-service
          servicePort: 443
        path: /

The relevant service:

$ kubectl get svc -n dev-andrew-0
NAME                               TYPE           CLUSTER-IP   EXTERNAL-IP                                     PORT(S)   AGE
external-elasticsearch-service     ExternalName   <none>       search-es-redacted.eu-west-1.es.amazonaws.com   80/TCP    17h
external-elasticsearch-service-2   ExternalName   <none>       search-es-redacted.eu-west-1.es.amazonaws.com   <none>    7h

$ kubectl get svc/external-elasticsearch-service -n dev-andrew-0 -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-10-26T13:45:34Z
  name: external-elasticsearch-service
  namespace: dev-andrew-0
  resourceVersion: "11365204"
  selfLink: /api/v1/namespaces/dev-andrew-0/services/external-elasticsearch-service
  uid: f0a89a6c-ba53-11e7-9ff5-024b8c1b2a04
spec:
  externalName: search-es-redacted.eu-west-1.es.amazonaws.com
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}

Here is the full nginx.conf:

daemon off;

worker_processes 1;
pid /run/nginx.pid;

worker_rlimit_nofile 1047552;
events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {
    set_real_ip_from    0.0.0.0/0;
    real_ip_header      X-Forwarded-For;

    real_ip_recursive   on;

    geoip_country       /etc/nginx/GeoIP.dat;
    geoip_city          /etc/nginx/GeoLiteCity.dat;
    geoip_proxy_recursive on;

    vhost_traffic_status_zone shared:vhost_traffic_status:10m;
    vhost_traffic_status_filter_by_set_key $geoip_country_code country::*;

    # lua section to return proper error codes when custom pages are used
    lua_package_path '.?.lua;/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/lua-resty-http/lib/?.lua;';
    init_by_lua_block {
        require("error_page")
    }

    sendfile            on;
    aio                 threads;
    tcp_nopush          on;
    tcp_nodelay         on;

    log_subrequest      on;

    reset_timedout_connection on;

    keepalive_timeout  75s;
    keepalive_requests 100;

    client_header_buffer_size       1k;
    large_client_header_buffers     4 8k;
    client_body_buffer_size         8k;

    http2_max_field_size            4k;
    http2_max_header_size           16k;

    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   64;
    map_hash_bucket_size            64;

    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;

    variables_hash_bucket_size      64;
    variables_hash_max_size         2048;

    underscores_in_headers          off;
    ignore_invalid_headers          on;

    include /etc/nginx/mime.types;
    default_type text/html;
    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
    gzip_proxied any;

    # Custom headers for response

    server_tokens on;

    # disable warnings
    uninitialized_variable_warn off;

    log_format upstreaminfo '{"time":"$time_iso8601","status":"$status","remote_user":"$remote_user","request_method":"$request_method","server_protocol":"$server_protocol","host":"$host","uri":"$uri","request":"$request","request_id":"$request_id","args":"$args","request_length":"$request_length","request_time":"$request_time","bytes_sent":"$bytes_sent","body_bytes_sent":"$body_bytes_sent","http_referrer":"$http_referer","http_user_agent":"$http_user_agent","upstream_addr":"$upstream_addr","upstream_response_length":"$upstream_response_length","upstream_response_time":"$upstream_response_time","upstream_status":"$upstream_status","proxy_upstream_name":"$proxy_upstream_name","proxy_protocol_addr":"$proxy_protocol_addr","proxy_add_x_forwarded_for":"$proxy_add_x_forwarded_for"}';

    map $request_uri $loggable {
        default 1;
    }

    access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
    error_log  /var/log/nginx/error.log notice;

    resolver 100.64.0.10 valid=30s;

    # Retain the default nginx handling of requests without a "Connection" header
    map $http_upgrade $connection_upgrade {
        default          upgrade;
        ''               close;
    }

    # trust http_x_forwarded_proto headers correctly indicate ssl offloading
    map $http_x_forwarded_proto $pass_access_scheme {
        default          $http_x_forwarded_proto;
        ''               $scheme;
    }

    map $http_x_forwarded_port $pass_server_port {
       default           $http_x_forwarded_port;
       ''                $server_port;
    }

    map $http_x_forwarded_for $the_real_ip {
        default          $http_x_forwarded_for;
        ''               $remote_addr;
    }

    # map port 442 to 443 for header X-Forwarded-Port
    map $pass_server_port $pass_port {
        442              443;
        default          $pass_server_port;
    }

    # Map a response error watching the header Content-Type
    map $http_accept $httpAccept {
        default          html;
        application/json json;
        application/xml  xml;
        text/plain       text;
    }

    map $httpAccept $httpReturnType {
        default          text/html;
        json             application/json;
        xml              application/xml;
        text             text/plain;
    }

    # Obtain best http host
    map $http_host $this_host {
        default          $http_host;
        ''               $host;
    }

    map $http_x_forwarded_host $best_http_host {
        default          $http_x_forwarded_host;
        ''               $this_host;
    }

    server_name_in_redirect off;
    port_in_redirect        off;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    # turn on session caching to drastically improve performance
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;

    # allow configuring ssl session tickets
    ssl_session_tickets on;

    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;

    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    ssl_ecdh_curve secp384r1;

    proxy_ssl_session_reuse on;

    upstream upstream-default-backend {
        # Load balance algorithm; empty for round robin, which is the default
        least_conn;
        server 100.96.30.12:8080 max_fails=0 fail_timeout=0;
    }

    upstream dev-andrew-0-external-elasticsearch-service-443 {
        # Load balance algorithm; empty for round robin, which is the default
        least_conn;
        server 127.0.0.1:8181 max_fails=0 fail_timeout=0;
    }

    server {
        server_name _;
        listen 80 default_server reuseport backlog=511;
        listen [::]:80 default_server reuseport backlog=511;
        set $proxy_upstream_name "-";

        listen 442 proxy_protocol default_server reuseport backlog=511 ssl http2;
        listen [::]:442 proxy_protocol  default_server reuseport backlog=511 ssl http2;
        # PEM sha: 367856b451a05f063c074a257f9c0dae97c64702
        ssl_certificate                         /ingress-controller/ssl/default-fake-certificate.pem;
        ssl_certificate_key                     /ingress-controller/ssl/default-fake-certificate.pem;

        more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains;";

        vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;

        location / {
            set $proxy_upstream_name "upstream-default-backend";

            port_in_redirect off;

            client_max_body_size                    "1m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   20s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_redirect                          off;
            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://upstream-default-backend;
        }

        # health checks in cloud providers require the use of port 80
        location /healthz {
            access_log off;
            return 200;
        }

        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {
            allow 127.0.0.1;
            allow ::1;
            deny all;

            access_log off;
            stub_status on;
        }
    }

    server {
        server_name dev-andrew.foobar-dev.io;
        listen 80;
        listen [::]:80;
        set $proxy_upstream_name "-";

        vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;

        location / {
            set $proxy_upstream_name "dev-andrew-0-external-elasticsearch-service-443";

            port_in_redirect off;

            client_max_body_size                    "1m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   20s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_redirect                          off;
            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://dev-andrew-0-external-elasticsearch-service-443;
        }

    }
    # default server, used for NGINX healthcheck and access to nginx stats
    server {
        # Use the port 18080 (random value just to avoid known ports) as default port for nginx.
        # Changing this value requires a change in:
        # https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/nginx/command.go#L104
        listen 18080 default_server reuseport backlog=511;
        listen [::]:18080 default_server reuseport backlog=511;
        set $proxy_upstream_name "-";

        location /healthz {
            access_log off;
            return 200;
        }

        location /nginx_status {
            set $proxy_upstream_name "internal";

            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }

        # this location is used to extract nginx metrics
        # using prometheus.
        # TODO: enable extraction for vts module.
        location /internal_nginx_status {
            set $proxy_upstream_name "internal";

            allow 127.0.0.1;
            allow ::1;
            deny all;

            access_log off;
            stub_status on;
        }

        location / {
            set $proxy_upstream_name "upstream-default-backend";
            proxy_pass             http://upstream-default-backend;
        }

    }

    # default server for services without endpoints
    server {
        listen 8181;
        set $proxy_upstream_name "-";

        location / {
            return 503;
        }
    }
}

stream {
    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
    access_log /var/log/nginx/access.log log_stream;
    error_log  /var/log/nginx/error.log;
    # TCP services
    # UDP services
}

chrismoos (Contributor) commented:

I tested this and unfortunately it won't work out of the box with service-upstream either. The fix is to add support for externalName by customizing the proxy_pass directive to point at the DNS name directly, which is the best way to support a DNS name for the backend:

server {
    location / {
        set $backend_servers backends.example.com;
        proxy_pass http://$backend_servers:8080;
    }
}
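Pairing that with a resolver directive makes nginx re-resolve the name at runtime instead of caching it until reload; when proxy_pass contains a variable, nginx performs the lookup through the configured resolver and honors the valid= override. A sketch, with placeholder resolver address and hostname:

```nginx
http {
    # re-resolve at most every 30s instead of caching until the next reload
    resolver 100.64.0.10 valid=30s;

    server {
        location / {
            # using a variable forces a runtime DNS lookup via the resolver
            set $backend_servers backends.example.com;
            proxy_pass http://$backend_servers:8080;
        }
    }
}
```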


aledbf commented Oct 27, 2017

OK, first please update the image to quay.io/aledbf/nginx-ingress-controller:0.269. This image contains the current master and PR #1605.

This is the service:

  apiVersion: v1
  kind: Service
  metadata:
    name: external-elasticsearch-service
    namespace: dev-andrew-0
  spec:
    externalName: search-xxxxxxxxx.amazonaws.com
    type: ExternalName

and this is the ingress:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      ingress.kubernetes.io/secure-backends: "true"
      kubernetes.io/ingress.class: nginx
    name: platform-frontend-dev-andrew-0
    namespace: dev-andrew-0
  spec:
    rules:
    - host: xxxxxxxxxxxxx
      http:
        paths:
        - backend:
            serviceName: external-elasticsearch-service
            servicePort: 443
          path: /

The most important part here is the ingress.kubernetes.io/secure-backends: "true" annotation (the ExternalName target uses SSL).

This is the output:

curl -v http://192.168.99.100:30507 -H 'Host: xxxxxxxxxxxxx'
* Rebuilt URL to: http://192.168.99.100:30507/
*   Trying 192.168.99.100...
* TCP_NODELAY set
* Connected to 192.168.99.100 (192.168.99.100) port 30507 (#0)
> GET / HTTP/1.1
> Host: xxxxxxxxxxxxx
> User-Agent: curl/7.55.1
> Accept: */*
> 
< HTTP/1.1 403 Forbidden
< Server: nginx/1.13.6
< Date: Fri, 27 Oct 2017 17:01:12 GMT
< Content-Type: application/json
< Content-Length: 99
< Connection: keep-alive
< Access-Control-Allow-Origin: *
< x-amzn-RequestId: 6fb78426-bb38-11e7-813c-fba15c9040d5
< 
* Connection #0 to host 192.168.99.100 left intact
{"Message":"User: anonymous is not authorized to perform: es:ESHttpGet on resource: es-brick"}


mooperd commented Oct 27, 2017

Great! It's working!
Note: the 403 Forbidden response is coming from the external Elasticsearch and is a positive result.

chrismoos (Contributor) commented:

@aledbf Does your solution just put the external name in the upstream section as a new backend? If so, it has this drawback:

NGINX caches the DNS records until the next restart or configuration reload, ignoring the records’ TTL values.

More info here: https://www.nginx.com/blog/dns-service-discovery-nginx-plus/


aledbf commented Oct 27, 2017

@chrismoos please check the generated nginx.conf. The resolver caches DNS responses for only 30 seconds:
http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver

lukeplausin commented:

I am having this issue with chart version 3.34.0. I'm passing traffic to an ExternalName backend on port 443, and I can hit the service directly from the NGINX pod on that port without issues. I'm not sure exactly what the cause is, but in the NGINX logs the upstream URL is an IP address, and if I curl that IP address I get the same error as in the logs. It's not quite clear where the IP address comes from.

I've tried using these annotations:

nginx.ingress.kubernetes.io/proxy-ssl-name: foo.bar.net
nginx.ingress.kubernetes.io/proxy-ssl-server-name: foo.bar.net
ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTPS

Lots of these errors in the logs:

2021/07/06 11:31:03 [error] 656#656: *173113 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.28.4.1, server: foo.bar.net, request: "GET /baz/ HTTP/2.0", upstream: "https://1.2.3.4:443/", host: "foo.bar.net"
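That handshake error usually means nginx is speaking TLS to an endpoint that answers in plain HTTP (or to the wrong TLS endpoint). For recent ingress-nginx versions, a sketch of the TLS-to-backend annotation set looks roughly like this; note that secure-backends was superseded by backend-protocol, and proxy-ssl-server-name takes an on/off value rather than a hostname (foo.bar.net here is the placeholder name from the comment above):

```yaml
metadata:
  annotations:
    # tell the controller the backend speaks TLS
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # name to verify/present when connecting to the backend
    nginx.ingress.kubernetes.io/proxy-ssl-name: foo.bar.net
    # boolean: pass the server name via the SNI extension
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"
```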
