Not listening on 443 with dynamic certificates enabled #3910

Closed
jbotelho2-bb opened this issue Mar 19, 2019 · 11 comments

@jbotelho2-bb

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.):

What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.): dynamic certificates


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

NGINX Ingress controller version: 0.23.0

Kubernetes version (use kubectl version): 1.12.3

Environment:

  • Cloud provider or hardware configuration: OpenStack
  • OS (e.g. from /etc/os-release): Ubuntu 16.04
  • Kernel (e.g. uname -a): Linux 4.15.0-39-generic

What happened:
When adding the --enable-dynamic-certificates=true flag, nginx stops listening on 443. The flag seems to cause the Lua certificate module to be loaded, and the SSL secrets seem to be ingested, but no listen 443 directives are added to nginx.conf. Attempting to hit nginx over HTTPS on port 443 results in connection refused.
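
For example (an illustrative check from a cluster node; the address is a placeholder):

curl -vk https://<node-ip>:443/
# curl: (7) Failed to connect to <node-ip> port 443: Connection refused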

What you expected to happen:
I expected requests to be accepted on port 443.

How to reproduce it (as minimally and precisely as possible):

  1. Deploy the ingress controller (DaemonSet provided below)
  2. Set up a Secret with a valid TLS cert and key (example provided below)
  3. Create an Ingress with TLS enabled (example provided below)
# Ingress Controller
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-default
  namespace: my-addons
spec:
  selector:
    matchLabels:
      name: nginx-ingress-default
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: nginx-ingress-default
      name: nginx-ingress-default
      namespace: my-addons
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-backend-default
        - --enable-ssl-chain-completion=false
        - --logtostderr
        - --configmap=$(POD_NAMESPACE)/nginx-ingress-config
        - --default-ssl-certificate=$(POD_NAMESPACE)/default-tls
        - --enable-dynamic-certificates=true
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 80
            scheme: HTTP
          initialDelaySeconds: 90
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        name: nginx-ingress-default
        ports:
        - containerPort: 443
          hostPort: 443
          protocol: TCP
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 80
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 60
      serviceAccountName: nginx-ingress-serviceaccount
      terminationGracePeriodSeconds: 60

# Example TLS Secret
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: test-tls
  namespace: examples
data:
  tls.crt: # <base64-encoded cert>
  tls.key: # <base64-encoded key>

# Example Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: examples
spec:
  rules:
  - host: aaa-test.example.com
    http:
      paths:
      - backend:
          serviceName: test-svc
          servicePort: 3000
  tls:
  - hosts:
    - aaa-test.example.com
    secretName: test-tls

Anything else we need to know:

@ElvinEfendi
Member

@jbotelho2-bb this is super strange; --enable-dynamic-certificates should have nothing to do with listening on HTTPS. Can you share your configmap and the generated Nginx configuration?
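
One way to grab the generated configuration (the pod name is whatever your controller pod is called):

kubectl -n my-addons exec <ingress-pod> -- cat /etc/nginx/nginx.conf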

@aledbf
Member

aledbf commented Mar 19, 2019

@jbotelho2-bb and please also share the logs from the ingress-nginx pod.

@jbotelho2-bb
Author

jbotelho2-bb commented Mar 20, 2019

This is the ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-config
  namespace: my-addons
data:
  enable-vts-status: "true"
  hsts: "false"
  proxy-read-timeout: "120"
  server-name-hash-bucket-size: "128"
  server-name-hash-max-size: "1024"
  ssl-redirect: "false"

I'm having a hard time reproducing this on our smaller test cluster, so I've taken snippets from the log and config of one of our larger clusters instead of posting the entire thing:

Log:

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.23.0
  Build:      git-be1329b22
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------

W0319 16:19:36.379309       8 flags.go:213] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: nginx/1.15.9
W0319 16:19:36.388257       8 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0319 16:19:36.389576       8 main.go:200] Creating API client for https://198.19.0.1:443
I0319 16:19:36.406027       8 main.go:244] Running in Kubernetes cluster version v1.12 (v1.12.3) - git (clean) commit 435f92c719f279a3a67808c80521ea17d5715c66 - platform linux/amd64
I0319 16:19:36.410047       8 main.go:102] Validated my-addons/default-backend-default as the default backend.
I0319 16:19:37.002129       8 nginx.go:261] Starting NGINX Ingress controller
I0319 16:19:37.035911       8 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"my-addons", Name:"nginx-ingress-config", UID:"b85d9e16-03d3-11e9-b35d-fa163eca90bb", APIVersion:"v1", ResourceVersion:"17019", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap my-addons/nginx-ingress-config
I0319 16:19:38.122024       8 backend_ssl.go:68] Adding Secret "my-addons/default-tls" to the local store
I0319 16:19:39.329695       8 backend_ssl.go:68] Adding Secret "examples/test-tls" to the local store
I0319 16:19:39.330420       8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"examples", Name:"test-ingress", UID:"c14ee20f-1440-11e9-b35d-fa163eca90bb", APIVersion:"extensions/v1beta1", ResourceVersion:"16832298", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress examples/test-ingress
I0319 16:19:39.340534       8 store.go:348] ignoring add for ingress test-ingress-other based on annotation kubernetes.io/ingress.class with value 
...
W0319 16:20:02.296931       8 controller.go:846] Service "examples/test-svc2" does not have any active Endpoint.
W0319 16:20:02.300582       8 controller.go:773] Error obtaining Endpoints for Service "examples/test-svc3": no object matching key "examples/test-svc3" in local store
...
W0319 16:20:02.306865       8 controller.go:1116] Unexpected error validating SSL certificate "examples/test-svc4" for server "svc4.test": x509: certificate is valid for *.example.com, example.com, not svc4.test
W0319 16:20:02.306913       8 controller.go:1117] Validating certificate against DNS names. This will be deprecated in a future version.
W0319 16:20:02.306949       8 controller.go:1122] SSL certificate "examples/test-svc4" does not contain a Common Name or Subject Alternative Name for server "svc4.test": x509: certificate is valid for *.example.com, example.com, not svc4.test
W0319 16:20:02.306975       8 controller.go:1124] Using default certificate
...
I0319 16:19:41.209323       8 controller.go:190] Backend successfully reloaded.
I0319 16:19:39.412106       8 nginx.go:282] Starting NGINX process
I0319 16:19:39.413337       8 leaderelection.go:205] attempting to acquire leader lease  stacker-addons/ingress-controller-leader-nginx...
I0319 16:19:39.416490       8 controller.go:172] Configuration changes detected, backend reload required.
I0319 16:19:39.419046       8 status.go:148] new leader elected: nginx-ingress-default-x29bl
...
Generated nginx.conf (most ingresses/servers removed):
  # Configuration checksum: 18066534385402276691

  # setup custom paths that do not require root access
  pid /tmp/nginx.pid;

  load_module /etc/nginx/modules/ngx_http_modsecurity_module.so;

  daemon off;

  worker_processes 1;

  worker_rlimit_nofile 64512;

  worker_shutdown_timeout 10s ;

  events {
      multi_accept        on;
      worker_connections  16384;
      use                 epoll;
  }

  http {
      lua_package_cpath "/usr/local/lib/lua/?.so;/usr/lib/lua-platform-path/lua/5.1/?.so;;";
      lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;/usr/local/lib/lua/?.lua;;";
      
      lua_shared_dict configuration_data 5M;
      lua_shared_dict certificate_data 16M;
      
      init_by_lua_block {
          require("resty.core")
          collectgarbage("collect")
          
          local lua_resty_waf = require("resty.waf")
          lua_resty_waf.init()
          
          -- init modules
          local ok, res
          
          ok, res = pcall(require, "lua_ingress")
          if not ok then
          error("require failed: " .. tostring(res))
          else
          lua_ingress = res
          end
          
          ok, res = pcall(require, "configuration")
          if not ok then
          error("require failed: " .. tostring(res))
          else
          configuration = res
      configuration.nameservers = { "169.254.0.2", "10.10.10.10", "1.76.126.1" }
          end
          
          ok, res = pcall(require, "balancer")
          if not ok then
          error("require failed: " .. tostring(res))
          else
          balancer = res
          end
          
          ok, res = pcall(require, "monitor")
          if not ok then
          error("require failed: " .. tostring(res))
          else
          monitor = res
          end
          
          ok, res = pcall(require, "certificate")
          if not ok then
          error("require failed: " .. tostring(res))
          else
          certificate = res
          end
          
      }
      
      init_worker_by_lua_block {
          lua_ingress.init_worker()
          balancer.init_worker()
          
          monitor.init_worker()
          
      }
      
      geoip_country       /etc/nginx/geoip/GeoIP.dat;
      geoip_city          /etc/nginx/geoip/GeoLiteCity.dat;
      geoip_org           /etc/nginx/geoip/GeoIPASNum.dat;
      geoip_proxy_recursive on;
      
      aio                 threads;
      aio_write           on;
      
      tcp_nopush          on;
      tcp_nodelay         on;
      
      log_subrequest      on;
      
      reset_timedout_connection on;
      
      keepalive_timeout  75s;
      keepalive_requests 100;
      
      client_body_temp_path           /tmp/client-body;
      fastcgi_temp_path               /tmp/fastcgi-temp;
      proxy_temp_path                 /tmp/proxy-temp;
      ajp_temp_path                   /tmp/ajp-temp;
      
      client_header_buffer_size       1k;
      client_header_timeout           60s;
      large_client_header_buffers     4 8k;
      client_body_buffer_size         8k;
      client_body_timeout             60s;
      
      http2_max_field_size            4k;
      http2_max_header_size           16k;
      http2_max_requests              1000;
      
      types_hash_max_size             2048;
      server_names_hash_max_size      32768;
      server_names_hash_bucket_size   128;
      map_hash_bucket_size            64;
      
      proxy_headers_hash_max_size     512;
      proxy_headers_hash_bucket_size  64;
      
      variables_hash_bucket_size      128;
      variables_hash_max_size         2048;
      
      underscores_in_headers          off;
      ignore_invalid_headers          on;
      
      limit_req_status                503;
      limit_conn_status               503;
      
      include /etc/nginx/mime.types;
      default_type text/html;
      
      gzip on;
      gzip_comp_level 5;
      gzip_http_version 1.1;
      gzip_min_length 256;
      gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
      gzip_proxied any;
      gzip_vary on;
      
      # Custom headers for response
      
      server_tokens on;
      
      # disable warnings
      uninitialized_variable_warn off;
      
      # Additional available variables:
      # $namespace
      # $ingress_name
      # $service_name
      # $service_port
      log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';
      
      map $request_uri $loggable {
          
          default 1;
      }
      
      access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;
      
      error_log  /var/log/nginx/error.log notice;
      
      resolver 169.254.0.2 10.10.10.10 1.76.126.1 valid=30s ipv6=off;
      
      # See https://www.nginx.com/blog/websocket-nginx
      map $http_upgrade $connection_upgrade {
          default          upgrade;
          
          # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
          ''               '';
          
      }
      
      # The following is a sneaky way to do "set $the_real_ip $remote_addr"
      # Needed because using set is not allowed outside server blocks.
      map '' $the_real_ip {
          
          default          $remote_addr;
          
      }
      
      map '' $pass_access_scheme {
          default          $scheme;
      }
      
      map '' $pass_server_port {
          default          $server_port;
      }
      
      # Obtain best http host
      map $http_host $best_http_host {
          default          $http_host;
          ''               $host;
      }
      
      # validate $pass_access_scheme and $scheme are http to force a redirect
      map "$scheme:$pass_access_scheme" $redirect_to_https {
          default          0;
          "http:http"      1;
          "https:http"     1;
      }
      
      map $pass_server_port $pass_port {
          443              443;
          default          $pass_server_port;
      }
      
      # Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
      # If no such header is provided, it can provide a random value.
      map $http_x_request_id $req_id {
          default   $http_x_request_id;
          
          ""        $request_id;
          
      }
      
      # Create a variable that contains the literal $ character.
      # This works because the geo module will not resolve variables.
      geo $literal_dollar {
          default "$";
      }
      
      server_name_in_redirect off;
      port_in_redirect        off;
      
      ssl_protocols TLSv1.2;
      
      # turn on session caching to drastically improve performance
      
      ssl_session_cache builtin:1000 shared:SSL:10m;
      ssl_session_timeout 10m;
      
      # allow configuring ssl session tickets
      ssl_session_tickets on;
      
      # slightly reduce the time-to-first-byte
      ssl_buffer_size 4k;
      
      # allow configuring custom ssl ciphers
      ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
      ssl_prefer_server_ciphers on;
      
      ssl_ecdh_curve auto;
      
      proxy_ssl_session_reuse on;
      
      upstream upstream_balancer {
          server 0.0.0.1; # placeholder
          
          balancer_by_lua_block {
              balancer.balance()
          }
          
          keepalive 32;
          
          keepalive_timeout  60s;
          keepalive_requests 100;
          
      }
      
      # Global filters
      
      ## start server _
      server {
          server_name _ ;
          
          listen 80 default_server reuseport backlog=511;
          
          set $proxy_upstream_name "-";
          
          location / {
              
              set $namespace      "";
              set $ingress_name   "";
              set $service_name   "";
              set $service_port   "0";
              set $location_path  "/";
              
              rewrite_by_lua_block {
                  balancer.rewrite()
              }
              
              header_filter_by_lua_block {
                  
              }
              body_filter_by_lua_block {
                  
              }
              
              log_by_lua_block {
                  
                  balancer.log()
                  
                  monitor.call()
                  
              }
              
              access_log off;
              
              port_in_redirect off;
              
              set $proxy_upstream_name    "upstream-default-backend";
              set $proxy_host             $proxy_upstream_name;
              
              client_max_body_size                    1m;
              
              proxy_set_header Host                   $best_http_host;
              
              # Pass the extracted client certificate to the backend
              
              # Allow websocket connections
              proxy_set_header                        Upgrade           $http_upgrade;
              
              proxy_set_header                        Connection        $connection_upgrade;
              
              proxy_set_header X-Request-ID           $req_id;
              proxy_set_header X-Real-IP              $the_real_ip;
              
              proxy_set_header X-Forwarded-For        $the_real_ip;
              
              proxy_set_header X-Forwarded-Host       $best_http_host;
              proxy_set_header X-Forwarded-Port       $pass_port;
              proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
              
              proxy_set_header X-Original-URI         $request_uri;
              
              proxy_set_header X-Scheme               $pass_access_scheme;
              
              # Pass the original X-Forwarded-For
              proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
              
              # mitigate HTTPoxy Vulnerability
              # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
              proxy_set_header Proxy                  "";
              
              # Custom headers to proxied server
              
              proxy_connect_timeout                   5s;
              proxy_send_timeout                      60s;
              proxy_read_timeout                      120s;
              
              proxy_buffering                         off;
              proxy_buffer_size                       4k;
              proxy_buffers                           4 4k;
              proxy_request_buffering                 on;
              
              proxy_http_version                      1.1;
              
              proxy_cookie_domain                     off;
              proxy_cookie_path                       off;
              
              # In case of errors try the next upstream server before returning an error
              proxy_next_upstream                     error timeout;
              proxy_next_upstream_tries               3;
              
              proxy_pass http://upstream_balancer;
              
              proxy_redirect                          off;
              
          }
          
          # health checks in cloud providers require the use of port 80
          location /healthz {
              
              access_log off;
              return 200;
          }
          
          # this is required to avoid error if nginx is being monitored
          # with an external software (like sysdig)
          location /nginx_status {
              
              allow 127.0.0.1;
              
              deny all;
              
              access_log off;
              stub_status on;
          }
          
      }
      ## end server _
      
      ## start server aaa-test.example.com
      server {
          server_name aaa-test.example.com ;
          
          listen 80;
          
          set $proxy_upstream_name "-";
          
          location / {
              
              set $namespace      "aaa-example";
              set $ingress_name   "domain-20745";
              set $service_name   "my-app-web";
              set $service_port   "3000";
              set $location_path  "/";
              
              rewrite_by_lua_block {
                  balancer.rewrite()
              }
              
              header_filter_by_lua_block {
                  
              }
              body_filter_by_lua_block {
                  
              }
              
              log_by_lua_block {
                  
                  balancer.log()
                  
                  monitor.call()
                  
              }
              
              port_in_redirect off;
              
              set $proxy_upstream_name    "aaa-example-my-app-web-3000";
              set $proxy_host             $proxy_upstream_name;
              
              client_max_body_size                    32m;
              
              proxy_set_header Host                   $best_http_host;
              
              # Pass the extracted client certificate to the backend
              
              # Allow websocket connections
              proxy_set_header                        Upgrade           $http_upgrade;
              
              proxy_set_header                        Connection        $connection_upgrade;
              
              proxy_set_header X-Request-ID           $req_id;
              proxy_set_header X-Real-IP              $the_real_ip;
              
              proxy_set_header X-Forwarded-For        $the_real_ip;
              
              proxy_set_header X-Forwarded-Host       $best_http_host;
              proxy_set_header X-Forwarded-Port       $pass_port;
              proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
              
              proxy_set_header X-Original-URI         $request_uri;
              
              proxy_set_header X-Scheme               $pass_access_scheme;
              
              # Pass the original X-Forwarded-For
              proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
              
              # mitigate HTTPoxy Vulnerability
              # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
              proxy_set_header Proxy                  "";
              
              # Custom headers to proxied server
              
              proxy_connect_timeout                   5s;
              proxy_send_timeout                      60s;
              proxy_read_timeout                      120s;
              
              proxy_buffering                         off;
              proxy_buffer_size                       4k;
              proxy_buffers                           4 4k;
              proxy_request_buffering                 on;
              
              proxy_http_version                      1.1;
              
              proxy_cookie_domain                     off;
              proxy_cookie_path                       off;
              
              # In case of errors try the next upstream server before returning an error
              proxy_next_upstream                     error timeout;
              proxy_next_upstream_tries               3;
              
              proxy_pass http://upstream_balancer;
              
              proxy_redirect                          off;
              
          }
          
      }
      ## end server aaa-test.example.com

      ## ...

  # backend for when default-backend-service is not configured or it does not have endpoints
      server {
          listen 8181 default_server reuseport backlog=511;
          
          set $proxy_upstream_name "internal";
          
          access_log off;
          
          location / {
              return 404;
          }
      }
      
      # default server, used for NGINX healthcheck and access to nginx stats
      server {
          listen unix:/tmp/nginx-status-server.sock;
          set $proxy_upstream_name "internal";
          
          keepalive_timeout 0;
          gzip off;
          
          access_log off;
          
          location /healthz {
              return 200;
          }
          
          location /is-dynamic-lb-initialized {
              content_by_lua_block {
                  local configuration = require("configuration")
                  local backend_data = configuration.get_backends_data()
                  if not backend_data then
                  ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
                  return
                  end
                  
                  ngx.say("OK")
                  ngx.exit(ngx.HTTP_OK)
              }
          }
          
          location /nginx_status {
              stub_status on;
          }
          
          location /configuration {
              # this should be equals to configuration_data dict
              client_max_body_size                    10m;
              client_body_buffer_size                 10m;
              proxy_buffering                         off;
              
              content_by_lua_block {
                  configuration.call()
              }
          }
          
          location / {
              content_by_lua_block {
                  ngx.exit(ngx.HTTP_NOT_FOUND)
              }
          }
      }
  }

  stream {
      lua_package_cpath "/usr/local/lib/lua/?.so;/usr/lib/lua-platform-path/lua/5.1/?.so;;";
      lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;/usr/local/lib/lua/?.lua;;";
      
      lua_shared_dict tcp_udp_configuration_data 5M;
      
      init_by_lua_block {
          require("resty.core")
          collectgarbage("collect")
          
          -- init modules
          local ok, res
          
          ok, res = pcall(require, "configuration")
          if not ok then
          error("require failed: " .. tostring(res))
          else
          configuration = res
      configuration.nameservers = { "169.254.0.2", "10.10.10.10", "1.76.126.1" }
          end
          
          ok, res = pcall(require, "tcp_udp_configuration")
          if not ok then
          error("require failed: " .. tostring(res))
          else
          tcp_udp_configuration = res
          end
          
          ok, res = pcall(require, "tcp_udp_balancer")
          if not ok then
          error("require failed: " .. tostring(res))
          else
          tcp_udp_balancer = res
          end
      }
      
      init_worker_by_lua_block {
          tcp_udp_balancer.init_worker()
      }
      
      lua_add_variable $proxy_upstream_name;
      
      log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
      
      access_log /var/log/nginx/access.log log_stream ;
      
      error_log  /var/log/nginx/error.log;
      
      upstream upstream_balancer {
          server 0.0.0.1:1234; # placeholder
          
          balancer_by_lua_block {
              tcp_udp_balancer.balance()
          }
      }
      
      server {
          listen unix:/tmp/ingress-stream.sock;
          
          content_by_lua_block {
              tcp_udp_configuration.call()
          }
      }
      
      # TCP services
      
      # UDP services
      
  }

I tried the same setup on a smaller cluster, and noticed it does have a listen 443 under server "_":

...
	## start server _
	server {
		server_name _ ;
		
		listen 80 default_server reuseport backlog=511;
		
		set $proxy_upstream_name "-";
		
		listen 443  default_server reuseport backlog=511 ssl http2;
...
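
A quick way to compare the two clusters (pod name is illustrative):

kubectl -n my-addons exec <ingress-pod> -- grep -n 'listen 443' /etc/nginx/nginx.conf

On the broken cluster this comes back empty; on the smaller cluster it shows the default_server line above.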

@aledbf
Member

aledbf commented Mar 20, 2019

@jbotelho2-bb the error is clear

W0319 16:20:02.306865 8 controller.go:1116] Unexpected error validating SSL certificate "examples/test-svc4" for server "svc4.test": x509: certificate is valid for *.example.com, example.com, not svc4.test

Please make sure you posted the right ingress in the first comment, because it doesn't match the log.

@jbotelho2-bb
Author

@aledbf That is the correct log; I was trying to give a representative sample of the errors and warnings showing up in our logs. The example I posted is configured correctly (named "test-ingress"), but I have a few other ingresses that are exhibiting errors like the ones above. In theory those should not block TLS from working, but I can try disabling them to narrow things down.

@jbotelho2-bb
Author

I'm noticing that $server.SSLCert.PemFileName is coming up empty for the "_" server on the controller instance that is refusing connections on 443, but it is populated in the config when 443 is working (on my smaller/simpler cluster). Maybe looking at what causes this to be empty would give a clue about what is happening?
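
To see how the template uses that field, the template shipped in the image can be grepped directly (pod name is illustrative):

kubectl -n my-addons exec <ingress-pod> -- grep -n 'SSLCert.PemFileName' /etc/nginx/template/nginx.tmpl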

@aledbf
Member

aledbf commented Mar 20, 2019

Maybe looking at what would cause this to be empty would give a clue about what is happening?

Yes, it means the SSL certificate referenced in the tls section, the host in the ingress rules, and the host in the tls section do not match.
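
One way to check which names a certificate actually covers (using the example secret from the first comment; requires openssl):

kubectl -n examples get secret test-tls -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'

Then compare that against the host in the ingress rules and in the tls section.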

@aledbf
Member

aledbf commented Mar 20, 2019

Please check the complete log to see if the error from my previous comment appears in more places.
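
For example (pod name is illustrative):

kubectl -n my-addons logs <ingress-pod> | grep 'validating SSL certificate'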

@qzio

qzio commented Apr 8, 2019

I had a similar issue when I upgraded to 0.24.0; adding --enable-dynamic-certificates=false solved it.
This is an nginx-ingress deployment whose ingresses, services, and certificates had been working since 0.17.0 without problems.

I see the same issue with 2 types of clusters:

  1. long-lived self-signed certificates issued by my own PKI, stored as TLS secrets in k8s
  2. cert-manager obtaining valid certificates through Let's Encrypt

I was surprised this dynamic certificates flag broke TLS, since I don't have anything special in any of my ingresses, services, configmaps, or deployments.
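
For anyone else hitting this, the workaround can also be applied as a JSON patch (names are from the example DaemonSet earlier in this thread; adjust to your deployment):

kubectl -n my-addons patch daemonset nginx-ingress-default --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-dynamic-certificates=false"}]'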

@towolf
Contributor

towolf commented Apr 10, 2019

For us HTTPS also broke with 0.24.0.

We generally don't use certs in namespaced secrets; instead we rely on a default cert with lots of SAN names (--default-ssl-certificate).

@aledbf
Member

aledbf commented Apr 10, 2019

Closing. Fixed in master #3990
We are going to release 0.24.1 to fix this issue
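
Once 0.24.1 is tagged, picking up the fix should be a plain image bump, e.g. against the DaemonSet from the first comment:

kubectl -n my-addons set image daemonset/nginx-ingress-default \
  nginx-ingress-default=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1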
