[v2] Wrong Peak Connections. Clients not disconnecting? #654

Closed

lucadegasperi opened this issue Jan 9, 2021 · 13 comments

@lucadegasperi

Hello, I'm running v2 beta 30 as a standalone app on Laravel Forge behind an Nginx proxy. The issue is that the dashboard reports far too many peak connections (2000+), and connections rise steadily after each weekly websockets:restart; it looks like clients aren't actually being disconnected. If I don't do the weekly websockets:restart, I get something similar to #623. My app has around 150 concurrent connections, not 2000.
Here's the Nginx proxy configuration; if any more files are needed, let me know.

location / {
    proxy_pass             http://127.0.0.1:6001;
    proxy_read_timeout     60;
    proxy_connect_timeout  60;
    proxy_redirect         off;

    # Allow the use of websockets
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

[Screenshot of the dashboard, 2021-01-09 at 19:53]

@balu-lt

balu-lt commented Jan 10, 2021

I’m having the same issue testing on Laravel Valet 😕

After opening multiple browser connections that subscribe to a private channel and then closing them, the dashboard's server activity log says the sockets are disconnecting (but they aren't actually closing). Only when manually stopping websockets:serve in the console do you see all those connections finally being closed. 🤷🏻‍♂️

@malanx

malanx commented Jan 13, 2021

@lucadegasperi @balu-lt are your issues consistent with #362?

@lucadegasperi
Author

@CodeJunkieio my issues are not related to presence channels, as I'm not using them. If I open multiple tabs in the same browser, the dashboard correctly reports the multiple connections and disconnections, but the peak connections in the chart keep going up (never down) and eventually I run into #623.

@malanx

malanx commented Jan 13, 2021

Did you have the same issue on v1?

@lucadegasperi
Author

> Did you have the same issue on v1?

Never tried it; I started straight with v2 as I couldn't get v1 to work for me.

@rennokki
Collaborator

Please note that starting with v2, you can send the process a SIGTERM or SIGINT signal when stopping it to avoid any issues related to stale channel members or stale stats.
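
For illustration, here is a minimal sketch of this kind of shutdown handling on a ReactPHP event loop (not the package's actual code; assumes react/event-loop and the pcntl extension, and the cleanup itself is only hinted at):

<?php

use React\EventLoop\Factory;

$loop = Factory::create();

// On SIGTERM/SIGINT: stop accepting new connections, disconnect the
// current ones, flush stats, then stop the loop.
foreach ([SIGTERM, SIGINT] as $signal) {
    $loop->addSignal($signal, function () use ($loop) {
        // hypothetical cleanup hook would run here before exiting
        $loop->stop();
    });
}

$loop->run();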

@lucadegasperi
Author

> Please note that starting with v2, you can send the process a SIGTERM or SIGINT signal when stopping it to avoid any issues related to stale channel members or stale stats.

What does this mean exactly? Should the connections be closed correctly by this package when a client actually disconnects?

@rennokki
Collaborator

Well, it's a bit tricky. If the process closes abruptly with a SIGKILL, for example, the Redis-stored presence channel members & statistics are not cleaned up. I have set up a handler for SIGTERM/SIGINT that stops accepting new connections and closes all current connections on the node, updating statistics & presence channel members as it disconnects them.

Of course, another way is to ping the connections and see if they pong back. If 120s pass and no pong came back, the connection is marked as disconnected, just as the Pusher protocol specifies. But there is an edge case where the node can be SIGKILL-ed within that 120s window, so you can't run the check at all, since the connections are no longer available.
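
As a rough illustration of that 120s rule (hypothetical names, not the package's actual method), the sweep would look something like this:

foreach ($connections as $connection) {
    // seconds since the connection last answered a ping
    $secondsSincePong = now()->diffInSeconds($connection->lastPongedAt);

    if ($secondsSincePong > 120) {
        // past the Pusher activity timeout: treat it as disconnected and
        // run the normal close path (stats, presence members, etc.)
        $connection->close();
    }
}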

@simonbuehler
Contributor

Hi @balu-lt and @lucadegasperi,

I still have the same issue with connections being logged as "disconnected" but actually being kept on the server. This is a local, no-Redis environment, and as soon as a user opens multiple tabs the issue appears. When ending the websockets:serve process, all stale connections do get closed, so there is definitely a bug in there.
Did you manage to fix this for your installs?

@lucadegasperi
Author

@simonbuehler unfortunately not, the problem still persists for me (I restart the websockets server every week via cron). I have neither the expertise nor the time to diagnose what's going on here.
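
For reference, that weekly restart can be expressed as a Laravel scheduler entry like this hypothetical one in app/Console/Kernel.php (the actual cron setup may differ):

use Illuminate\Console\Scheduling\Schedule;

protected function schedule(Schedule $schedule)
{
    // Restart the WebSocket server once a week to clear the stale
    // connections that accumulate until the underlying bug is fixed.
    $schedule->command('websockets:restart')->weekly();
}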

@simonbuehler
Contributor

@lucadegasperi you could try my fix in #696

@simonbuehler
Contributor

@rennokki

After making the removeObsoleteConnections loop work, I am dumping all connection ids, their last_ponged timestamps, and the difference in seconds inside the method.

Here is the issue: connection 565743258.454857306 is present, then closed, but still present in the next cycle!

"removeObsoleteConnections"
"788257947.980883018 - 2021-02-23 12:08:05 49"
"5805355.353951279 - 2021-02-23 12:08:11 43"
"565743258.454857306 - 2021-02-23 12:08:39 14"
Connection id 565743258.454857306 closed.
Connection id 788257947.980883018 sending message {"event":"log-message","channel":"private-websockets-dashboard-disconnected","data":{"type":"disconnected","time":"12:09:06","details":{"socketId":"565743258.454857306"}}}
Saving statistics...
"removeObsoleteConnections"
"788257947.980883018 - 2021-02-23 12:08:05 61"
"5805355.353951279 - 2021-02-23 12:08:11 55"
"565743258.454857306 - 2021-02-23 12:08:39 26"

This continues:

"removeObsoleteConnections"
"788257947.980883018 - 2021-02-23 12:15:00 25"
"5805355.353951279 - 2021-02-23 12:15:00 25"
"565743258.454857306 - 2021-02-23 12:08:39 406"   <-  should never be over 120

until the server is stopped:

^CClosing existing connections...
Connection id 788257947.980883018 closing.
Connection id 5805355.353951279 closing.
Connection id 565743258.454857306 closing.
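
In other words, the output is consistent with the close happening without the socket id ever leaving the local connection store (a hypothetical illustration of the symptom, not the actual fix):

// The socket gets closed once its pong age exceeds 120s...
if ($secondsSincePong > 120) {
    $connection->close();

    // ...but if the id is never dropped from the in-memory map, e.g.
    // unset($this->connections[$connection->socketId]);
    // the same id reappears on every later removeObsoleteConnections()
    // sweep with an ever-growing age, exactly as in the log above.
}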

@simonbuehler
Contributor

simonbuehler commented Mar 4, 2021

@lucadegasperi found the issue! See #708.
