Editing a guild member (changing voice-channel) hangs indefinitely #2296

Open
Fyko opened this issue Nov 6, 2023 · 6 comments

Comments

Fyko commented Nov 6, 2023

Howdy folks! I'm running into this odd issue where, in certain contexts, performing a PATCH Guild Member request to change a member's voice channel causes the program to hang indefinitely. I cannot reproduce it outside of the sample I provided -- elsewhere, the request goes through without a hitch.

Reproducible Example

https://gist.github.com/Fyko/f1b6c53843074b4668f9d03097457fcb#file-readme-md
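For reference, the call in question boils down to something like the following minimal sketch (assuming twilight-http 0.15, where request builders can be awaited directly; the target voice-channel ID is a placeholder, and the guild/user IDs are taken from the trace below):

```rust
use std::{env, error::Error};

use twilight_http::Client;
use twilight_model::id::{
    marker::{ChannelMarker, GuildMarker, UserMarker},
    Id,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Guild and user IDs from the trace logs below; the voice channel is a placeholder.
    let guild_id: Id<GuildMarker> = Id::new(839188384752599071);
    let user_id: Id<UserMarker> = Id::new(492374435274162177);
    let channel_id: Id<ChannelMarker> = Id::new(123_456_789_012_345_678);

    let client = Client::new(env::var("DISCORD_TOKEN")?);

    // PATCH /guilds/{guild.id}/members/{user.id} with a `channel_id` body field,
    // i.e. move the member to the given voice channel. In the report, this
    // future never resolves.
    client
        .update_guild_member(guild_id, user_id)
        .channel_id(Some(channel_id))
        .await?;

    Ok(())
}
```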

Logs

chuckle_interactions::commands::breakout_rooms: moving user: Id<UserMarker>(492374435274162177)
twilight_http::client: url="https://discord.com/api/v10/guilds/839188384752599071/members/492374435274162177"
twilight_http_ratelimiting::in_memory: getting bucket for path: GuildsIdMembersId(839188384752599071)
twilight_http_ratelimiting::in_memory: making new bucket for path: GuildsIdMembersId(839188384752599071)
twilight_http_ratelimiting::in_memory::bucket: starting to get next in queue path=GuildsIdMembersId(839188384752599071)
twilight_http_ratelimiting::in_memory::bucket: starting to get next in queue path=InteractionCallback(1169351050944315512)
twilight_http_ratelimiting::in_memory::bucket: starting to wait for response headers
hyper::client::pool: reuse idle connection for ("https", discord.com)
hyper::proto::h1::io: flushed 404 bytes
twilight_gateway::shard: received dispatch event_type=VOICE_STATE_UPDATE sequence=28
Stream closed EOF for production/chuckle-5bfff454fb-rmh84 (chuckle) # this happens in prod, the pod dies

Trace logs: message.txt

Tokio Console

g7mv6z.mp4

Fyko commented Nov 6, 2023

(image attached)
hmm...

edit: nevermind! i think this has to do with an ongoing outage


Erk- commented Nov 7, 2023

I think only the 500 is related to the outage; the indefinite hang probably still happens.

Erk- reopened this Nov 7, 2023
@laralove143

can we reproduce this with other requests returning 5xx?


Fyko commented Nov 7, 2023

The 500 only happened because of the Discord outage. In production, it hangs indefinitely and kube eventually kills the pod.

As mentioned in the issue and the repro example, I have no problem executing the request outside of my repro example.
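Not part of the original report, but until the root cause is found, one possible mitigation is to bound the call with tokio's timeout so a stuck request surfaces as an error instead of wedging the task until the liveness probe kills the pod. A rough sketch, assuming tokio ≥ 1.22 (where `timeout` accepts any `IntoFuture`, which twilight 0.15 request builders implement):

```rust
use std::{error::Error, time::Duration};

use tokio::time::timeout;
use twilight_http::Client;
use twilight_model::id::{
    marker::{ChannelMarker, GuildMarker, UserMarker},
    Id,
};

/// Hypothetical wrapper: give up after 30 seconds instead of hanging forever.
async fn move_member_with_timeout(
    client: &Client,
    guild_id: Id<GuildMarker>,
    user_id: Id<UserMarker>,
    channel_id: Id<ChannelMarker>,
) -> Result<(), Box<dyn Error + Send + Sync>> {
    let request = client
        .update_guild_member(guild_id, user_id)
        .channel_id(Some(channel_id));

    match timeout(Duration::from_secs(30), request).await {
        // The request completed, successfully or with an HTTP error.
        Ok(response) => {
            response?;
            Ok(())
        }
        // The deadline elapsed before the future resolved.
        Err(_elapsed) => Err("edit guild member timed out after 30s".into()),
    }
}
```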

@laralove143

the problem is that it hangs, and it'd be useful to know whether it only hangs when it gets a 500 on the edit guild member endpoint or on any 500s, though the latter would mean it only sometimes hangs

for the edit guild member endpoint, does it always hang or has this only happened once? it may be hard to reproduce since we can't simulate an outage
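One way to approximate a 5xx locally without a real outage might be to point the client at a stub that answers every request with a 500, via `ClientBuilder::proxy`. A rough sketch, under the assumption that `proxy(url, true)` routes plain-HTTP requests to the given host and that the ratelimiter can be disabled so the missing ratelimit headers don't matter:

```rust
use std::{
    io::{Read, Write},
    net::TcpListener,
    thread,
};

use twilight_http::Client;
use twilight_model::id::Id;

#[tokio::main]
async fn main() {
    // Stub "Discord" that answers every request with a bare 500,
    // standing in for the outage.
    let listener = TcpListener::bind("127.0.0.1:3000").unwrap();
    thread::spawn(move || {
        for mut stream in listener.incoming().flatten() {
            let mut buf = [0_u8; 2048];
            let _ = stream.read(&mut buf); // discard whatever the client sent
            let _ = stream.write_all(
                b"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 0\r\n\r\n",
            );
        }
    });

    let client = Client::builder()
        .token("Bot unused".to_owned())
        .proxy("127.0.0.1:3000".to_owned(), true) // send requests to the stub over plain HTTP
        .ratelimiter(None) // the stub returns no ratelimit headers
        .build();

    // If the hang is triggered by any 5xx, this call should also never return.
    let result = client
        .update_guild_member(Id::new(1), Id::new(2))
        .channel_id(Some(Id::new(3)))
        .await;

    match result {
        Ok(_) => println!("got a response"),
        Err(source) => println!("request errored: {source}"),
    }
}
```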


Fyko commented Nov 8, 2023

it always hangs, both in production and when testing locally
