*Sometimes* `gix fetch` gets stuck in negotiation with ssh:// remotes (hosted by gitea) #1061
Comments
Thanks for reporting! From the error when hitting … it's too bad to hear that it's still not stable, but given the complexity of the protocol in conjunction with many kinds of transports, it's not entirely surprising either. It's strange that it's not always hanging though, which makes it appear like a deadlock based on the buffer sizes of some pipeline used to communicate. If SSH was trying to provide a response but … The reason it works after a `git fetch` is …

I think the best course of action would be to provide a public repository in its broken (local) state that will consistently let … From my experience, HTTPS works fine, but it's a stateless mode of operation which is different from the stateful SSH connection, which I basically never use myself (note that stateful connections are tested, just not through SSH but through bare TCP).

To repeat: ideally you can provide a local copy of a stuck public repository which I can use for reproduction and debugging.
Yes, I basically only use SSH for git. I'll see if I can reproduce it in a small repo and then upload the full repo (.git included?) as a test case. It might be really hard to reproduce, as most of the cases I remember this happening in were private repos... I'm sorry, this is such a flaky thing to reproduce. Thanks for all the help!
Yes, that would be optimal.
Thanks for helping me to make `gix` better. There is also a … . There is a light at the end of the tunnel though, as it's definitely planned to offer a built-in native … .
Using `gix --trace fetch`:

```
$ gix --trace fetch
^C 19:21:55 tracing INFO run [ 7.06s | 23.06% / 100.00% ] racing
19:21:55 tracing INFO ┝━ ThreadSafeRepository::discover() [ 10.2ms | 0.01% / 0.14% ]
19:21:55 tracing INFO │ ┕━ open_from_paths() [ 9.61ms | 0.03% / 0.14% ]
19:21:55 tracing INFO │ ┝━ gix_path::git::install_config_path() [ 7.43ms | 0.11% ]
19:21:55 tracing INFO │ ┕━ gix_odb::Store::at() [ 245µs | 0.00% ]
19:21:55 tracing DEBUG ┝━ 🐛 [debug]: gix_transport::SpawnProcessOnDemand | command: GIT_PROTOCOL="version=2" LANG="C" LC_ALL="C" "ssh" "-o" "SendEnv=GIT_PROTOCOL" "gitea@**censored**" "git-upload-pack" "\'jalil/**censored**.git\'"
19:21:55 tracing INFO ┕━ fetch::Prepare::receive() [ 5.42s | 0.00% / 76.80% ]
19:21:55 tracing INFO ┕━ negotiate [ 5.42s | 0.01% / 76.79% ]
19:21:55 tracing DEBUG ┝━ mark_complete_and_common_ref [ 1.52ms | 0.01% / 0.02% ] mappings: 1
19:21:55 tracing INFO │ ┝━ mark_all_refs [ 880µs | 0.01% ]
19:21:55 tracing DEBUG │ ┝━ mark_alternate_refs [ 1.18µs | 0.00% ] num_odb: 0
19:21:55 tracing INFO │ ┝━ mark known_common [ 2.48µs | 0.00% ]
19:21:55 tracing DEBUG │ ┕━ mark tips [ 2.77µs | 0.00% ] num_tips: 1
19:21:55 tracing DEBUG ┝━ negotiate round [ 5.42s | 76.77% ] round: 1
19:21:55 tracing DEBUG ┕━ negotiate round [ 90.7µs | 0.00% ] round: 2
Error: An IO error occurred when talking to the server
Caused by:
Broken pipe (os error 32)
```

Killing it after 7s or 1 min seems to make no difference to the trace output. I will make a backup of this repo in case you have a fix you'd like to test.
Thanks so much! I forgot that it's possible to interrupt and then shut down the application normally, showing the trace. We see that it hangs in round two, which probably means it blocks while sending, or it blocks while receiving a reply because the send side wasn't flushed, which would be a local problem. Since I pretty much trust negotiation by now, I'd think it might be something silly like a flush that wasn't performed.
Otherwise it might be that the write-end still isn't flushed, so the receiver didn't get the message it's waiting on, which would cause us to deadlock while waiting for a response from the remote.
I think I have got something! The gist is that I found a spot that would only work correctly in a stateless setting, i.e. with HTTP, as this would flush automatically, I presume. In stateful connections this wouldn't happen, which leaves data-to-be-sent in the pipe without it ever being seen by the remote, which then waits forever. Then we turn around and try to read the response to a message that was never sent, and wait forever as well. The fix should alleviate that, so I recommend trying a custom-built binary from after #1067 is merged to see if this fixes the issue. If you have any questions, please let me know - I'd love for this to be the fix.
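To illustrate the failure mode described above (this is a standalone sketch, not gitoxide's actual code), here is how an unflushed `BufWriter` over a stateful pipe leads to exactly this kind of hang; the `cat` child process stands in for the remote, which only answers once it has actually received the request:

```rust
use std::io::{BufRead, BufReader, BufWriter, Write};
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // `cat` plays the role of the stateful remote: it replies only after it
    // receives our request on stdin.
    let mut child = Command::new("cat")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;

    let mut to_remote = BufWriter::new(child.stdin.take().unwrap());
    let mut from_remote = BufReader::new(child.stdout.take().unwrap());

    to_remote.write_all(b"want deadbeef\n")?;
    // Without this flush the request sits in the local buffer: the "remote"
    // keeps waiting for input while we wait for its reply, i.e. a deadlock.
    // Over HTTP each request/response pair is sent as a whole, which is why a
    // missing flush never shows up in the stateless case.
    to_remote.flush()?;

    let mut reply = String::new();
    from_remote.read_line(&mut reply)?;
    println!("remote answered: {reply:?}");

    drop(to_remote); // closes stdin so `cat` terminates
    child.wait()?;
    Ok(())
}
```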
Seems like it still happens (the bad repo still gets stuck in negotiation, and the trace looks the same). But I am not sure I got the version from … . GDB shows this is the stack frame it is stuck on: …
I don't know how to enable debug symbols on NixOS. I'll look into that later.
Thanks for having taken a look! Could you verify that you see … ? It's good to see that it tries to … .

PS: You should be able to install with … .
The right command is … . I'll whip up a quick flake to fix this and get debug symbols.
Ah, right! Then the real right command is … :). That will definitely not need additional toolchains and is pure Rust.
That works, but still gets stuck in negotiation. This is the gdb stack trace (now with debug info!):
I think that's it!
I have no idea how to get gdb to debug threads; I'll see if I can figure it out later today.
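For reference, and not taken from this thread: the usual way to inspect every thread of a hanging process with gdb is roughly the following (the process name `gix` and the use of `pgrep` are assumptions about the setup):

```
$ gdb -p "$(pgrep -n gix)"    # attach to the most recently started gix process
(gdb) info threads            # list all threads
(gdb) thread apply all bt     # backtrace of every thread, to find the one that blocks
(gdb) detach
(gdb) quit
```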
I don't think this will be necessary, as I had a look at the code and it's made so that it most definitely can't deadlock. Can you validate that this is the latest version by checking that the … ?

In any case, you could try to set the protocol version to something else, as with … .

Thanks for all your help.
Something is really weird with the protocol version; the trace says the transport is trying to set version 2, but negotiate uses version 1:

```
$ gix --trace fetch
^C 09:32:15 tracing INFO run [ 7.26s | 20.98% / 100.00% ] racing
09:32:15 tracing INFO ┝━ ThreadSafeRepository::discover() [ 16.8ms | 0.01% / 0.23% ]
09:32:15 tracing INFO │ ┕━ open_from_paths() [ 15.8ms | 0.12% / 0.22% ]
09:32:15 tracing INFO │ ┝━ gix_path::git::install_config_path() [ 6.40ms | 0.09% ]
09:32:15 tracing INFO │ ┕━ gix_odb::Store::at() [ 510µs | 0.01% ]
09:32:15 tracing DEBUG ┝━ 🐛 [debug]: gix_transport::SpawnProcessOnDemand | command: GIT_PROTOCOL="version=2" LANG="C" LC_ALL="C" "ssh" "-o" "SendEnv=GIT_PROTOCOL" "gitea@**censored**" "git-upload-pack" "\'**censored**.git\'"
09:32:15 tracing INFO ┕━ fetch::Prepare::receive() [ 5.72s | 0.01% / 78.79% ]
09:32:15 tracing DEBUG ┕━ negotiate [ 5.72s | 0.02% / 78.78% ] protocol_version: 1
09:32:15 tracing DEBUG ┝━ mark_complete_and_common_ref [ 5.81ms | 0.05% / 0.08% ] mappings: 1
09:32:15 tracing INFO │ ┝━ mark_all_refs [ 2.01ms | 0.03% ]
09:32:15 tracing DEBUG │ ┝━ mark_alternate_refs [ 11.9µs | 0.00% ] num_odb: 0
09:32:15 tracing INFO │ ┝━ mark known_common [ 21.5µs | 0.00% ]
09:32:15 tracing DEBUG │ ┕━ mark tips [ 21.1µs | 0.00% ] num_tips: 1
09:32:15 tracing DEBUG ┝━ negotiate round [ 5.71s | 78.67% ] round: 1
09:32:15 tracing DEBUG ┕━ negotiate round [ 618µs | 0.01% ] round: 2
Error: An IO error occurred when talking to the server
Caused by:
Broken pipe (os error 32)
```

The same happens when I force protocol v2.
A bit more context on when it happens (I'm not 100% sure this is the pattern because it happens so infrequently): I have two computers with the same git repos (the ones that get stuck, from my self-hosted gitea instance). I have a preference for one PC, so I leave the other alone for a while. When I return to the other computer, the repos sometimes get stuck. So how I think the issue could be reproduced is: …

I'll see if I can reproduce it like that.
The protocol discrepancy is due to the server being able to downgrade the client. Setting the protocol version is merely a request, and it can be ignored. V2 is also the default, and … .

The issue also seems to happen if multiple negotiation rounds are needed, which is strange. Which reminds me: please do try different settings for the … .

I start to think it's something about the protocol that is wrong or unexpected that causes the probably custom … .

I think the ultimate answer will be a way to make the packetlines sent over the wire visible. Independently of that, if you could provide a public repo in a state that reproduces it on my side, that would definitely help as well.
Both … . Also, when setting an option with … .
Noop:

```
$ gix --trace -c fetch.negotiationAlgorithm=noop fetch
10:26:20 indexing done 1.2k objects in 0.09s (12.7k objects/s)
10:26:20 decompressing done 203.5KB in 0.09s (2.2MB/s)
10:26:20 Resolving done 1.2k objects in 0.05s (23.4k objects/s)
10:26:20 Decoding done 333.0KB in 0.05s (6.6MB/s)
10:26:20 writing index file done 34.2KB in 0.00s (63.8MB/s)
10:26:20 create index file done 1.2k objects in 0.15s (8.1k objects/s)
10:26:20 read pack done 179.2KB in 0.18s (1003.3KB/s)
10:26:20 tracing INFO run [ 1.64s | 85.64% / 100.00% ]
10:26:20 tracing INFO ┝━ ThreadSafeRepository::discover() [ 3.12ms | 0.01% / 0.19% ]
10:26:20 tracing INFO │ ┕━ open_from_paths() [ 3.00ms | 0.04% / 0.18% ]
10:26:20 tracing INFO │ ┝━ gix_path::git::install_config_path() [ 2.25ms | 0.14% ]
10:26:20 tracing INFO │ ┕━ gix_odb::Store::at() [ 68.6µs | 0.00% ]
10:26:20 tracing DEBUG ┝━ 🐛 [debug]: gix_transport::SpawnProcessOnDemand | command: GIT_PROTOCOL="version=2" LANG="C" LC_ALL="C" "ssh" "-o" "SendEnv=GIT_PROTOCOL" "gitea@**censored**" "git-upload-pack" "\'**censored**.git\'"
10:26:20 tracing INFO ┕━ fetch::Prepare::receive() [ 232ms | 0.02% / 14.17% ]
10:26:20 tracing INFO ┝━ negotiate [ 50.0ms | 0.03% / 3.05% ]
10:26:20 tracing DEBUG │ ┝━ mark_complete_and_common_ref [ 2.00ms | 0.05% / 0.12% ] mappings: 1
10:26:20 tracing INFO │ │ ┝━ mark_all_refs [ 1.25ms | 0.08% ]
10:26:20 tracing DEBUG │ │ ┝━ mark_alternate_refs [ 1.81µs | 0.00% ] num_odb: 0
10:26:20 tracing INFO │ │ ┝━ mark known_common [ 2.53µs | 0.00% ]
10:26:20 tracing DEBUG │ │ ┕━ mark tips [ 1.59µs | 0.00% ] num_tips: 1
10:26:20 tracing DEBUG │ ┕━ negotiate round [ 47.6ms | 2.90% ] round: 1
10:26:20 tracing INFO ┝━ gix_pack::Bundle::write_to_directory() [ 179ms | 10.94% ]
10:26:20 tracing DEBUG ┕━ update_refs() [ 2.60ms | 0.05% / 0.16% ] mappings: 1
10:26:20 tracing DEBUG ┕━ apply [ 1.71ms | 0.10% ] edits: 1
+refs/heads/*:refs/remotes/origin/*
0393a83903a10ac8c61112af371140bbc4e0dd75 refs/heads/main -> refs/remotes/origin/main [fast-forward]
pack file: "./.git/objects/pack/pack-cc62afd989d58966bbd3b54ff6ba124b1b89f2a9.pack"
index file: "./.git/objects/pack/pack-cc62afd989d58966bbd3b54ff6ba124b1b89f2a9.idx"
server sent 2 tips, 1 were filtered due to 1 refspec(s).
```

Skipping:

```
$ gix --trace -c fetch.negotiationAlgorithm=skipping fetch
10:28:14 indexing done 6.0 objects in 0.00s (15.6k objects/s)
10:28:14 decompressing done 1.8KB in 0.00s (4.6MB/s)
10:28:14 Resolving done 6.0 objects in 0.05s (117.0 objects/s)
10:28:14 Decoding done 1.8KB in 0.05s (36.0KB/s)
10:28:14 writing index file done 1.2KB in 0.00s (25.2MB/s)
10:28:14 create index file done 6.0 objects in 0.05s (116.0 objects/s)
10:28:14 read pack done 1.6KB in 0.05s (31.1KB/s)
10:28:14 tracing INFO run [ 1.10s | 88.69% / 100.00% ]
10:28:14 tracing INFO ┝━ ThreadSafeRepository::discover() [ 4.41ms | 0.03% / 0.40% ]
10:28:14 tracing INFO │ ┕━ open_from_paths() [ 4.09ms | 0.10% / 0.37% ]
10:28:14 tracing INFO │ ┝━ gix_path::git::install_config_path() [ 2.91ms | 0.26% ]
10:28:14 tracing INFO │ ┕━ gix_odb::Store::at() [ 104µs | 0.01% ]
10:28:14 tracing DEBUG ┝━ 🐛 [debug]: gix_transport::SpawnProcessOnDemand | command: GIT_PROTOCOL="version=2" LANG="C" LC_ALL="C" "ssh" "-o" "SendEnv=GIT_PROTOCOL" "gitea@**censored**" "git-upload-pack" "\'**censored**.git\'"
10:28:14 tracing INFO ┕━ fetch::Prepare::receive() [ 120ms | 0.02% / 10.91% ]
10:28:14 tracing INFO ┝━ negotiate [ 67.2ms | 0.04% / 6.08% ]
10:28:14 tracing DEBUG │ ┝━ mark_complete_and_common_ref [ 1.58ms | 0.06% / 0.14% ] mappings: 1
10:28:14 tracing INFO │ │ ┝━ mark_all_refs [ 881µs | 0.08% ]
10:28:14 tracing DEBUG │ │ ┝━ mark_alternate_refs [ 1.09µs | 0.00% ] num_odb: 0
10:28:14 tracing INFO │ │ ┝━ mark known_common [ 2.68µs | 0.00% ]
10:28:14 tracing DEBUG │ │ ┕━ mark tips [ 25.1µs | 0.00% ] num_tips: 1
10:28:14 tracing DEBUG │ ┕━ negotiate round [ 65.2ms | 5.90% ] round: 1
10:28:14 tracing INFO ┝━ gix_pack::Bundle::write_to_directory() [ 52.4ms | 4.74% ]
10:28:14 tracing DEBUG ┕━ update_refs() [ 725µs | 0.03% / 0.07% ] mappings: 1
10:28:14 tracing DEBUG ┕━ apply [ 438µs | 0.04% ] edits: 1
+refs/heads/*:refs/remotes/origin/*
0393a83903a10ac8c61112af371140bbc4e0dd75 refs/heads/main -> refs/remotes/origin/main [fast-forward]
pack file: "./.git/objects/pack/pack-8bf88c208d478c44cfea799314371755fee90a9e.pack"
index file: "./.git/objects/pack/pack-8bf88c208d478c44cfea799314371755fee90a9e.idx"
server sent 2 tips, 1 were filtered due to 1 refspec(s).
```
In theory, that's a feature, and it's intentionally lenient there. That makes it easy to change, and maybe it should change; if you want to change it to strict mode, please be my guest. I am looking into adding tracing support similar to … .
The repo is behind by just one commit, but I cannot reproduce it in Gitea's public git repo (haven't tried for long though). The dates of the last two commits:

Date: Thu Aug 24 15:43:58 2023 +0000 (origin has this one)
…
My preference is … .
Also, another question: which version of `git` is the server running?
From the code that I see, also on the … . If the version check doesn't lead anywhere, i.e. it seems new enough, then we'd have to host the server side locally with … .
It is fairly recent (2.40.1, though it might also be 2.38.4? I think that one is just a backup), and I have no idea how to update it (although I can probably figure something out).
Thanks, I'd think that's recent enough, no need to investigate further.

What I'd like to learn is why the protocol V2 request sent by the client isn't respected by the server; my guess is that SSH is locked down and won't allow setting any environment variables. Could that be? Is this something you can validate?

Of course, the avenue above aims at side-stepping the actual problem, which might not be what you want in the first place. It would be really helpful to see what … .

Going one step further down the rabbit hole, I saw that … .
This is the issue with the protocol version, just verified it:

```
$ export GIT_PROTOCOL=v2
$ echo $GIT_PROTOCOL
v2
$ ssh -o 'SendEnv GIT_PROTOCOL' **REDACTED**
Last login: **REDACTED*
$ echo $GIT_PROTOCOL
```
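A likely explanation, though it is an assumption and not something verified in this thread: OpenSSH's sshd only passes along client-supplied environment variables that are explicitly whitelisted, so unless the Gitea host's sshd configuration contains something like the excerpt below, the client's `SendEnv GIT_PROTOCOL` is silently dropped and the server falls back to the older protocol (which shows up as `protocol_version: 1` in the trace):

```
# /etc/ssh/sshd_config on the server (hypothetical excerpt)
AcceptEnv GIT_PROTOCOL
```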
Some …
Trying to reproduce this on gitea.com by forcing protocol version 1. There might be a minimum number of commits required to cause the issue, as it succeeds with 3 commits (2 in the test repo). Tested up to 19 commits, no luck reproducing the issue.
I tested this locally and figured that … . With this configuration change, you'd be able to side-step the issue. However, ideally there would be extended debug info for protocol V1 to learn what the problem actually is. For that, the local clone that shows the hanging issue would get a remote like … .
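The exact remote setup from the comment above wasn't preserved; as a hypothetical sketch (remote name and path are made up, and it assumes `gix` honours the standard `protocol.version` configuration, which the traces in this thread suggest it does), it might look like this:

```
$ git remote add server-copy file:///path/to/copy-of-server-repo.git
$ gix --trace -c protocol.version=1 fetch server-copy
```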
You could turn on packetline tracing to see what the difference is. I presume that in these cases, it only has one round of negotiation and sends the … .

I also have tests with multi-round negotiation for both stateless and stateful connections, for the V1 and V2 protocols, and it all works fine (by now, after fixing many hanging issues in the first place). This is what makes this issue so puzzling: the receiving code is written in such a way that it basically skips over all unnecessary chatter right to the pack, and the pack is expected after sending … .

Thanks again for your tremendous help. I'd love to squelch this bug, and then hopefully V1 hangs will be a thing of the past, for good, for real this time :D.
I am tempted to never do this so I can keep catching bugs in v1 c:
How do you go about requiring two negotiation rounds? I thought it would be after 16 commits, but it doesn't seem to be that, and I am not familiar with the git protocol. I'll try to figure out how to get the repo from the gitea server to debug locally. And thanks for your help! This whole process has been so pleasant <3
You might find it amusing that I also don't have a clear-cut way of achieving this. The test suite adapts a similar test from … . However, in theory it should be easy to get an even higher number of rounds by adding a remote that has nothing to do with the local repository. Then it should try to find common commits and just give up after sending 256 of them. Maybe… that's even the solution for me to reproduce this… . Indeed, I could easily get it to do a lot of rounds in V1 without SSH in the middle: …
With SSH, it's the same.
So there really, really seems to be something special about that specific repository state we are seeing :/.
It should be fine just to copy it, … . If both don't reproduce, one should validate that the interaction pattern truly is the same (and I'd expect that). Otherwise… I don't know.
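As a concrete illustration of the "unrelated remote" idea above (a sketch under assumptions: the remote name and URL are made up, and it assumes `gix fetch` accepts a remote name, like `git fetch` does):

```
# Any repository that shares no history with the local one will do.
$ git remote add unrelated https://example.com/some/unrelated-repo.git
$ gix --trace fetch unrelated    # negotiation should now need several rounds
```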
Thank you, that's great to hear, as I have a feeling the debugging won't be finished anytime soon 😅.
log of file:// url using v2 protocol
log of file:// url using v1 protocol
🎉! V1 of the protocol is very close, as it reproduces the issue perfectly while using a local … . I assume that you have exported … .

```diff
diff --git a/gix-transport/src/client/blocking_io/file.rs b/gix-transport/src/client/blocking_io/file.rs
index e99f79bc9..446992ff8 100644
--- a/gix-transport/src/client/blocking_io/file.rs
+++ b/gix-transport/src/client/blocking_io/file.rs
@@ -209,7 +209,7 @@ impl client::Transport for SpawnProcessOnDemand {
Cow::Owned(command.to_owned()),
),
None => (
- gix_command::prepare(service.as_str()).stderr(Stdio::null()),
+ gix_command::prepare(service.as_str()).stderr(Stdio::inherit()),
None,
Cow::Borrowed(OsStr::new(service.as_str())),
),
```

(This can be applied with … .) The output should contain everything that … , and that output will finally prove, once and for all, that … . Thanks again for your help!
Here is the log with … :

log
That's great :)! And it clearly shows that … .

In theory, it should be possible to try forcing a flush (just to prove that it works then) by flushing the underlying file descriptor directly. But doing that would be cumbersome in Rust, especially since it's just a test. If you are on Linux, maybe you could try to flush the descriptor while the process is hanging; one could just grab all open descriptors and flush them if it's unclear which one it is. If that works, it would be clear that it's a bug in the standard library, which ought to actually flush.
I tried attaching to … . Are you sure it's … ?
Using ChatGPT's Python script didn't fix the issue either.
Thanks a lot for trying! This means … . I am puzzled as to where the … , but that clearly doesn't happen. Maybe something else is happening here, somehow. What confuses me is that … .

In any case, the way I understand the code in … led me to this patch:

```diff
diff --git a/gix-protocol/src/fetch/response/blocking_io.rs b/gix-protocol/src/fetch/response/blocking_io.rs
index 309f5a7c5..d36a1a45f 100644
--- a/gix-protocol/src/fetch/response/blocking_io.rs
+++ b/gix-protocol/src/fetch/response/blocking_io.rs
@@ -85,7 +85,7 @@ impl Response {
assert_ne!(reader.readline_str(&mut line)?, 0, "consuming a peeked line works");
// When the server sends ready, we know there is going to be a pack so no need to stop early.
saw_ready |= matches!(acks.last(), Some(Acknowledgement::Ready));
- if let Some(Acknowledgement::Nak) = acks.last().filter(|_| !client_expects_pack && !saw_ready) {
+ if let Some(Acknowledgement::Nak) = acks.last().filter(|_| !client_expects_pack || !saw_ready) {
break 'lines false;
}
};
```

Can you try it with this patch? It passes the test suite, so that's a start (and it will hang if I butcher it too much).
That did it! Works both locally and through SSH!
The logic previously tried to estimate when a pack can be expected, and when a NAK is the end of a block, or the beginning of a pack. This can be known because a pack (with our settings) needs two things:

* the server thinks it's ready
* a `done` sent by the client

If the server doesn't think it's ready it will send NAK and be done. So the logic should be, for a NAK to stop the read-loop, that the client expects a pack, and the server is ready. If the client is not ready, or the server isn't ready, keep NAK and consider them the end of a round, hence break the loop.
That's incredible! A small change with a huge effect! I can now just hope that the test coverage is as good as I think, or else something else might break 😅 (at least the blast radius is limited to V1). Alright, the PR is in flight and I hope it will be smooth sailing from now on :).
Can you try once more from this PR? It contains adjustments to the logic to work with more test cases, and I can only hope that it also still covers your case.
Verified that … .
I skimmed through the code but couldn't ultimately figure out what kind of logging was used for the hierarchies printed by this command. It doesn't seem to be `tracing` or `log` as far as I can tell.
You are probably looking for this: … . It's … .
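As background (a generic sketch, not gitoxide's actual setup, and the tree-shaped renderer used for `gix --trace` is not the plain subscriber shown here): hierarchical timing output like the traces above is typically produced by span-based instrumentation with the `tracing` crate, roughly like this:

```rust
// Assumed dependencies: tracing = "0.1", tracing-subscriber = "0.3"
use tracing::{debug, info_span};

fn main() {
    // A plain fmt subscriber; a tree-rendering subscriber would print nested
    // spans with timings, similar to the output shown in this issue.
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::DEBUG)
        .init();

    let run = info_span!("run");
    let _run = run.enter();
    {
        // Entering a span starts its timer; fields become the trailing key-value pairs.
        let negotiate = info_span!("negotiate", protocol_version = 1);
        let _negotiate = negotiate.enter();
        debug!(round = 1, "negotiate round");
    }
}
```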
Current behavior 😯
This happens on some repos sporadically (using v0.29.0, but it has happened long before that).
When you run `gix fetch`, it is stuck in the negotiation phase forever(?). I tend to stop it after a few seconds, but I seem to remember it staying there for a few minutes.

Cancelling it with `CTRL+C` and rerunning the command causes the same behaviour.

Running a `git fetch` fixes the repo(?) and now `gix fetch` works again.

This is the error displayed after sending `CTRL+C`: …

Expected behavior 🤔
`gix fetch` should work or time out the negotiation after a reasonable amount of time (a few seconds to a minute).

Steps to reproduce 🕹
?????
I see it happening on my self-hosted gitea repos relatively often (~once every two or so weeks), but I have no idea how to reproduce this.
If you have any idea how I could go about diagnosing the issue, I'll make sure to keep it in mind for the next time it happens. For now, this is all I have.