Fix how we cancel the context in the builtin backup engine #17285
frouioui merged 2 commits into vitessio:main
Conversation
Signed-off-by: Florent Poinsard <florent.poinsard@outlook.fr>
Review Checklist
Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.
- General
- Tests
- Documentation
- New flags
- If a workflow is added or modified
- Backward compatibility
Codecov Report
All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main    #17285     +/-  ##
==========================================
- Coverage    67.40%   67.39%   -0.02%
==========================================
  Files         1574     1574
  Lines       253205   253217      +12
==========================================
- Hits        170676   170643      -33
- Misses       82529    82574      +45
```

☔ View full report in Codecov by Sentry.
shlomi-noach left a comment:
Please see the inline question; unfortunately, I do not follow the logic.
```go
	if finalErr != nil {
		cancel()
	}
}()
```
I'm not sure I follow:
- It looks as if the function can exit and still leave the context active, in which case, what is cancelling it?
- If we still have operations in flight, why would we exit the function? Should we not wait until everything is complete?
Something feels off here; it reads as an anti-pattern.
The S3 and Ceph uploads may be incomplete by the time we return from this function. Writing to the buffers used by these storage implementations will be complete (which is how we are able to return from be.backupFile and be.backupFiles), but the actual reading from those buffers and uploading to the remote storage may not be. If we cancel the context too early, unfinished uploads are canceled and marked as failed.
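As an illustration (this is a sketch, not the actual S3/Ceph storage code), the reason returning from the write path does not imply the upload is done: the writer hands data off to a background goroutine, and completion can only be observed by waiting, as EndBackup does.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// asyncUploader is a hypothetical stand-in for a storage backend that
// uploads in the background: Write returns as soon as the data is buffered.
type asyncUploader struct {
	wg   sync.WaitGroup
	done atomic.Bool
}

func (u *asyncUploader) Write(p []byte) (int, error) {
	u.wg.Add(1)
	go func() {
		defer u.wg.Done()
		time.Sleep(50 * time.Millisecond) // simulate the remote upload
		u.done.Store(true)
	}()
	return len(p), nil // returns before the upload has actually finished
}

// Wait blocks until all background uploads complete, analogous to waiting
// in EndBackup before declaring the backup usable.
func (u *asyncUploader) Wait() { u.wg.Wait() }

func main() {
	u := &asyncUploader{}
	u.Write([]byte("backup bytes"))
	fmt.Println("after Write, upload finished?", u.done.Load()) // almost certainly false
	u.Wait()
	fmt.Println("after Wait, upload finished?", u.done.Load()) // true
}
```

Canceling the context between Write returning and Wait completing is exactly the window in which an in-flight upload would be aborted.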
It is only at a later stage that we wait for all the uploads to finish, in the EndBackup method on the backup handle:
vitess/go/vt/mysqlctl/backup.go, lines 190 to 191 at 16fa7a3
At this stage we will observe the failures created by the S3 or Ceph storage implementations and decide to fail, even though we have already uploaded the backup (including the MANIFEST) and restarted MySQL.
There is definitely something off with this code. We must wait for the full backup (including writing to the backend storage) to complete before going forward with writing the MANIFEST and assuming the backup is usable. This is something I am implementing, along with a retry mechanism, in #17271.
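A sketch of the ordering argued for here (the helpers are hypothetical, not the vitess implementation): run the uploads concurrently, wait for every one of them to report, and only treat the backup as complete, and thus worth a MANIFEST, if none failed.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// upload is a hypothetical per-file upload; "bad" simulates a failure.
func upload(name string) error {
	if name == "bad" {
		return errors.New("upload failed: " + name)
	}
	return nil
}

// uploadAll runs all uploads concurrently and returns the first error,
// if any, only after every upload has finished.
func uploadAll(files []string) error {
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		firstErr error
	)
	for _, f := range files {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			if err := upload(name); err != nil {
				mu.Lock()
				if firstErr == nil {
					firstErr = err
				}
				mu.Unlock()
			}
		}(f)
	}
	wg.Wait() // wait for the full backup before deciding success
	return firstErr
}

func main() {
	if err := uploadAll([]string{"ibdata1", "bad"}); err != nil {
		fmt.Println("not writing MANIFEST:", err)
		return
	}
	fmt.Println("all uploads done, writing MANIFEST")
}
```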
> what is cancelling it?

The caller of ExecuteBackup will eventually cancel it; whether the call comes through gRPC or through vtbackup, the context always gets canceled.
Thank you for the clarity! ❤️
@frouioui How can it ever cancel it? ctx might be canceled, but ctxCancel is never called, and that leads to a memory leak.
So even if the outer context is canceled, we still need to cancel this inner one to avoid the leak.
Signed-off-by: Florent Poinsard <florent.poinsard@outlook.fr>
Description
This PR makes sure we cancel the context only at the correct time. It was being canceled too early, which could lead to incomplete or failing backups with storage implementations that write concurrently (Ceph and S3).
This regression was introduced by #16856; we are backporting this fix to release-21.0, as the issue was introduced there. It would lead to the following error: