MySQL: avoid tiny writes to improve performance in read-heavy scenarios #31204
Merged
Conversation
rosstimothy
approved these changes
Aug 30, 2023
greedy52
approved these changes
Aug 31, 2023
This change is analogous to the Postgres change made in #29812.
Just like for Postgres, the MySQL engine forwards each server message to the client individually. Every message becomes its own tiny write, and the per-write overhead hurts performance.
With this PR, we replace that mechanism with an `io.Copy` call, ensuring sufficiently large writes are made. To populate the relevant metrics, we analyze a copy of the message stream in a separate goroutine.

The performance impact was measured using custom scripts, with a particular focus on the read-heavy use case of creating a backup.
I ran the tests on my local MacBook, with MySQL running in Docker.
`make create-backup` was run against:

- master @ 27ab9514a6
- tener/mysql-io-copy @ 0cdf4a66cb
- a direct database connection (baseline)

There is still a 30% performance penalty compared to the direct database connection. It is, however, faster than the existing code by a vast margin and likely acceptable in practice.
I was hoping to add a synthetic benchmark similar to `BenchmarkPostgresReadLargeTable`, but it is challenging: the library we use to mock the MySQL server in tests suffers from the same issue this PR is fixing, i.e., it sends each result with an individual `Write()`, leading to poor performance.

Changelog: MySQL: improve performance in read-heavy scenarios.
Related: #26868.