WIP: Feature: support bulk inserts to speed up insert times #181
Description
Overview
This pull request introduces a new feature to the fixture library, enabling bulk inserts for fixture files. Prior to this enhancement, inserts were performed sequentially, leading to potential performance bottlenecks, especially with large datasets. The new functionality allows users to opt for bulk inserts, significantly improving the efficiency of the fixture loading process.
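For reference, opting in would look roughly like the sketch below. This assumes the library follows the go-testfixtures/testfixtures v3 style of loader setup (the RAW= annotation and the functional-option pattern suggest so); only UseBulkInsert() comes from this PR, and the surrounding wiring (New, Database, Dialect, Directory, Load, the driver, and the DSN) is shown as assumed, untested context rather than a confirmed example.

```go
package main

import (
	"database/sql"
	"log"

	"github.com/go-testfixtures/testfixtures/v3" // assumed import path for the library
	_ "github.com/lib/pq"                        // assumed PostgreSQL driver
)

func main() {
	db, err := sql.Open("postgres", "dbname=app_test sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// UseBulkInsert() is the new option introduced by this PR; omit it to keep
	// the existing sequential behaviour. The other options are the library's
	// usual loader setup and are included here only as illustrative context.
	fixtures, err := testfixtures.New(
		testfixtures.Database(db),
		testfixtures.Dialect("postgres"),
		testfixtures.Directory("testdata/fixtures"),
		testfixtures.UseBulkInsert(),
	)
	if err != nil {
		log.Fatal(err)
	}

	if err := fixtures.Load(); err != nil {
		log.Fatal(err)
	}
}
```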
Changes Made
Added a functional option, UseBulkInsert(), to enable bulk inserts for fixture files.
Modified the implementation so that, when this option is enabled, records with exactly the same set of columns are grouped together for efficient bulk inserts (see the sketch after this list).
Also added, as a bonus, a new STRING= annotation that works like RAW=: I ran into trouble recently with time conversion for string IDs, and this new annotation solves that problem.
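To make the grouping idea concrete, here is a rough, self-contained sketch of the approach: rows sharing exactly the same column set are bucketed together, and each bucket becomes a single multi-row INSERT. This is only an illustration of the idea, not the PR's actual code; the row type, table name, and helper functions are hypothetical.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// row models one fixture record as column -> value.
type row map[string]interface{}

// groupByColumns buckets rows that share exactly the same set of columns,
// so that each bucket can later be flushed as one multi-row INSERT.
func groupByColumns(rows []row) map[string][]row {
	groups := make(map[string][]row)
	for _, r := range rows {
		cols := make([]string, 0, len(r))
		for c := range r {
			cols = append(cols, c)
		}
		sort.Strings(cols)
		key := strings.Join(cols, ",")
		groups[key] = append(groups[key], r)
	}
	return groups
}

// bulkInsertSQL builds one parameterized multi-row INSERT for a bucket,
// using PostgreSQL-style $N placeholders since that is the dialect this
// PR targets first.
func bulkInsertSQL(table string, cols []string, rowCount int) string {
	var b strings.Builder
	fmt.Fprintf(&b, "INSERT INTO %s (%s) VALUES ", table, strings.Join(cols, ", "))
	arg := 1
	for i := 0; i < rowCount; i++ {
		ph := make([]string, len(cols))
		for j := range cols {
			ph[j] = fmt.Sprintf("$%d", arg)
			arg++
		}
		if i > 0 {
			b.WriteString(", ")
		}
		fmt.Fprintf(&b, "(%s)", strings.Join(ph, ", "))
	}
	return b.String()
}

func main() {
	rows := []row{
		{"id": 1, "name": "alice"},
		{"id": 2, "name": "bob"},
		{"id": 3, "email": "x@example.com"}, // different column set: its own batch
	}
	for key, bucket := range groupByColumns(rows) {
		cols := strings.Split(key, ",")
		fmt.Println(bulkInsertSQL("users", cols, len(bucket)))
	}
}
```

This prints two statements (in either order), along the lines of `INSERT INTO users (id, name) VALUES ($1, $2), ($3, $4)` and `INSERT INTO users (email, id) VALUES ($1, $2)`.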
Benefits
Improved performance by enabling bulk inserts.
Maintained flexibility by allowing users to choose between sequential and bulk inserts via the UseBulkInsert() functional option (mostly because I wrote this for PostgreSQL in the first place and I don't yet know whether it works for other databases). A short sketch contrasting the two approaches at the SQL level follows this list.
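To illustrate where the performance gain comes from, compare issuing one INSERT per record with sending a single multi-row INSERT. This is a generic illustration, not code from the PR; the table, column, driver, and DSN are made up.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"strings"

	_ "github.com/lib/pq" // assumed PostgreSQL driver
)

// insertSequentially issues one INSERT per record, one round trip each.
// This mirrors the loader's pre-existing behaviour.
func insertSequentially(db *sql.DB, names []string) error {
	for _, n := range names {
		if _, err := db.Exec(`INSERT INTO users (name) VALUES ($1)`, n); err != nil {
			return err
		}
	}
	return nil
}

// insertBulk sends every record in a single multi-row INSERT, one round trip
// in total. This is the kind of statement UseBulkInsert() opts into.
func insertBulk(db *sql.DB, names []string) error {
	placeholders := make([]string, len(names))
	args := make([]interface{}, len(names))
	for i, n := range names {
		placeholders[i] = fmt.Sprintf("($%d)", i+1)
		args[i] = n
	}
	query := "INSERT INTO users (name) VALUES " + strings.Join(placeholders, ", ")
	_, err := db.Exec(query, args...)
	return err
}

func main() {
	db, err := sql.Open("postgres", "dbname=app_test sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	names := []string{"alice", "bob", "carol"}
	if err := insertSequentially(db, names); err != nil {
		log.Fatal(err)
	}
	if err := insertBulk(db, names); err != nil {
		log.Fatal(err)
	}
}
```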
Feedback and suggestions for further improvements are welcome!
Not tested yet, but tests are coming.