[SIP-59] Proposal for Database migration standards #13351
Comments
Love the suggestions, thanks for driving this! A couple pieces of feedback:
This isn't always possible, especially when the migration is doing some repair of the metadata (vs. adding/removing columns and tables). See #12960 for an example of a migration for which it's impossible to write a down method. Maybe this can be more precise by saying that all migrations that modify the structure of the DB or its columns must have a down method?
Love this, let's plan to add these as fields in the PR template?
This will be our first use of code owners, I think. Do you have any thoughts about using this more broadly across the repo, or have you only thought about the migration use case so far? Nothing here besides my first point should be considered blocking, though, and I'll happily vote +1 on this initiative once the thread is created!
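To make the down-method requirement in the first point concrete, here is a minimal sketch of a reversible structural change. It uses the stdlib `sqlite3` module rather than Alembic so it runs standalone, and the `tag` table is purely hypothetical:

```python
# Minimal sketch of a reversible structural migration (hypothetical table).
# Uses stdlib sqlite3 instead of Alembic's op.create_table/op.drop_table
# so the up/down symmetry can be demonstrated self-contained.
import sqlite3

def upgrade(conn):
    # Structural change: introduce a new table.
    conn.execute("CREATE TABLE IF NOT EXISTS tag (id INTEGER PRIMARY KEY, name TEXT)")

def downgrade(conn):
    # Exact inverse of upgrade, restoring the prior schema.
    conn.execute("DROP TABLE IF EXISTS tag")

conn = sqlite3.connect(":memory:")
upgrade(conn)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
assert "tag" in tables

downgrade(conn)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
assert "tag" not in tables
```

Data-repair migrations like #12960 are exactly the case where no such clean inverse exists, which is what motivates the exception discussed below.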
Should we also consider how we could provide near-zero-downtime for migrations which involve DDL operations, or is this outside the scope of this SIP?
I was just talking to an engineer today (Arash @ Preset) about the idea of using It seems like A few other ideas around this SIP:
It could be possible in some cases by keeping the data as a backup or renaming the column to enable just that. Of course, that doesn't always work: new objects get created and may be missing from the backup, and it can get very tricky to provide that guarantee, since you may have to maintain both the old and new field along with the related old/new logic. Probably over-complicated, but we can decide on a case-by-case basis whether it makes sense to try to guarantee that down-migration. If it's not possible, we may want to delay that migration until a bigger release.
We've seen instances in the past where one contributor thought runtime/downtime would be minimal based on their perceived use cases. When merged, other orgs had significantly (even exponentially) more data that needed migration, and the execution time was a pain point. How can we most accurately provide realistic, reasonable estimates given the fairly disparate use cases and datasets of Superset users/institutions?
Good point. The primary goal here is to be able to successfully rollback from any migration. The example you provided is idempotent and additive, which fits the criteria. How about this updated language?
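The "idempotent and additive" criterion above can be sketched as a data-repair function that only fills in what is missing, so re-running it is a no-op. The function and parameter names here are hypothetical stand-ins, not code from the actual migration:

```python
# Sketch of an "idempotent and additive" data fix (hypothetical names,
# loosely modeled on a repair migration backfilling a missing param).
def add_granularity(params: dict) -> dict:
    fixed = dict(params)
    # Additive: only sets the key when absent, never overwrites user data.
    fixed.setdefault("granularity", "ds")
    return fixed

once = add_granularity({"metric": "count"})
twice = add_granularity(once)
assert once == twice  # idempotent: applying the fix again changes nothing
```

A migration of this shape is safe to leave in place on rollback, which is why it can be exempt from the strict down-method requirement.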
Another use case I'm thinking about for code owners is the new ephemeral test environment workflow code: adding Preset code owners to ensure AWS resources are not changed without account owner approval. |
Yeah, that's a bit tricky. One idea is to provide run times for different row counts, which could then be reasonably extrapolated for larger datasets. In general, committers notified via the proposed GitHub code owners should know if the tables being altered will incur significant migration overhead.
Should we also require that the PR be open for review for a minimum period of time (48h?) to ensure committers from different orgs have time to review?
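The "run times for different row counts" idea above amounts to fitting a curve to benchmark points and extrapolating to a deployment's actual table size. A rough sketch with a linear least-squares fit (the benchmark numbers are invented for illustration):

```python
# Extrapolate migration runtime from benchmarked (row_count, seconds) points
# using an ordinary least-squares linear fit. Sample numbers are made up.
benchmarks = [(10_000, 2.0), (100_000, 21.0), (1_000_000, 208.0)]

def estimate_seconds(rows: int) -> float:
    n = len(benchmarks)
    sx = sum(r for r, _ in benchmarks)
    sy = sum(s for _, s in benchmarks)
    sxx = sum(r * r for r, _ in benchmarks)
    sxy = sum(r * s for r, s in benchmarks)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope: seconds per row
    b = (sy - a * sx) / n                          # intercept: fixed overhead
    return a * rows + b

# An org with 10M rows can sanity-check expected downtime before upgrading:
est = estimate_seconds(10_000_000)
```

Linear extrapolation is only a first approximation; DDL on large tables can behave super-linearly on some databases, which is the caveat raised later in this thread.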
Making this work for all metadata DB types will be difficult, as the pitfalls and tooling are different for each. We could add some guidance around things like setting default values and creating indexes on tables with many rows, but DDL is going to potentially cause downtime on some systems unless you're using a tool like pt-online-schema-change (for MySQL).
Ran across this guidance in the Alembic docs about naming constraints. Thoughts on including this as a requirement for migrations?
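The Alembic guidance referenced above derives constraint names from a convention template so that autogenerated migrations can drop constraints by name on any backend. The token substitution works roughly like this simplified stand-in for SQLAlchemy's `MetaData(naming_convention=...)` (the table/column names are hypothetical):

```python
# Simplified stand-in for SQLAlchemy's naming_convention templating; the
# "%(token)s" placeholders mirror the real convention tokens.
convention = {
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "uq": "uq_%(table_name)s_%(column_0_name)s",
}

def constraint_name(kind: str, **tokens: str) -> str:
    # Fill the template for the given constraint kind.
    return convention[kind] % tokens

name = constraint_name(
    "fk", table_name="tables", column_0_name="user_id",
    referred_table_name="ab_user",
)
# -> "fk_tables_user_id_ab_user"
```

Without deterministic names, downgrades that drop constraints break on backends (notably SQLite and MySQL) where the auto-generated names differ, which is why naming them explicitly supports the rollback requirement above.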
To build on Rob's point above, I'd like to add that I've noticed several migrations that do things like call
@mistercrunch agreed, I added an item for accumulating breaking/cleanup migrations for the next major release
I think the standards set forth in SIP-57 re: breaking changes should accomplish this goal, unless you have something else in mind?
@craig-rueda I added some detail around atomicity of migrations
Updated the SIP above based on feedback in this thread. Will send for a vote on Friday if there are no further discussion items.
@robdiciuccio @evans regarding "PRs introducing database migrations must include runtime estimates and downtime expectations", I'm working on a script to run benchmarks on migrations that pre-populates the models: |
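Such a harness could take roughly the following shape. This is an entirely hypothetical sketch, not the script referenced above: it pre-populates a stand-in `slices` table in stdlib sqlite3 and times a dummy data migration at two row counts:

```python
# Hypothetical benchmark harness sketch (NOT the actual script referenced):
# pre-populate a stand-in table, then time a dummy migration over it.
import sqlite3
import time

def populate(conn, rows):
    # Simulate pre-populating the models with `rows` records.
    conn.executemany("INSERT INTO slices (params) VALUES (?)", [("{}",)] * rows)

def migrate(conn):
    # Stand-in migration: backfill a param for every row.
    conn.execute(
        "UPDATE slices SET params = '{\"granularity\": \"ds\"}' "
        "WHERE params = '{}'"
    )

results = {}
for rows in (1_000, 10_000):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE slices (id INTEGER PRIMARY KEY, params TEXT)")
    populate(conn, rows)
    start = time.perf_counter()
    migrate(conn)
    results[rows] = time.perf_counter() - start
```

Publishing the resulting (row count, runtime) pairs in the PR description would give reviewers the extrapolation data discussed earlier in the thread.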
The SIP has been approved with nine binding +1 votes, four non-binding +1 votes, zero 0 votes and zero -1 votes. |
[SIP] Proposal for database migration standards
Motivation
Reduce pain around metadata database migrations by ensuring standards are followed and appropriate reviews are obtained before merging.
Proposed Change
SIP-57 (Semantic Versioning) introduced standards for avoiding breaking changes and general best practices for database migrations. The proposed changes below are in addition to those standards:
- All database migrations must include a downgrade method to effectively rollback schema changes introduced in the upgrade method. If a migration makes changes to data that are not easily undone (e.g. fix: Retroactively add granularity param to charts #12960), the changes introduced must be non-breaking and idempotent.
- All constraints must be explicitly named, e.g. sa.ForeignKeyConstraint(["user_id"], ["ab_user.id"], name='fk_user_id').
- PRs introducing database migrations must include runtime estimates and downtime expectations.
- Breaking migrations targeting a future major release should be placed in ./superset/migrations/next/ for evaluation and inclusion in a future release.
- Add GitHub code owners for the superset/migrations directory to ensure PMC members are notified of new or updated migrations.
New or Changed Public Interfaces
None.
New dependencies
No additional package dependencies.
Migration Plan and Compatibility
Workflow changes only. PR template will be updated with guidelines. Process for running migrations unchanged.
Rejected Alternatives
The status quo, which has resulted in quite a bit of thrash, deployment roadblocks and external discussions between Superset users.