[RFC 0002] MVP #2
We would like to create a basic CLI for Supabase. At this stage we can assume that the user won't need to create orgs/projects from the CLI, but they will need to:
- develop locally
- push database migrations to their Postgres database
> ### Other questions
>
> - what is the prefix? should it be `supabase` (clearer) or `supa` (might have a few people calling it "supper/suppa")
> - Which CLI tool/framework/libraries should we use?
If we use golang (static binaries, easy to distribute), https://github.com/spf13/cobra is used by most (all?) of the nicer modern CLIs (e.g. kubectl, and I think also flyctl under the hood)
Looks great, although I suspect we will need to use postgres-meta/pg-api for a lot of this functionality.
Unfortunately it creates a dependency on NodeJS
https://github.com/vadimdemedes/ink looks good, apparently Prisma used to use it but I couldn't figure out why they migrated away from it.
@soedirgo It had issues on Windows for us (Prisma) and we simplified things a lot so it was not needed anymore.
> ### Generators (bonus)
This is a fantastic idea that I think will help a lot of people!
For some inspiration, take a look at the GraphQL Code Generator project!
> # Models:
> # This is my suggestion for a "local first" database which has all of the rules/relationships we need to build a realtime user store with automatic synchronization via our Realtime engine.
> # A bit like https://github.com/mobxjs/mst-gql (but hopefully much simpler)
> supa gen typescript store
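As a sketch of what such a generator could emit for a TypeScript target, here is a minimal column-to-interface mapping. The column tuple shape, the `gen_interface` helper name, and the type map are assumptions for illustration, not the real CLI's behavior:

```python
# Hypothetical sketch of a "supa gen typescript" step: map a table's
# column definitions to a TypeScript interface. Not the actual CLI code.

PG_TO_TS = {
    "uuid": "string",
    "text": "string",
    "varchar": "string",
    "int4": "number",
    "int8": "number",
    "bool": "boolean",
    "timestamptz": "string",
    "jsonb": "unknown",
}

def gen_interface(table: str, columns: list[tuple[str, str, bool]]) -> str:
    """columns: list of (name, postgres_type, is_nullable)."""
    lines = [f"export interface {table.title().replace('_', '')} {{"]
    for name, pg_type, nullable in columns:
        ts_type = PG_TO_TS.get(pg_type, "unknown")
        lines.append(f"  {name}{'?' if nullable else ''}: {ts_type};")
    lines.append("}")
    return "\n".join(lines)

print(gen_interface("user_profiles", [
    ("id", "uuid", False),
    ("display_name", "text", True),
]))
```

In practice the column metadata would come from postgres-meta rather than being passed in by hand.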
@joshnuss 's ActiveRecord library could be a good starting place for JS:
https://github.com/joshnuss/supabase-active-record
> ├── models
> │   ├── public.table            # Personal preference to explicitly add the schema, and we should prefer dot notation over nesting.
> │   │   ├── data.{sql|csv|json} # For convenience, table specific data can be stored here too.
> │   │   ├── model.sql           # This is the core. We should also figure out a nice, readable format for SQL files. Should include constraints and indexes and table/column comments.
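A small sketch of how the CLI might consume this layout, concatenating each `model.sql` under `models/<schema>.<table>/` into a single schema script. Ordering by directory name here is an assumption for illustration; real dependency ordering between tables would be more involved:

```python
# Sketch: assemble the proposed models/ layout into one schema script.
# The "<schema>.<table>/model.sql" convention is taken from the RFC;
# everything else (sorting, comment headers) is an assumption.
from pathlib import Path

def build_schema_sql(models_dir: str) -> str:
    parts = []
    for model in sorted(Path(models_dir).glob("*/model.sql")):
        schema_table = model.parent.name  # e.g. "public.table"
        parts.append(f"-- {schema_table}\n{model.read_text().strip()}")
    return "\n\n".join(parts)
```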
I did a small mock of this to figure out how to structure the file. Some initial thoughts here:
https://github.com/supabase/cli/pull/3/files#r575771489
> ### Dump
>
> ```sh
> supa db dump # dumps the currently active (local) database to disk
> ```
I used postgres-meta to create a small mock here: #3
I'm not entirely sure if it's the right approach. On the one hand, we have full control to manage the layout/structure of the dump. On the other hand, we will need to cover all the nuances of Postgres.
Perhaps it's better to rely on pg_dump from the outset, with some sort of system where we either:
Option 1:
- Run pg_dump on the entire database
- Run some sort of AST parser on the full dump
- Clean it up and process it into separate files
or Option 2:
- Run pg_dump on individual tables/functions/types etc (eg: `pg_dump -Fp -t $table -f $table.dump;`). We can use postgres-meta to fetch the "structure" of the database
- Run some sort of "prettier" formatter to make it look readable
I'm leaning towards option 2, as it also gives the developer better control. For example they might want to dump just their "functions".
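A sketch of what option 2's per-object loop could look like, expressed as command construction so it stays testable. The `pg_dump` flags mirror the example above; the table list would come from postgres-meta in practice, and `pg_dump_commands` is a hypothetical helper name:

```python
# Sketch of option 2: one pg_dump invocation per object, mirroring
# `pg_dump -Fp -t $table -f $table.dump`. Builds argv lists rather than
# shelling out; each command can then be run with subprocess.run(cmd, check=True).
def pg_dump_commands(db_url: str, tables: list[str]) -> list[list[str]]:
    return [
        ["pg_dump", "-Fp", "-t", table, "-f", f"{table}.dump", db_url]
        for table in tables
    ]
```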
> For example they might want to dump just their "functions".

AFAICT, pg_dump cannot dump only functions (SO ref).
So if we go the above way, we'd need a combination of 1 and 2
Maybe there's another way.
If you look at the Schema Diff output image in #2 (comment), you'll notice that it also gives the source DDL. And all the database objects are neatly separated.
The Flask endpoint of the pgadmin schema diff process gives this big json as its response. The diff related to the above image is located here.
Notice that the ddl output also has COMMENTs included and it's pretty printed (has newlines).
Checking the big json, all the different types of db objects (event triggers, tables, functions, etc) are there. If we could (ab)use that output, we'd have most of what we need already organized.
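As a rough illustration of (ab)using that output: group each object's DDL by object type before writing files. The `type` and `ddl` field names below are hypothetical placeholders, since the actual shape of the pgadmin response isn't documented here:

```python
# Sketch only: group pgadmin schema-diff entries by object type.
# The "type"/"ddl" keys are assumed placeholders for whatever the
# Flask endpoint actually returns, not a documented schema.
from collections import defaultdict

def group_ddl_by_type(diff_entries: list[dict]) -> dict[str, list[str]]:
    grouped = defaultdict(list)
    for entry in diff_entries:
        if entry.get("ddl"):  # skip objects with no source DDL
            grouped[entry.get("type", "unknown")].append(entry["ddl"])
    return dict(grouped)
```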
> ```sh
> supa db dump    # dumps the currently active (local) database to disk
> supa db restore # restores the (local) database from disk
> supa migrate    # generate the migrations file
> ```
I'm still not sure how we will handle going from a declarative structure, into a set of migration scripts.
There are tools like migra which work on a schema comparison - this might be the way to do it as long as the limitations aren't too restrictive.
eg: "custom types/domains Basic support (drop-and-create only, no alter)". ref
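To make the "schema comparison → migration script" idea concrete, here is a toy column-level diff. Real tools like migra compare the whole catalog (types, constraints, functions, privileges); this sketch only handles added, dropped, and retyped columns:

```python
# Toy illustration of diff-based migration generation: compare the
# current and desired column maps ({name: type}) for one table and
# emit ALTER TABLE statements. Nothing here is the real migra logic.
def diff_columns(table: str, current: dict[str, str], desired: dict[str, str]) -> list[str]:
    stmts = []
    for col, typ in desired.items():
        if col not in current:
            stmts.append(f"ALTER TABLE {table} ADD COLUMN {col} {typ};")
        elif current[col] != typ:
            stmts.append(f"ALTER TABLE {table} ALTER COLUMN {col} TYPE {typ};")
    for col in current:
        if col not in desired:
            stmts.append(f"ALTER TABLE {table} DROP COLUMN {col};")
    return stmts
```

Note that this is exactly where the drop-and-create limitation quoted above bites: a rename looks like a drop plus an add unless the tool (or the developer) says otherwise.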
> I'm still not sure how we will handle going from a declarative structure, into a set of migration scripts.

Ideally the developer should not look at migration scripts and should instead focus on declarative* SQL. So perhaps we could keep the migrations off git? (perhaps in a table)

- Of course there's a limit to how declarative SQL can be. Column renames, for example, always have to use ALTER.
metagration also has an interesting approach. It allows generating up/down migration scripts with stored procedures. It also has its own table for tracking applied migrations.
Maybe we can use metagration plus migra to provide a template migration that can be edited.
metagration also creates a restore point for down migrations, so it's possible to recover dropped columns, for example (no need for a restore from a backup).
OK cool - metagration looks interesting.
I think it's the declarative structure -> up/down scripts that's the tricky part here though. The tools I found:
- migra (python)
- https://github.com/gimenete/dbdiff (JS, looks limited)
- https://github.com/joncrlsn/pgdiff (Go)
- https://github.com/fordfrog/apgdiff (Java)
- https://github.com/eulerto/pgquarrel (C)
Or we can of course code this directly into postgres-meta, perhaps using migra as an example since it seems the most feature complete. It will be a lot of work though.
Unfortunately they're not open to providing a CLI mode: https://redmine.postgresql.org/issues/6304 (you'll need to sign in to see this). They mention that it would take a lot of work.
I still think the pgadmin diff is the best choice. It's already the most complete and has dedicated support. It would provide the best DX.
I've already managed to run it from source. It's a Flask application plus a Backbone.js frontend (it seems they're migrating to React).
The functionality for the schema diff was started here: pgadmin-org/pgadmin4@45f2e35. That gives me an idea of the modules involved for making it work.
How about if I try isolating that python code from the GUI and create a package of our own?
> How about if I try isolating that python code from the GUI
Succeeded doing that. Code here: https://gist.github.com/steve-chavez/8c6825abee788394cb5424380c06487c.
Now I would need to wrap the code in a CLI.
Nice job Steve - that's an awesome effort.
> Now I would need to wrap the code in a CLI
Yeah, it would be very cool if we can use it here somehow.
I have the CLI ready on this branch: https://github.com/steve-chavez/pgadmin4/blob/cli/web/cli.py.
I'm now wrapping it in a docker container so it can be tried without building the whole pgadmin4 package.
Docker container here: https://hub.docker.com/r/supabase/pgadmin-schema-diff/
Usage:

```sh
docker run supabase/pgadmin-schema-diff \
  'postgres://postgres@host:5432/diff_source' \
  'postgres://postgres@host:5432/diff_target' \
  > diff_db.sql
```

> ```sh
> │   # Data: database fixed/seed data.
> ├── data
> │   ├── some_data.{sql|csv|json}
Regarding branching, this would allow us to just rely on different hardcoded master data for test/staging/preview environments, right?
It does seem easier compared to something like snapshotting the db with ZFS. Much lighter on resources as well.
Yeah exactly - this would give "reproducible builds". It's not as powerful as ZFS cloning, but it is less "magic".
ZFS cloning might solve a different use-case in the future for managing terabyte-scale databases.
> - `postgresql://postgres@localhost:5432/master`
> - `postgresql://postgres@localhost:5432/develop`
> - `postgresql://postgres@localhost:5432/chore_add_types`
On these different envs, would the users also get different realtime/postgrest instances?
For preview workflows, the users would like to interact with the full stack right?
This completely slipped my mind. Yes, they are going to need it. Hmm, how will we keep the costs down?
Perhaps it will have to be "local only" for now? It should be manageable with the docker files
> Hmm, how will we keep the costs down?

A temporary postgrest with db-pool = 1 (one connection max) should be lightweight; it would fit on the same instance. Realtime would need to limit its pool as well, but we'd definitely need to set its systemd MemoryLimit really low. But I'm not sure how low it can get.

> Perhaps it will have to be "local only" for now? It should be manageable with the docker files

Yes, local-only is a good start. Other envs can be done in a later RFC - they need more thought.
We can also take advantage of systemd socket activation, which essentially lets a service sleep until a request comes in. That, coupled with a systemd timer to stop the service after some idle time (SO question), would keep resource usage to a minimum.
…umn? That would give the CLI a more framework-like experience.
Closing this as we're transitioning to the Go CLI. |
…nd cosmetic function changes

## Summary

Fixes spurious schema differences detected by `supabase db pull` when comparing identical local and remote databases. Resolves two separate issues:

1. Column default normalization differences (ALTER TABLE statements)
2. Cosmetic-only function regeneration (CREATE OR REPLACE FUNCTION statements)

## Issues Resolved

### Issue supabase#1: Spurious REVOKE Statements (RESOLVED ✅)

**Problem:** Migra generated hundreds of REVOKE/GRANT statements even when privileges were identical between remote and shadow databases.

**Solution:** Removed `--with-privileges` flag for user schemas (public) in bash migra. User schemas rely on RLS policies rather than raw GRANTs, making privilege checking unnecessary and overly sensitive.

**Files:** `internal/db/diff/templates/migra.sh`

### Issue supabase#2: Column Default Normalization (RESOLVED ✅)

**Problem:** pg_get_expr() normalized column defaults differently based on search_path at query time, causing false differences:

- With extensions in search_path: `uuid_generate_v4()` (unqualified)
- Without extensions: `extensions.uuid_generate_v4()` (qualified)

**Solution:** Three-part fix:

1. Query remote database's search_path before creating shadow (GetSearchPath)
2. Configure shadow database container with matching search_path via -c flag
3. Set PGOPTIONS environment variable for migra to ensure consistent pg_get_expr() normalization

**Files:**

- `internal/db/diff/diff.go`: GetSearchPath(), CreateShadowDatabaseWithSearchPath()
- `internal/db/diff/migra.go`: PGOPTIONS environment variable for migra
- `internal/db/pull/pull.go`: ALTER DATABASE SET search_path for shadow
- `internal/db/start/start.go`: Conditional search_path in NewContainerConfig()

### Issue supabase#3: Cosmetic Function Regeneration (RESOLVED ✅)

**Problem:** Migra generated 1,459 lines of CREATE OR REPLACE FUNCTION statements due to textual formatting differences in stored function definitions (dollar quote styles, schema qualification), even though functions were functionally identical.

**Solution:** Post-processing filter (filterCosmeticFunctionChanges) that:

- Parses migra output to detect and remove function-only changes
- Preserves real schema changes (ALTER TABLE, CREATE TABLE, policies, triggers, etc.)
- Handles three scenarios:
  1. Pure cosmetic (only functions changed) → Skip migration entirely
  2. Mixed (functions + real changes) → Filter functions, keep real DDL
  3. Clean (no cosmetic changes) → Return unchanged

**Files:**

- `internal/db/diff/migra.go`: filterCosmeticFunctionChanges() and integration
- `internal/db/pull/pull.go`: Graceful handling of errInSync (no stack trace)

## Technical Details

### Search Path Consistency Mechanism

- **Remote query:** `SHOW search_path` retrieves target database's configuration
- **Shadow creation:** Docker container starts with `-c search_path='<remote_value>'` flag
- **Migra execution:** `PGOPTIONS=-c search_path=<remote_value>` ensures psycopg2 uses same path
- **Result:** Both databases queried with identical search_path, producing consistent pg_get_expr() normalization

### Function Filtering Logic

State machine parser that:

- Tracks function boundaries (CREATE FUNCTION ... $function$)
- Identifies real DDL statements (ALTER TABLE, CREATE POLICY, etc.)
- Removes function blocks while preserving all other content
- Returns filtered migration + boolean flag for informational messages

### User Experience Improvements

- No empty migration files created when only cosmetic changes detected
- Clean "No schema changes found" message (no stack traces)
- Informational messages for filtered cosmetic changes
- Proper handling of mixed migrations (functions + real changes)

## Debug Logging (Temporary)

Multiple DEBUG statements added for troubleshooting during development. These provide visibility into:

- Shadow database search_path at various stages
- Column defaults in both target and shadow databases
- Migration execution flow and search_path changes
- Function filtering decisions

**Note:** Debug logging can be removed once fix is verified in production use.

## Files Modified

- `.gitignore` - Added /debug directory exclusion
- `internal/db/diff/diff.go` - Search path detection and shadow database configuration
- `internal/db/diff/migra.go` - Function filtering and PGOPTIONS for migra
- `internal/db/diff/templates/migra.sh` - Conditional --with-privileges flag
- `internal/db/pull/pull.go` - ALTER DATABASE search_path and graceful errInSync handling
- `internal/db/start/start.go` - Conditional default search_path in container config
- `pkg/migration/apply.go` - Migration execution diagnostics
- `pkg/migration/file.go` - Session-level search_path and batch execution diagnostics

## Testing

Tested with Nathan's Haddy project database containing:

- 51 functions with SET search_path TO ''
- Multiple tables with uuid_generate_v4() defaults
- Complex RLS policies and triggers

**Results:**

- ✅ 0 ALTER TABLE statements (was 13)
- ✅ 0 spurious function regenerations (was 1,459 lines)
- ✅ 0 empty migration files
- ✅ Clean "No schema changes found" message when schemas identical

## References

- Issue: supabase#4068
- Related PR: (pending)
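As an illustration of the function-filtering idea described above (the PR implements this in Go inside the CLI; this Python port is a sketch under the assumption that function bodies are delimited by `$function$` quotes, not the actual code):

```python
# Sketch of a state-machine filter over migra output: drop
# CREATE OR REPLACE FUNCTION blocks (delimited by $function$ quotes)
# and keep every other statement. Returns the filtered SQL plus a flag
# saying whether anything was removed.
def filter_function_changes(sql: str) -> tuple[str, bool]:
    kept, in_function, filtered = [], False, False
    for line in sql.splitlines():
        stripped = line.strip()
        if not in_function and stripped.upper().startswith("CREATE OR REPLACE FUNCTION"):
            filtered = True
            # a one-line function closes immediately; otherwise skip until $function$;
            in_function = not stripped.endswith("$function$;")
            continue
        if in_function:
            if stripped.endswith("$function$;"):
                in_function = False
            continue
        kept.append(line)
    return "\n".join(kept), filtered
```

A real implementation would also need to handle arbitrary dollar-quote tags and functions referenced by DROP statements, which this sketch ignores.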