Hasura V3 is the API execution engine that powers the Hasura Data Delivery Network (DDN). It is built on the Open Data Domain Specification (OpenDD spec) and the Native Data Connector Specification (NDC spec). The v3-engine runs against an OpenDD metadata file and exposes a GraphQL endpoint according to the specified metadata. A data connector must run alongside the v3-engine to execute data-source-specific queries.

The v3-engine does not execute queries directly. Instead, it sends an IR (an abstracted, intermediate query) to NDC agents (also known as data connectors). To run queries on a database, you need to run a data connector that supports that database.
Available data connectors are listed at the Connector Hub.
For local development, we use the reference agent implementation that is a part of the NDC spec.
To start only the reference agent, you can run:

```sh
docker compose up reference_agent
```
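With the agent running, you can exercise the NDC protocol directly. The sketch below is illustrative only: the port (8102) and the `articles` collection are assumptions about the reference agent's defaults, so check `docker-compose.yaml` and the agent's `GET /schema` response for the actual values.

```sh
# List the collections the reference agent exposes (port is an assumption;
# check docker-compose.yaml for the actual mapping).
curl -s http://localhost:8102/schema | jq '.collections[].name'

# Send a minimal NDC query request, of the kind the engine itself sends to
# connectors. The "articles" collection is assumed from the agent's sample data.
curl -s http://localhost:8102/query \
  -H 'Content-Type: application/json' \
  -d '{
        "collection": "articles",
        "arguments": {},
        "collection_relationships": {},
        "query": { "fields": { "id": { "type": "column", "column": "id" } } }
      }'
```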
Hasura v3-engine is written in Rust, so `cargo` is required to build and run the v3-engine locally.

To start the v3-engine locally, we need a `metadata.json` file and an auth config file.
The following steps run the v3-engine with the reference agent (a read-only, in-memory relational database with sample tables) and a sample metadata file, exposing a fixed GraphQL schema. This can be used to understand the build setup and the new V3 concepts.
```sh
RUST_LOG=DEBUG cargo run --release --bin engine -- \
  --metadata-path crates/open-dds/examples/reference.json \
  --authn-config-path static/auth/auth_config.json
```
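Once the engine is up, you can smoke-test the endpoint without GraphiQL. A minimal sketch using a standard introspection query; it assumes the configured auth accepts an `x-hasura-role` header (the dev webhook described below does, and must be running for this to succeed):

```sh
# Smoke test the engine's GraphQL endpoint with the admin role.
curl -s http://localhost:3000/graphql \
  -H 'Content-Type: application/json' \
  -H 'x-hasura-role: admin' \
  -d '{"query": "{ __schema { queryType { name } } }"}'
```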
A dev webhook implementation is provided in `crates/auth/dev-auth-webhook`. It exposes a `POST /validate-request` endpoint that converts the headers present in the incoming request into an object containing session variables. Note that only headers starting with `x-hasura-` are returned in the response.
The dev webhook can be run using the following command:

```sh
docker compose up auth_hook
```

Then point the host name `auth_hook` to localhost in your `/etc/hosts` file.
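To see the header-to-session-variable mapping in action, you can call the webhook directly. A hedged sketch: the port below (3050) is an assumption, so check the compose file for the port the `auth_hook` service actually exposes.

```sh
# Make auth_hook resolve locally, as described above:
echo '127.0.0.1 auth_hook' | sudo tee -a /etc/hosts

# Hypothetical call; port 3050 is an assumption, check docker-compose.yaml.
# Per the description above, only x-hasura-* request headers should come back
# as session variables (x-custom should be dropped).
curl -s -X POST http://auth_hook:3050/validate-request \
  -H 'x-hasura-role: admin' \
  -H 'x-custom: ignored'
```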
Open http://localhost:3000 for GraphiQL.
Use the `--port` option to start the v3-engine on a different port:

```sh
RUST_LOG=DEBUG cargo run --release --bin engine -- \
  --port 8000 --metadata-path crates/open-dds/examples/reference.json
```
Now, open http://localhost:8000 for GraphiQL.
You can also start the v3-engine, along with a Postgres data connector and Jaeger for tracing, using Docker:

```sh
METADATA_PATH=crates/engine/tests/schema.json AUTHN_CONFIG_PATH=static/auth/auth_config.json docker compose up
```
Open http://localhost:3001 for GraphiQL, or http://localhost:4002 to view traces in Jaeger.
Note: you'll need to add `{"x-hasura-role": "admin"}` to the Headers section to run queries from GraphiQL.
NDC Postgres is the official Hasura connector for Postgres databases. To run the V3 engine with a GraphQL API on Postgres, you need to run the NDC Postgres connector and have a `metadata.json` file authored specifically for your Postgres database and models (tables, views, functions).

The recommended way to author `metadata.json` for Postgres is via Hasura DDN.
Follow the Hasura DDN Guide to create a Hasura DDN project, connect your cloud or local Postgres database (Hasura DDN provides a secure tunnel mechanism to connect your local database easily), and model your GraphQL API. You can then download the authored `metadata.json` and use the following steps to run the GraphQL API on your local Hasura V3 engine.
1. Download metadata from the DDN project, using the Hasura V3 CLI:

   ```sh
   hasura3 build create --dry-run > ddn-metadata.json
   ```

2. The following steps generate the Postgres metadata object and run the Postgres connector. These steps refer to the NDC Postgres repository:

   - Start the Postgres connector in configuration mode (config server). A config server provides additional endpoints for database introspection and provides the schema of the database. The output of the config server will form the Postgres metadata object.

     Run the following command in the ndc-postgres repository:

     ```sh
     just run-config
     ```

   - Generate the Postgres configuration using the `new-configuration.sh` script by running the following command (in another terminal) in the ndc-postgres repository:

     ```sh
     ./scripts/new-configuration.sh localhost:9100 '<postgres database url>' > pg-config.json
     ```

   - Now shut down the Postgres config server and start the Postgres connector using the `pg-config.json` generated in the step above, by running the following command. Please specify a different `PORT` for each data connector:

     ```sh
     PORT=8100 \
     RUST_LOG=INFO \
       cargo run --bin ndc-postgres --release -- serve --configuration pg-config.json > /tmp/ndc-postgres.log
     ```

   - Fetch the schema for the data connector object by running the following command:

     ```sh
     curl -X GET http://localhost:8100/schema | jq . > pg-schema.json
     ```

   - Finally, generate the `DataConnector` object:

     ```sh
     jq --null-input --arg name 'default' --arg port '8100' --slurpfile schema pg-schema.json '{"kind":"DataConnector","version":"v2","definition":{"name":"\($name)","url":{"singleUrl":{"value":"http://localhost:\($port)"}},"schema":$schema[0]}}' > pg-metadata.json
     ```

3. You now have the NDC Postgres connector running and have obtained the Postgres metadata (`pg-metadata.json`), which is required for the V3 engine.

4. In `ddn-metadata.json` (from step 1), replace the `HasuraHubDataConnector` objects with the `DataConnector` objects generated inside the `pg-metadata.json` file.

5. Remove the object for `kind: AuthConfig` from `ddn-metadata.json`, move it to a separate file `auth_config.json`, and remove the `kind` field from it.

6. Remove the object for `kind: CompatibilityConfig` from `ddn-metadata.json`. If desired, a `flags` field can be added to the OSS metadata to enable the flags corresponding to that compatibility date in the DDN metadata.

7. Finally, start the v3-engine using the modified metadata (the `ddn-metadata.json` and `auth_config.json` from step 5):

   ```sh
   RUST_LOG=DEBUG cargo run --release --bin engine -- \
     --metadata-path ddn-metadata.json \
     --authn-config-path auth_config.json
   ```
You should now have the v3-engine up and running at http://localhost:3000.
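As a quick sanity check that your models made it into the schema (a sketch; the root fields depend on your own metadata), list the query root fields with the admin role:

```sh
# List the root query fields exposed by your metadata; the x-hasura-role
# header matches the GraphiQL note earlier in this README.
curl -s http://localhost:3000/graphql \
  -H 'Content-Type: application/json' \
  -H 'x-hasura-role: admin' \
  -d '{"query": "{ __schema { queryType { fields { name } } } }"}' | jq
```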
Note: We understand that these steps are not very straightforward, and we intend to continuously improve the developer experience of running the OSS V3 Engine.
To run the test suite, you need to `docker login` to `ghcr.io` first:

```sh
docker login -u <username> -p <token> ghcr.io
```

where `username` is your GitHub username and `token` is your GitHub PAT. The PAT needs to have the `read:packages` scope and Hasura SSO configured. See this for more details.
Next, run the Postgres NDC locally using `docker compose up postgres_connector` and point the host name `postgres_connector` to localhost in your `/etc/hosts` file.
Next, run the custom NDC locally using `docker compose up custom_connector` and point the host name `custom_connector` to localhost in your `/etc/hosts` file, OR you can run `cargo run --bin agent`. Then run `cargo test`.
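If you go the Docker route, a one-liner sketch for the host entries (the service names come from the compose file; adjust if yours differ):

```sh
# Make the compose service names resolve to localhost for the test suite:
echo '127.0.0.1 postgres_connector custom_connector' | sudo tee -a /etc/hosts
```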
The `crates/engine/tests/chinook` directory contains the static files required to run the v3-engine with the Chinook database as a data connector. To get this running, you can run the following command:

```sh
METADATA_PATH=crates/engine/tests/schema.json AUTHN_CONFIG_PATH=static/auth/auth_config.json docker compose up postgres_connector engine
```
Alternatively, the tests can be run in the same Docker image as CI:

```sh
just test
```
There are some tests where we compare the output of the test against an expected golden file. If you make changes that expectedly change a golden file, you can regenerate them like this:
Locally:

```sh
UPDATE_GOLDENFILES=1 cargo test
```

In Docker:

```sh
just update-golden-files
```
The benchmarks operate against the reference agent using the same test cases as the test suite, and need a similar setup.
To run benchmarks for the lexer, parser and validation:

```sh
cargo bench -p lang-graphql "lexer"
cargo bench -p lang-graphql "parser"
cargo bench -p lang-graphql "validation/.*"
```