diff --git a/docs/defer-design.md b/docs/defer-design.md new file mode 100644 index 0000000000..a8107b0aea --- /dev/null +++ b/docs/defer-design.md @@ -0,0 +1,490 @@ +# GraphQL `@defer` — Design Document + +## Overview + +The `@defer` directive allows clients to mark fragments — both inline fragments and named fragment spreads — whose fields should be delivered as separate incremental payloads rather than blocking the primary response. It is only valid on queries; mutations and subscriptions cannot support incremental delivery by design. + +```graphql +# anonymous inline fragment +query { + user(id: "1") { + name + ... @defer { + expensiveField + } + } +} + +# inline fragment with type condition +query { + user(id: "1") { + name + ... on User @defer { + expensiveField + } + } +} + +# named fragment spread +fragment UserDetails on User { + expensiveField +} + +query { + user(id: "1") { + name + ...UserDetails @defer + } +} +``` + +The directive accepts two optional arguments: +- `if: Boolean` — when `false`, the fragment is not deferred and its fields appear in the primary response. Defaults to `true`. +- `label: String` — a client-supplied identifier. **Not yet passed through to incremental responses** — this will be documented once implemented. + +The client receives the primary response immediately with all non-deferred fields, followed by one incremental chunk per deferred group. `hasNext: true` signals more chunks are coming; `hasNext: false` on the final chunk signals the stream is complete. + +```json +// Primary response +{"data": {"user": {"name": "Alice"}}, "hasNext": true} + +// Incremental chunk +{"incremental": [{"data": {"expensiveField": "..."}, "path": ["user"]}], "hasNext": false} +``` + +> **Spec note:** This implementation follows an earlier version of the incremental delivery spec. The current spec draft introduces `pending`/`completed` entries and opaque IDs for correlating chunks — those are not yet implemented. 
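To make the chunk format concrete, here is a minimal sketch of how a client might merge an incremental chunk into the primary data at the item's `path`. The function name and types are illustrative, not part of this codebase; it assumes the earlier-spec chunk shape documented above (no `pending`/`completed` entries).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergeAtPath merges the fields of one incremental item into the primary
// data object at the location identified by path. Path segments are object
// keys (string) or list indices (float64, as decoded from JSON).
func mergeAtPath(data map[string]any, path []any, item map[string]any) {
	node := any(data)
	for _, seg := range path {
		switch s := seg.(type) {
		case string:
			node = node.(map[string]any)[s]
		case float64:
			node = node.([]any)[int(s)]
		}
	}
	target := node.(map[string]any)
	for k, v := range item {
		target[k] = v
	}
}

func main() {
	var primary struct {
		Data    map[string]any `json:"data"`
		HasNext bool           `json:"hasNext"`
	}
	_ = json.Unmarshal([]byte(`{"data":{"user":{"name":"Alice"}},"hasNext":true}`), &primary)

	var chunk struct {
		Incremental []struct {
			Data map[string]any `json:"data"`
			Path []any          `json:"path"`
		} `json:"incremental"`
		HasNext bool `json:"hasNext"`
	}
	_ = json.Unmarshal([]byte(`{"incremental":[{"data":{"expensiveField":"..."},"path":["user"]}],"hasNext":false}`), &chunk)

	for _, item := range chunk.Incremental {
		mergeAtPath(primary.Data, item.Path, item.Data)
	}
	out, _ := json.Marshal(primary.Data)
	fmt.Println(string(out)) // {"user":{"expensiveField":"...","name":"Alice"}}
}
```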
+ +--- + +## High-Level Pipeline + +A query with `@defer` travels through four phases, each producing a richer representation for the next. + +### 1 · Normalization + +- `@defer` is removed from every fragment (inline or named spread) +- Every field inside is stamped with `@__defer_internal(id, parentDeferId, label)` +- defer IDs are assigned sequentially in AST walk order +- A `___typename` placeholder is injected into any selection set where all children are deferred + +### 2 · Planning + +- Fields are mapped to one or more datasources; required fields (`@key`, `@requires`) are added to the operation in the correct defer scope +- `ProcessDefer` propagates deferIDs up through parent nodes to identify root anchor nodes +- The path builder plans each field in one of three modes (deferred field, defer parent, or normal) +- `assignDefer` stamps `resolve.Field.Defer` on each deferred field in the response object tree — consumed by the renderer to classify fields at render time +- `configureFetch` writes the deferID onto `FetchDependencies` of each `SingleFetch` — consumed by post-processing to partition the fetch tree +- If any fetch carries a deferID, a `DeferResponsePlan` is produced; otherwise a `SynchronousResponsePlan` + +### 3 · Post-Processing + +- Fields are merged and the fetch tree is built and ordered by dependency +- The fetch tree is partitioned by `FetchDependencies.DeferID`: empty deferID goes into the primary fetch group; each non-empty deferID forms its own `DeferFetchGroup` +- Groups are sorted numerically, preserving AST definition order + +### 4 · Execution + +- Primary fetches are executed and the initial response is rendered (deferred fields skipped) and flushed to the client +- For each `DeferFetchGroup` in order: + - Deferred fetches are executed + - A two-pass render runs: pre-walk validates auth and detects null-bubbling, then the render pass emits the incremental chunk + - The chunk is flushed immediately + + +--- + +## Phase 1: Normalization 
+ +**Files:** +- `v2/pkg/astnormalization/inline_fragment_expand_defer.go` +- `v2/pkg/astnormalization/defer_ensure_typename.go` +- `v2/pkg/astnormalization/astnormalization.go` (opt-in via `WithInlineDefer()`) + +Normalization is enabled by passing `WithInlineDefer()` to the normalizer. Without it, `@defer` is left untouched and the rest of the pipeline treats the query as a normal synchronous request. + +### Defer Expansion (`inlineFragmentExpandDefer`) + +This visitor converts user-facing `@defer` on fragments into a per-field internal directive the planner can consume directly. It handles both inline fragments (`... on User @defer { ... }`, `... @defer { ... }`) and named fragment spreads (`...MyFragment @defer`). + +**What it does:** + +When it encounters `@defer` on a fragment: + +1. Checks `@defer(if: false)` — if disabled, removes `@defer` from the fragment but does not stamp any fields (they are treated as non-deferred). +2. Removes `@defer` from the fragment node itself. +3. Assigns a sequential integer ID to this defer group. IDs are assigned in AST walk order, so they reflect the order in which `@defer` fragments appear in the document. +4. Records `parentDeferId` pointing to the enclosing defer group's ID, if any (for nested `@defer`). +5. Stamps every field in the selection set with `@__defer_internal(id: "N", parentDeferId: "M", label: "...")`. + +After expansion a fragment like `... @defer { title }` becomes: + +```graphql +... { + title @__defer_internal(id: "1") +} +``` + +And a nested defer like `... @defer { profile { ... @defer { bio } } }` becomes: + +```graphql +... { + profile @__defer_internal(id: "1") { + ... { + bio @__defer_internal(id: "2", parentDeferId: "1") + } + } +} +``` + +**Why stamp individual fields rather than keeping `@defer` on the fragment?** + +The primary motivation is **field merging**. 
A GraphQL query can have duplicate field occurrences — for example, the same field may appear both inside a `@defer` fragment and outside it in the same selection set. By stamping `@__defer_internal` on individual fields, the merge step (`MergeFieldsDefer` in `ast_field.go`) can compare them directly: if a non-deferred version of a field exists alongside a deferred version, the non-deferred version wins and the deferred annotation is discarded. The field ends up in the primary response. If `@defer` remained on the fragment, this field-level merge would be much harder to reason about.

**Why sequential integer IDs?**

IDs are assigned in AST walk order, which matches document definition order. Post-processing sorts defer groups numerically, so incremental chunks are always streamed to the client in the order the `@defer` fragments appear in the query.

### Typename Placeholder (`deferEnsureTypename`)

After defer expansion, a field's selection set can end up with *all* of its child fields carrying `@__defer_internal`. This means all children are deferred — none of them will appear in the primary response. The client must still receive the parent object as an empty `{}` in the initial response so it knows the object exists and where deferred data will be inserted later. To produce that empty object the planner must send a query to the subgraph that selects *something* from it — otherwise the selection set is invalid.

To solve this, `deferEnsureTypename` injects a `___typename` placeholder (triple-underscore alias) into any selection set where all fields are deferred. The triple-underscore alias distinguishes it from a user-requested `__typename`. The `nodeSelectionVisitor` adds it to `skipFieldRefs` so it never appears in the response shape seen by the client — it exists purely to keep the downstream query valid.
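As an illustration (conceptual shape only — the actual internal representation uses AST refs and `skipFieldRefs` rather than a literal alias in the printed query), a selection set whose only child is deferred gains the placeholder like this:

```graphql
# after defer expansion — every child of profile is deferred,
# so the primary selection set for profile would be empty (invalid)
{
  profile {
    bio @__defer_internal(id: "1")
  }
}

# after deferEnsureTypename — the placeholder keeps the primary
# selection set valid and yields "profile": {} in the primary response
{
  profile {
    ___typename: __typename
    bio @__defer_internal(id: "1")
  }
}
```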
+ +The placeholder is placed in the correct defer scope depending on context: + +- If the enclosing parent field is **not deferred**: a plain `___typename` with no `@__defer_internal` annotation is added. It lands in the primary fetch. +- If the enclosing parent field **is deferred** and no child shares the parent's defer ID: `___typename` is annotated with the parent's `@__defer_internal` ID so it is fetched in the parent's defer scope, not the children's scope. +- If at least one child already shares the parent's defer ID: no placeholder is needed — that child is effectively "in scope" for the parent's fetch. + +--- + +## Phase 2: Planning + +**Files:** +- `v2/pkg/engine/plan/datasource_filter_collect_nodes_visitor.go` +- `v2/pkg/engine/plan/datasource_filter_node_suggestions.go` +- `v2/pkg/engine/plan/node_selection_visitor.go` +- `v2/pkg/engine/plan/required_fields_visitor.go` +- `v2/pkg/engine/plan/path_builder_visitor.go` +- `v2/pkg/engine/plan/visitor.go` + +Planning is the most involved phase for defer. Its job is to determine which datasource fetches each field, in which defer scope, and to build a set of planner instances — one per `(datasource, deferID)` pair — that will generate the downstream queries. + +### Why planners are scoped by deferID + +A planner is identified by its datasource hash **and** its deferID. Two planners can share the same datasource but serve different defer scopes. This separation is enforced during path assignment: + +- A planner whose `DeferID` is non-empty refuses to accept non-deferred fields. +- A field with a `deferID` is only accepted by a planner whose `DeferID` matches exactly. + +Without this scoping, a deferred field could be picked up by a non-deferred planner that happens to serve the same datasource and path — producing a primary-scope fetch instead of a deferred one. 
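The scoping rule can be summarised as a predicate over the `(datasource, deferID)` pair. The names below are illustrative stand-ins, not the engine's actual types:

```go
package main

import "fmt"

// plannerKey identifies a planner instance: the datasource it queries
// and the defer scope it serves ("" means the primary response).
type plannerKey struct {
	dsHash  uint64
	deferID string
}

// accepts reports whether a planner may claim a field: the datasource
// must match, and the defer scope must match exactly — a deferred
// planner only takes fields from its own group, and a primary planner
// only takes non-deferred fields.
func accepts(p plannerKey, fieldDS uint64, fieldDeferID string) bool {
	return p.dsHash == fieldDS && p.deferID == fieldDeferID
}

func main() {
	primary := plannerKey{dsHash: 42, deferID: ""}
	deferred := plannerKey{dsHash: 42, deferID: "1"}

	fmt.Println(accepts(primary, 42, ""))   // true: normal field on primary planner
	fmt.Println(accepts(primary, 42, "1"))  // false: deferred field refused by primary planner
	fmt.Println(accepts(deferred, 42, ""))  // false: deferred planner refuses non-deferred fields
	fmt.Println(accepts(deferred, 42, "1")) // true: matching defer scope
}
```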
+ +A single deferID can produce **multiple** planners if the deferred fields are reachable from different root anchors in the query tree — for example, one starting from the root query node and another starting from an entity node in a different part of the tree. + +### Step 1 — Collect nodes (`datasource_filter_collect_nodes_visitor.go`) + +Builds a `NodeSuggestion` tree that maps every field to one or more candidate datasources. For each field it reads `@__defer_internal` and attaches a `DeferInfo` struct (`id`, `parentDeferId`, `label`) to the suggestion, making the defer context of every field available to all subsequent steps. + +### Step 2 — Node selection and required fields (`node_selection_visitor.go`, `required_fields_visitor.go`) + +Resolves which datasource(s) handle each field. Also detects fields that require additional data to be fetched — `@key` fields for entity resolution and `@requires` fields for computed fields — and injects them directly into the operation AST in the correct defer scope. + +**`@requires` fields** are stamped with the same `@__defer_internal` as the field that needs them. They must be present in the same deferred fetch so the field resolver has the data it depends on. + +**`@key` fields** are placed in the *parent* defer scope, or left plain (primary scope) if there is no enclosing defer. The key must already be available before the entity fetch runs, so it cannot be deferred to the same scope as the field that depends on it. When a plain (non-deferred) copy of the key already exists in the selection set, it is reused directly — no annotation needed. + +All injected fields are recorded in `skipFieldRefs` so they never appear in the client response shape. + +### Step 3 — Propagate defer parents (`datasource_filter_node_suggestions.go` — `ProcessDefer`) + +After node selection, `ProcessDefer` runs once over all selected suggestions. 
For every deferred field it walks up the `NodeSuggestion` tree through ancestors **on the same datasource**, searching for the nearest root anchor:

- A **root query node** (e.g. `Query.user`) — a natural starting point for a full query.
- An **entity node that requires a key to be provided** — meaning an entity fetch (`_entities`) will branch from it.

Child nodes (fields that are neither root query fields nor entity fields with a key requirement) cannot independently start a fetch. They must always be included as part of an ancestor's query. The propagation therefore walks all the way up to the first ancestor that *can* start a fetch, adding the deferred field's ID to the `deferIDs` list of every node on that path. Those nodes become **defer parents**.

### Step 4 — Path building (`path_builder_visitor.go`)

For each field, the path builder uses the node suggestion results to plan fetch paths. A field is handled in one of three modes:

**1. Deferred field** (`deferField=true`, `deferID=<own group ID>`)

The field carries `@__defer_internal`. It is planned as a deferred path under its own deferID. The path builder looks for an existing planner with a matching `(datasource, deferID)` pair. If none exists, a new planner is created for a new `objectFetchConfiguration` with `deferID` set to the field's ID.

**2. Defer parent** (`deferField=false`, `deferID=""`, non-empty `deferIDs` list)

The field has one or more child deferIDs in its `deferIDs` list from `ProcessDefer`. It is planned **once per child deferID** it covers, each time as a non-deferred path on the planner that owns that child deferID. This anchors the deferred fetch at the correct root node and ensures the planner's generated query contains the full path down to the deferred fields. Without this, the child fields — which cannot start a fetch on their own — would have no root to attach to.

**3. Normal field** (`deferField=false`, `deferID=""`)

No defer involvement.
Planned once on the primary-scope planner for its datasource. + +A field can be in modes 1 and 2 simultaneously: it may carry its own `deferID` (deferred under one group) while also appearing as a parent anchor for fields deferred under other groups. + +### Step 5 — Assign defer annotations and emit the plan (`visitor.go`) + +**`assignDefer`** runs for every field in the response object tree. When a field's `pathConfiguration.deferredField` is true, it sets `resolve.Field.Defer = &resolve.DeferField{DeferID: ...}`. This annotation is the signal the **renderer** uses at execution time: fields with `Defer != nil` are skipped during the primary response pass and included only during the incremental pass whose `deferID` matches. + +**`configureFetch`** writes `objectFetchConfiguration.deferID` onto `FetchDependencies.DeferID` of the resulting `SingleFetch`. This is the signal **post-processing** uses to partition the fetch tree into primary and deferred groups. + +After all planners are built, if any planner exposes a non-empty `DeferID()`, the plan is a `DeferResponsePlan`. Otherwise it is a `SynchronousResponsePlan`. + +--- + +## Phase 3: Post-Processing + +**Files:** +- `v2/pkg/engine/postprocess/postprocess.go` +- `v2/pkg/engine/postprocess/extract_defer_fetches.go` + +Post-processing takes the raw plan produced by the visitor and turns it into an executable form. For a `DeferResponsePlan` the steps run in this order: + +**1. Merge fields** (`mergeFields`) + +Merges duplicate field nodes in the response object tree. This can leave behind fields from different query branches that happen to resolve to the same path. + +**2. Build flat fetch tree** (`createFetchTree`) + +Promotes `RawFetches` from the planner into a flat sequence node — a single root with one child per fetch. At this point all fetches are in one list regardless of their deferID. + +**3. 
Process flat fetch tree** (`processFlatFetchTree`) + +Three sub-steps run over the flat list: +- **Resolve input templates** — substitutes variable placeholders in fetch inputs (e.g. entity representation variables) with concrete references to previously fetched data. +- **Deduplication** — removes identical fetches that would otherwise query the same data twice. +- **Create concrete fetch types** — converts generic fetch nodes into concrete typed nodes (single fetch, batch fetch, parallel fetch) based on their shape and dependencies. + +**4. Extract deferred fetches** (`extractDeferFetches`) + +This is the defer-specific step. The flat fetch tree is partitioned by `FetchDependencies.DeferID`: + +- Fetches with an empty `DeferID` stay in the primary response fetch tree. +- Fetches with a non-empty `DeferID` are grouped by ID into `DeferFetchGroup` structs and stored in `GraphQLDeferResponse.Defers`. + +The split must happen before step 5 because each group is organised independently. Groups are sorted numerically by ID, preserving AST definition order so chunks stream to the client in the order the `@defer` fragments appear in the query. + +**5. Organise fetch trees** + +`organizeFetchTree` runs separately on the primary fetch tree and on each `DeferFetchGroup`'s fetch tree. It reorders fetch nodes so that a fetch always executes after all fetches it depends on, and wraps independent fetches in parallel nodes where possible. Each group is organised as a self-contained tree. `DependsOnFetchIDs` is used during this ordering step to sequence fetches correctly within a tree; after organisation it serves only as metadata for query plan display. Cross-group dependencies (e.g. a deferred entity fetch that depends on a key from the primary response) are not re-checked at runtime — they are satisfied structurally because the execution loop always completes the primary response before running any deferred group. 
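The partition-and-sort of step 4 can be sketched as follows. The types are simplified stand-ins for the real fetch-tree nodes, and the sketch assumes deferIDs are decimal strings (which is why the sort parses them as integers rather than comparing lexicographically):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// fetch is a simplified stand-in for a planned fetch node carrying
// the DeferID from its FetchDependencies.
type fetch struct {
	name    string
	deferID string // "" → primary response
}

// partition splits fetches into the primary group and one group per
// non-empty deferID, with group IDs sorted numerically so chunks
// stream in AST definition order.
func partition(fetches []fetch) (primary []fetch, groups map[string][]fetch, order []string) {
	groups = map[string][]fetch{}
	for _, f := range fetches {
		if f.deferID == "" {
			primary = append(primary, f)
			continue
		}
		if _, seen := groups[f.deferID]; !seen {
			order = append(order, f.deferID)
		}
		groups[f.deferID] = append(groups[f.deferID], f)
	}
	sort.Slice(order, func(i, j int) bool {
		a, _ := strconv.Atoi(order[i])
		b, _ := strconv.Atoi(order[j])
		return a < b
	})
	return primary, groups, order
}

func main() {
	_, _, order := partition([]fetch{
		{"user", ""}, {"bio", "2"}, {"expensive", "1"}, {"more", "2"},
	})
	fmt.Println(order) // [1 2]
}
```

Note that a lexicographic sort would order `"10"` before `"2"`, which is why the IDs are compared numerically.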
+ +--- + +## Phase 4: Execution + +**Files:** +- `v2/pkg/engine/resolve/resolve.go` (`ResolveGraphQLDeferResponse`, line 439) +- `v2/pkg/engine/resolve/resolvable.go` (`ResolveDefer`, line 266) + +### 1. `ResolveGraphQLDeferResponse` + +`ResolveGraphQLDeferResponse` is the entry point for defer execution. It differs from the regular `ResolveGraphQLResponse` in that it does not produce a single response — it drives a streaming loop that emits multiple chunks to the client over time. + +The loop runs as follows: + +1. **Initialise** the resolvable state with the operation type. +2. **Fetch primary data** — the loader executes all fetches in the primary fetch tree (`response.Response.Fetches`), populating the shared JSON data buffer. +3. **Render the primary response** — `resolvable.Resolve` is called with `deferMode=true` and `deferID=""`. In this mode `collectDeferFields` skips all fields where `Field.Defer != nil`, rendering only non-deferred fields. Because `deferMode=true` and no errors occurred, `hasNext: true` is written unconditionally at the end of the primary response. +4. **Flush** — the primary chunk is sent to the client immediately. +5. If any errors occurred during primary rendering, the loop stops. +6. **For each `DeferFetchGroup`** in definition order: + - The loader executes the group's fetch tree, appending deferred data into the shared buffer alongside the already-fetched primary data. + - `resolvable.deferID` is set to the group's ID. + - `ResolveDefer` is called to render the incremental chunk. + - The chunk is flushed to the client immediately. + - If errors occurred, the loop stops. +7. **`writer.Complete()`** signals the end of the stream. + +The same shared data buffer (`response.Response.Data`) is reused across all passes. Each deferred fetch appends its results into the buffer at the correct paths, so the renderer can find them during the incremental pass. + +### 2. 
`DeferResponseWriter` + +```go +type DeferResponseWriter interface { + io.Writer + Flush() error + Complete() +} +``` + +`Flush()` commits the current chunk to the client. `Complete()` closes the stream. Each group produces exactly one `Flush()` call. + +### 3. `ResolveDefer` — Incremental Rendering + +`ResolveDefer` generates one incremental chunk for a given `deferID`. Unlike a regular response render, it cannot simply walk the entire object tree — it must find only the fields belonging to the current defer group, which may be scattered at different depths and paths within the tree. It also cannot start writing to the client until it knows there are no authorization errors, since the HTTP stream is already open and partially sent. + +For these reasons rendering runs in **two passes** over the same object tree. + +#### **Pass 1 — pre-walk** (`enableRender=false`) + +The tree is walked without writing any bytes. For each object node, `collectDeferFields` classifies its fields: + +- Fields whose `Defer.DeferID` matches `r.deferID` → **render set** (will be written in pass 2) +- Fields with no `Defer` annotation, or whose `Defer.DeferID` is numerically smaller than `r.deferID` → **seek set** (traversed to find nested defer content) +- Fields whose `Defer.DeferID` is numerically larger than `r.deferID` → **skipped** (not yet fetched) + +The seek set exists because after normalization the same object can appear in both deferred and non-deferred contexts, producing response tree nodes whose outer object has no deferID but whose nested fields do. The walker must traverse into those outer objects to reach the matching fields inside them. Similarly, an already-completed earlier defer group (smaller ID) may contain nested fields belonging to the current group — the walker seeks into it. + +During this pass, authorization checks run and null-bubbling through non-nullable chains is detected. 
If a non-nullable field fails authorization, `deferItemDataNull` is set, signalling that the entire incremental item for this object must render as `{"data": null, ...}`. + +#### **Pass 2 — render** (`enableRender=true`) + +The same walk runs again. For each object node that has matching deferred fields, the render pass must decide which incremental item envelope to produce. This decision is made before writing any bytes for that item, based on `deferItemDataNull` set during pass 1. + +The `"path"` value in the envelope is taken from `r.path` — the path stack at the moment the object is entered, pointing to the location in the response tree where the client should merge the incremental data. + +A single `ResolveDefer` call can produce **multiple incremental items** within the one envelope — one per object node that owns matching deferred fields, found by the seeker as it traverses the tree. Array items also produce separate entries: each element of a list that contains deferred fields gets its own incremental item with an index in its path. + +`hasNext: true` is written on all chunks except the last. The last chunk in the loop writes `hasNext: false`. + + +##### Normal envelope (`deferItemDataNull=false`) + +The outer `{"incremental": [` wrapper is opened, then `printDeferEnvelopeOpen` writes `{"data": {`, the deferred fields are rendered inside, then `printDeferEnvelopeClose` appends `}, "path": [...]}`. The result is: + +```json +{"incremental": [{"data": {"expensiveField": "..."}, "path": ["user"]}], "hasNext": ...} +``` + +##### Null envelope (`deferItemDataNull=true`) + +`printDeferEnvelopeNullData` writes the entire item as +```json +{"data": null, "path": [...], "errors": [...]} +``` + +in one shot — the normal `{"data": {` opener and the outer `{"incremental": [` wrapper are never written. The walker returns immediately without descending further. + +This is exactly why the pre-walk is necessary. 
Each chunk is assembled in an intermediate buffer before being flushed to the client. Once bytes have been written into that buffer you cannot go back and modify them — for example you cannot change `{"data": {field: "value"}` into `{"data": null, ...}` after the fact. The pre-walk ensures the render pass knows which shape to produce before it writes the first byte. + +--- + +## Key Data Structures + +### `GraphQLDeferResponse` (resolve/response.go) + +```go +type GraphQLDeferResponse struct { + Response *GraphQLResponse // primary (non-deferred) fields + Defers []*DeferFetchGroup // one per @defer group, in order +} + +type DeferFetchGroup struct { + DeferID string + Fetches *FetchTreeNode +} +``` + +### `DeferField` (resolve/node_object.go) + +```go +type DeferField struct { + DeferID string +} +``` + +Attached to `Field` in the response object tree. During rendering, fields with a +non-empty `DeferField.DeferID` are skipped in the primary pass and included only +when `deferID` matches. + +### `FetchDependencies.DeferID` (resolve/fetch.go) + +```go +type FetchDependencies struct { + FetchID int + DependsOnFetchIDs []int + DeferID string // non-empty → belongs to a deferred group +} +``` + +--- + +## Wire Format + +**Primary response** — sent immediately, contains all non-deferred fields. `hasNext: true` signals more chunks are coming. + +```json +{"data": {"user": {"id": "1", "name": "Alice"}}, "hasNext": true} +``` + +**Incremental response — normal envelope** — one per `@defer` group when all fields resolved successfully. Each item carries the deferred data and the path where the client should merge it. + +```json +{"incremental": [{"data": {"expensiveField": "..."}, "path": ["user"]}], "hasNext": true} +``` + +**Incremental response — null data envelope** — emitted when a non-nullable field in the deferred group fails (authorization error or null-bubbling). The data is null and errors are included. 
+ +```json +{"incremental": [{"data": null, "path": ["user"], "errors": [{"message": "..."}]}], "hasNext": true} +``` + +- `hasNext: true` on all chunks except the last. +- `hasNext: false` on the final chunk, signalling the stream is complete. + +--- + +## Design Decisions + +| Decision | Rationale | +|----------|-----------| +| **`@defer` → `@__defer_internal` stamped on individual fields** | The primary motivation is field merging. Stamping at field level lets `MergeFieldsDefer` compare deferred and non-deferred copies of the same field directly and discard the deferred annotation when a non-deferred counterpart exists. Fragment-level `@defer` would make this merge much harder to reason about. | +| **Sequential integer defer IDs** | IDs are assigned in AST walk order, matching document definition order. This lets post-processing sort groups numerically so chunks stream to the client in the order `@defer` fragments appear in the query. | +| **`parentDeferId` tracking** | Required fields (`@key`, `@requires`) and `___typename` placeholders must land in the correct defer scope. `parentDeferId` lets the required fields visitor determine that scope without re-walking the tree. | +| **`___typename` placeholder** | When all children of a selection set are deferred, the parent object must still appear as `{}` in the primary response so the client knows where to insert deferred data. The placeholder keeps the downstream query valid. The triple-underscore alias ensures it is excluded from the client response shape via `skipFieldRefs`. | +| **Planners scoped by `(datasource, deferID)` pair** | Prevents non-deferred planners from claiming deferred fields that share the same datasource and path. Each defer group gets its own dedicated set of planners so downstream queries are generated in the correct scope. | +| **Fetch tree split happens in post-processing, after deduplication** | Deduplication and template resolution must see the full flat fetch list to work correctly. 
The split must happen after those steps but before `organizeFetchTree`, which must run independently per group since primary and deferred trees are ordered separately. | +| **Two-pass rendering in `ResolveDefer`** | The pre-walk determines the correct envelope shape before any bytes are written to the intermediate chunk buffer. Two failure cases require the null envelope (`{"data": null, "path": [...], "errors": [...]}`): unauthorized fields (values must not leak) and null-bubbling from non-nullable field errors. Without the pre-walk, the render pass would open the normal `{"data": {` envelope and then be unable to change it once bytes have already been written into the buffer. | +| **Sequential deferred group delivery** | Groups are fetched and flushed one at a time in definition order. Parallel fetch-and-stream is not implemented. | +| **`@defer` is query-only** | Mutations require serial field execution; subscriptions already stream continuously. Neither is compatible with incremental delivery semantics. | + +--- + +## Configuration & Feature Flags + +| Option | Location | Purpose | +|--------|----------|---------| +| `WithInlineDefer()` | `astnormalization/astnormalization.go` | Enables defer normalization; without this the `@defer` directive is left untouched. | +| `DisableExtractDeferFetches()` | `postprocess/postprocess.go` | Skips the fetch-splitting step (useful for testing the planner in isolation). | + +--- + +## Known Limitations / TODOs + +- **Mutations and subscriptions not supported.** The planner detects `isDefer` and + creates `DeferResponsePlan` only for queries. +- **Sequential delivery only.** Deferred groups are fetched one after another; there + is no parallel fetching across groups. +- **`MergeFieldsDefer` TODO** (`ast_field.go:206`): When merging two fields that both + carry `@__defer_internal`, the merge logic does not yet fully account for `parentId` + reconciliation. 
+ +--- + +## File Reference + +| Area | File | +|------|------| +| Defer directive constants | `v2/pkg/lexer/literal/literal.go` | +| Field AST helpers (merge, stamp, read deferID) | `v2/pkg/ast/ast_field.go` | +| Defer expansion (normalization) | `v2/pkg/astnormalization/inline_fragment_expand_defer.go` | +| Typename placeholder (normalization) | `v2/pkg/astnormalization/defer_ensure_typename.go` | +| Collect nodes visitor | `v2/pkg/engine/plan/datasource_filter_collect_nodes_visitor.go` | +| NodeSuggestions + ProcessDefer | `v2/pkg/engine/plan/datasource_filter_node_suggestions.go` | +| Node selection + skipFieldRefs | `v2/pkg/engine/plan/node_selection_visitor.go` | +| Required fields visitor (defer scope logic) | `v2/pkg/engine/plan/required_fields_visitor.go` | +| Path builder (three planning modes) | `v2/pkg/engine/plan/path_builder_visitor.go` | +| assignDefer + configureFetch + plan type | `v2/pkg/engine/plan/visitor.go` | +| Plan type definitions | `v2/pkg/engine/plan/plan.go` | +| Post-processing pipeline | `v2/pkg/engine/postprocess/postprocess.go` | +| Extract deferred fetches | `v2/pkg/engine/postprocess/extract_defer_fetches.go` | +| Response types (GraphQLDeferResponse, DeferFetchGroup) | `v2/pkg/engine/resolve/response.go` | +| Field defer annotation (DeferField) | `v2/pkg/engine/resolve/node_object.go` | +| Fetch dependencies (DeferID) | `v2/pkg/engine/resolve/fetch.go` | +| Execution entry point | `v2/pkg/engine/resolve/resolve.go` | +| Incremental rendering (ResolveDefer, collectDeferFields) | `v2/pkg/engine/resolve/resolvable.go` | +| Integration tests (planner) | `v2/pkg/engine/datasource/graphql_datasource/graphql_datasource_defer_test.go` | +| Integration tests (engine) | `execution/engine/execution_engine_defer_test.go` | +| Normalization tests | `v2/pkg/astnormalization/inline_fragment_expand_defer_test.go` | +| Typename placeholder tests | `v2/pkg/astnormalization/defer_ensure_typename_test.go` | +| Required fields defer tests | 
`v2/pkg/engine/plan/required_fields_visitor_test.go` | \ No newline at end of file diff --git a/execution/engine/config_factory_proxy_test.go b/execution/engine/config_factory_proxy_test.go index 4cddfef40f..ce7b2b0a16 100644 --- a/execution/engine/config_factory_proxy_test.go +++ b/execution/engine/config_factory_proxy_test.go @@ -132,6 +132,7 @@ func TestProxyEngineConfigFactory_EngineConfiguration(t *testing.T) { expectedConfig.SetFieldConfigurations(expectedFieldConfigs) sortFieldConfigurations(config.FieldConfigurations()) + assert.Equal(t, graphqlGeneratorFullSchema, string(config.Schema().RawSchema())) assert.Equal(t, expectedConfig, config) }) diff --git a/execution/engine/engine_config_test.go b/execution/engine/engine_config_test.go index db6427d70b..7f92bc6d01 100644 --- a/execution/engine/engine_config_test.go +++ b/execution/engine/engine_config_test.go @@ -358,19 +358,21 @@ type Language { __typename: String! } -"The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +"The ` + "`Int`" + ` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." scalar Int -"The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +"The ` + "`Float`" + ` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." scalar Float -"The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +"The ` + "`String`" + ` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." 
scalar String -"The 'Boolean' scalar type represents 'true' or 'false' ." +"The ` + "`Boolean` scalar type represents `true` or `false`." + `" scalar Boolean -"The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID." +""" +The ` + "`ID`" + ` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID. +""" scalar ID "Directs the executor to include this field or fragment only when the argument is true." @@ -395,7 +397,9 @@ directive @deprecated( reason: String = "No longer supported" ) on FIELD_DEFINITION | ARGUMENT_DEFINITION | ENUM_VALUE | INPUT_FIELD_DEFINITION +"Exposes a URL that specifies the behavior of this scalar" directive @specifiedBy( + "The URL that specifies the behavior of this scalar." url: String! ) on SCALAR @@ -406,6 +410,14 @@ All fields defined within a @oneOf input must be nullable in the schema. """ directive @oneOf on INPUT_OBJECT +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." + if: Boolean! = true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + """ A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. In some cases, you need to provide options to alter GraphQL's execution behavior @@ -571,4 +583,11 @@ enum __TypeKind { LIST "Indicates this type is a non-null. 
'ofType' is a valid field." NON_NULL -}` +} + +directive @__defer_internal( + id: Int! + parentDeferId: Int + "A unique identifier for the results." + label: String +) repeatable on FIELD` diff --git a/execution/engine/execution_engine.go b/execution/engine/execution_engine.go index 178a8a5e3c..b24ab28e99 100644 --- a/execution/engine/execution_engine.go +++ b/execution/engine/execution_engine.go @@ -153,6 +153,7 @@ func (e *ExecutionEngine) Execute(ctx context.Context, operation *graphql.Reques astnormalization.WithRemoveFragmentDefinitions(), astnormalization.WithRemoveUnusedVariables(), astnormalization.WithInlineFragmentSpreads(), + astnormalization.WithInlineDefer(), ) if err != nil { return err @@ -243,6 +244,9 @@ func (e *ExecutionEngine) Execute(ctx context.Context, operation *graphql.Reques operation.ComputeActualCost(costCalculator, e.config.plannerConfig, execContext.resolveContext.ActualListSizes) } return nil + case *plan.DeferResponsePlan: + _, err := e.resolver.ResolveGraphQLDeferResponse(execContext.resolveContext, p.Response, writer) + return err case *plan.SubscriptionResponsePlan: return e.resolver.ResolveGraphQLSubscription(execContext.resolveContext, p.Response, writer) default: diff --git a/execution/engine/execution_engine_defer_test.go b/execution/engine/execution_engine_defer_test.go new file mode 100644 index 0000000000..34674fb604 --- /dev/null +++ b/execution/engine/execution_engine_defer_test.go @@ -0,0 +1,2304 @@ +package engine + +import ( + "testing" + + "github.com/stretchr/testify/require" + + "github.com/wundergraph/graphql-go-tools/execution/graphql" + "github.com/wundergraph/graphql-go-tools/v2/pkg/engine/datasource/graphql_datasource" + "github.com/wundergraph/graphql-go-tools/v2/pkg/engine/plan" +) + +func TestExecutionEngine_Execute_Defer(t *testing.T) { + type TestCase struct { + name string + definition string + dataSources []plan.DataSource + } + + makeRootNodesTestCase := func() TestCase { + definition := ` + type User { 
+ id: ID! + name: String! + title: String! + info: Info! + } + + type Info { + email: String! + phone: String! + } + + type Query { + user: User! + } + ` + + dataSources := []plan.DataSource{ + mustGraphqlDataSourceConfiguration(t, + "id-1", + mustFactory(t, + testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "first", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"{user {name}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"name":"Black"}}}`, + }, + `{"query":"{user {___typename: __typename}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"___typename":"User"}}}`, + }, + `{"query":"{user {title}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"title":"Sabbat"}}}`, + }, + `{"query":"{user {id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"id":"1"}}}`, + }, + `{"query":"{user {title id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"title":"Sabbat","id":"1"}}}`, + }, + `{"query":"{user {name title id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"name":"Black","title":"Sabbat","id":"1"}}}`, + }, + `{"query":"{user {info {email phone}}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"info":{"email":"black@sabbat","phone":"123"}}}}`, + }, + `{"query":"{user {info {phone} title}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"info":{"phone":"123"},"title":"Sabbat"}}}`, + }, + `{"query":"{user {name info {email}}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"name":"Black","info":{"email":"black@sabbat"}}}}`, + }, + `{"query":"{user {name info {___typename: __typename}}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"name":"Black","info":{"___typename":"Info"}}}}`, + }, + `{"query":"{user {info {___typename: __typename}}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"info":{"___typename":"Info"}}}}`, + }, + `{"query":"{user {info {email}}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"info":{"email":"black@sabbat"}}}}`, + }, + 
`{"query":"{user {info {phone}}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"info":{"phone":"123"}}}}`, + }, + }, + }), + ), + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + { + TypeName: "Query", + FieldNames: []string{"user"}, + }, + }, + ChildNodes: []plan.TypeField{ + { + TypeName: "User", + FieldNames: []string{"id", "title", "name", "info"}, + }, + { + TypeName: "Info", + FieldNames: []string{"email", "phone"}, + }, + }, + }, + mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{ + URL: "https://first/", + Method: "POST", + }, + SchemaConfiguration: mustSchemaConfig( + t, + &graphql_datasource.FederationConfiguration{ + Enabled: true, + ServiceSDL: definition, + }, + definition, + ), + }), + ), + } + + return TestCase{ + name: "defer on non entity field", + definition: definition, + dataSources: dataSources, + } + } + + makeEntityTestCase := func() TestCase { + definition := ` + type User { + id: ID! + name: String! + title: String! + info: Info! + } + + type Info { + email: String! + phone: String! + } + + type Query { + user: User! + } + ` + + firstSubgraphSDL := ` + type User @key(fields: "id") { + id: ID! + info: Info! + } + + type Info { + email: String! + } + + type Query { + user: User! + } + ` + + secondSubgraphSDL := ` + type User @key(fields: "id") { + id: ID! + name: String! + title: String! + info: Info! + } + + type Info { + phone: String! 
+ } + ` + + dataSources := []plan.DataSource{ + mustGraphqlDataSourceConfiguration(t, + "id-1", + mustFactory(t, + testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "first", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"{user {id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"id":"1","info":{"email":"black@sabbat"}}}}`, + }, + `{"query":"{user {___typename: __typename __typename id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"___typename":"User","__typename":"User","id":1}}}`, + }, + `{"query":"{user {info {email}}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"info":{"email":"black@sabbat"}}}}`, + }, + `{"query":"{user {info {___typename: __typename}}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"info":{"___typename":"Info"}}}}`, + }, + `{"query":"{user {__typename __internal_id: id __internal_1_id: id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"__typename":"User","__internal_id":"1","__internal_1_id":"1"}}}`, + }, + `{"query":"{user {info {___typename: __typename} __typename id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"info":{"___typename":"Info"},"__typename":"User","id":"1"}}}`, + }, + `{"query":"{user {___typename: __typename __typename __internal_id: id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"___typename":"User","__typename":"User","__internal_id":"1"}}}`, + }, + `{"query":"{user {__typename id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"__typename":"User","id":"1"}}}`, + }, + `{"query":"{user {id __typename}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"id":"1","__typename":"User"}}}`, + }, + `{"query":"{user {info {email} __typename id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"info":{"email":"black@sabbat"},"__typename":"User","id":"1"}}}`, + }, + }, + }), + ), + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + { + TypeName: "Query", + FieldNames: []string{"user"}, + }, + { + TypeName: 
"User", + FieldNames: []string{"id", "info"}, + }, + }, + ChildNodes: []plan.TypeField{ + { + TypeName: "Info", + FieldNames: []string{"email"}, + }, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + { + TypeName: "User", + SelectionSet: "id", + }, + }, + }, + }, + mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{ + URL: "https://first/", + Method: "POST", + }, + SchemaConfiguration: mustSchemaConfig( + t, + &graphql_datasource.FederationConfiguration{ + Enabled: true, + ServiceSDL: firstSubgraphSDL, + }, + firstSubgraphSDL, + ), + }), + ), + mustGraphqlDataSourceConfiguration(t, + "id-2", + mustFactory(t, + testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "second", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename name}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","name":"Black","title":"Sabbat","info":{"phone":"123"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename name}}}","variables":{"representations":[{"__typename":"User","id":1}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","name":"Black"}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename title}}}","variables":{"representations":[{"__typename":"User","id":1}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","title":"Sabbat"}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... 
on User {__typename title}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","name":"Black","title":"Sabbat","info":{"phone":"123"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename name title}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","name":"Black","title":"Sabbat","info":{"phone":"123"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename info {phone} title}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","name":"Black","title":"Sabbat","info":{"phone":"123"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... 
on User {__typename info {phone}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","name":"Black","title":"Sabbat","info":{"phone":"123"}}]}}`, + }, + }, + }), + ), + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + { + TypeName: "User", + FieldNames: []string{"id", "title", "name", "info"}, + }, + }, + ChildNodes: []plan.TypeField{ + { + TypeName: "Info", + FieldNames: []string{"phone"}, + }, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + { + TypeName: "User", + SelectionSet: "id", + }, + }, + }, + }, + mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{ + URL: "https://second/", + Method: "POST", + }, + SchemaConfiguration: mustSchemaConfig( + t, + &graphql_datasource.FederationConfiguration{ + Enabled: true, + ServiceSDL: secondSubgraphSDL, + }, + secondSubgraphSDL, + ), + }), + ), + } + + return TestCase{ + name: "entity - distributed fields", + definition: definition, + dataSources: dataSources, + } + } + + testCases := []TestCase{ + makeRootNodesTestCase(), + makeEntityTestCase(), + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + + schema, err := graphql.NewSchemaFromString(tc.definition) + require.NoError(t, err) + + t.Run("single deferred field", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferUserTitle", + Query: ` + query DeferUserTitle { + user { + name + ... 
@defer { + title + } + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{"user":{"name":"Black"}},"hasNext":true} +{"incremental":[{"data":{"title":"Sabbat"},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("single deferred field between regular fields", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferUserTitle", + Query: ` + query DeferUserTitle { + user { + title + ... @defer { + name + } + id + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{"user":{"title":"Sabbat","id":"1"}},"hasNext":true} +{"incremental":[{"data":{"name":"Black"},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("multiple deferred fields", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferUserTitle", + Query: ` + query DeferUserTitle { + user { + name + ... @defer { + title + id + } + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{"user":{"name":"Black"}},"hasNext":true} +{"incremental":[{"data":{"title":"Sabbat","id":"1"},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("multiple deferred fields - all object fields deferred", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferUserTitle", + Query: ` + query DeferUserTitle { + user { + ... 
@defer { + name + title + id + } + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{"user":{}},"hasNext":true} +{"incremental":[{"data":{"name":"Black","title":"Sabbat","id":"1"},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("nested defers", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferUserTitle", + Query: ` + query DeferUserTitle { + user { + name + ... @defer { + title + ... @defer { + id + } + } + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{"user":{"name":"Black"}},"hasNext":true} +{"incremental":[{"data":{"title":"Sabbat"},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"id":"1"},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("nested defers variation", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferUserNameTitle", + Query: ` + query DeferUserNameTitle { + user { + ... @defer { + name + ... @defer { title } + } + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{"user":{}},"hasNext":true} +{"incremental":[{"data":{"name":"Black"},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"title":"Sabbat"},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("parallel defers", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferUserTitle", + Query: ` + query DeferUserTitle { + user { + name + ... @defer { + title + } + ... 
@defer { + id + } + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{"user":{"name":"Black"}},"hasNext":true} +{"incremental":[{"data":{"title":"Sabbat"},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"id":"1"},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer nested object", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferUserTitle", + Query: ` + query DeferUserTitle { + user { + name + ... @defer { + info { + email + phone + } + } + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{"user":{"name":"Black"}},"hasNext":true} +{"incremental":[{"data":{"info":{"email":"black@sabbat","phone":"123"}},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer nested object with duplicated non deferred object", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferUserTitle", + Query: ` + query DeferUserTitle { + user { + name + info { + email + } + ... @defer { + info { + phone + } + title + } + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{"user":{"name":"Black","info":{"email":"black@sabbat"}}},"hasNext":true} +{"incremental":[{"data":{"title":"Sabbat"},"path":["user"]},{"data":{"phone":"123"},"path":["user","info"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer nested object fields", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferUserTitle", + Query: ` + query DeferUserTitle { + user { + name + info { + ... 
@defer { + email + phone + } + } + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{"user":{"name":"Black","info":{}}},"hasNext":true} +{"incremental":[{"data":{"email":"black@sabbat","phone":"123"},"path":["user","info"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("extensive parallel defers across all possible fields", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferEverythingParallel", + Query: ` + query DeferEverythingParallel { + ... @defer { + user { + ... @defer { id } + ... @defer { name } + ... @defer { title } + ... @defer { + info { + ... @defer { email } + ... @defer { phone } + } + } + } + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{},"hasNext":true} +{"incremental":[{"data":{"user":{}},"path":[]}],"hasNext":true} +{"incremental":[{"data":{"id":"1"},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"name":"Black"},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"title":"Sabbat"},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"info":{}},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"email":"black@sabbat"},"path":["user","info"]}],"hasNext":true} +{"incremental":[{"data":{"phone":"123"},"path":["user","info"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("extensive fully nested defers across all possible fields", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferEverythingNested", + Query: ` + query DeferEverythingNested { + ... @defer { + user { + ... @defer { + id + ... @defer { + name + ... @defer { + title + ... @defer { + info { + ... @defer { + email + ... 
@defer { + phone + } + } + } + } + } + } + } + } + } + }`, + } + }, + dataSources: tc.dataSources, + expectedResponse: `{"data":{},"hasNext":true} +{"incremental":[{"data":{"user":{}},"path":[]}],"hasNext":true} +{"incremental":[{"data":{"id":"1"},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"name":"Black"},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"title":"Sabbat"},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"info":{}},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"email":"black@sabbat"},"path":["user","info"]}],"hasNext":true} +{"incremental":[{"data":{"phone":"123"},"path":["user","info"]}],"hasNext":false} +`, + }, withStreamingResponse())) + }) + } + + t.Run("cross subgraph requires", func(t *testing.T) { + // Merged schema visible to clients. + definition := ` + type Query { + user: User! + } + type User { + id: ID! + name: String! + billing: Billing! + settings: Settings! + account: Account! + notifications: [String!]! + } + type Billing { + plan: String! + currency: String! + } + type Settings { + region: String! + language: String! + } + type Account { + type: String! + limit: Int! + } + ` + + // Subgraph 1: owns Query.user, User.name, User.account. + // account @requires(fields: "billing { plan } settings { region }") — depends on sub2 and sub3. + firstSubgraphSDL := ` + type Query { + user: User! + } + + type User @key(fields: "id") { + id: ID! + name: String! + account: Account! @requires(fields: "billing { plan } settings { region }") + billing: Billing! @external + settings: Settings! @external + } + + type Account { + type: String! + limit: Int! + } + + type Billing { + plan: String! @external + } + + type Settings { + region: String! @external + } + ` + + // Subgraph 2: owns User.billing, User.notifications. + // notifications @requires(fields: "name settings { language }") — depends on sub1 (name) and sub3 (settings). + secondSubgraphSDL := ` + type User @key(fields: "id") { + id: ID! 
+ name: String! @external + notifications: [String!]! @requires(fields: "name settings { language }") + billing: Billing! + settings: Settings! @external + } + + type Billing { + plan: String! + currency: String! + } + + type Settings { + language: String! @external + } + ` + + // Subgraph 3: owns User.settings. + thirdSubgraphSDL := ` + type User @key(fields: "id") { + id: ID! + settings: Settings! + } + + type Settings { + region: String! + language: String! + } + ` + + schema, err := graphql.NewSchemaFromString(definition) + require.NoError(t, err) + + dataSources := []plan.DataSource{ + mustGraphqlDataSourceConfiguration(t, "id-1", mustFactory(t, testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "first", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"{user {name}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"name":"Alice"}}}`, + }, + `{"query":"{user {__typename id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"__typename":"User","id":"1"}}}`, + }, + `{"query":"{user {___typename: __typename}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"___typename":"User"}}}`, + }, + `{"query":"{user {name __typename id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"name":"Alice","__typename":"User","id":"1"}}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... 
on User {__typename account {type}}}}","variables":{"representations":[{"__typename":"User","billing":{"plan":"pro"},"settings":{"region":"us-east"},"id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","account":{"type":"premium"}}]}}`, + }, + `{"query":"{user {__internal_name: name}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"__internal_name":"Alice"}}}`, + }, + `{"query":"{user {name account {type} __internal_name: name}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"name":"Alice","account":{"type":"premium"},"__internal_name":"Alice"}}}`, + }, + `{"query":"{user {___typename: __typename __typename id}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"___typename":"User","__typename":"User","id":"1"}}}`, + }, + `{"query":"{user {account {type} __internal_name: name}}"}`: { + statusCode: 200, + body: `{"data":{"user":{"account":{"type":"premium"},"__internal_name":"Alice"}}}`, + }, + }, + })), &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + { + TypeName: "Query", + FieldNames: []string{"user"}, + }, + { + TypeName: "User", + FieldNames: []string{"id", "name", "account"}, + ExternalFieldNames: []string{"billing", "settings"}, + }, + }, + ChildNodes: []plan.TypeField{ + { + TypeName: "Account", + FieldNames: []string{"type", "limit"}, + }, + { + TypeName: "Billing", + ExternalFieldNames: []string{"plan"}, + }, + { + TypeName: "Settings", + ExternalFieldNames: []string{"region"}, + }, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + { + TypeName: "User", + SelectionSet: "id", + }, + }, + Requires: plan.FederationFieldConfigurations{ + { + TypeName: "User", + FieldName: "account", + SelectionSet: "billing { plan } settings { region }", + }, + }, + }, + }, mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{ + URL: "https://first/", + Method: "POST", + }, + SchemaConfiguration: mustSchemaConfig(t, 
&graphql_datasource.FederationConfiguration{ + Enabled: true, + ServiceSDL: firstSubgraphSDL, + }, firstSubgraphSDL), + })), + mustGraphqlDataSourceConfiguration(t, "id-2", mustFactory(t, testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "second", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename billing {plan}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","billing":{"plan":"pro"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename notifications}}}","variables":{"representations":[{"__typename":"User","name":"Alice","settings":{"language":"en"},"id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","notifications":["msg1","msg2"]}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename __internal_billing: billing {plan}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","__internal_billing":{"plan":"pro"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename billing {plan} __internal_billing: billing {plan}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","billing":{"plan":"pro"},"__internal_billing":{"plan":"pro"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... 
on User {__typename billing {plan} notifications}}}","variables":{"representations":[{"__typename":"User","id":"1","name":"Alice","settings":{"language":"en"}}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","billing":{"plan":"pro"},"notifications":["msg1","msg2"]}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename notifications billing {plan}}}}","variables":{"representations":[{"__typename":"User","id":"1","name":"Alice","settings":{"language":"en"}}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","notifications":["msg1","msg2"],"billing":{"plan":"pro"}}]}}`, + }, + }, + })), &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + { + TypeName: "User", + FieldNames: []string{"id", "billing", "notifications"}, + ExternalFieldNames: []string{"name", "settings"}, + }, + }, + ChildNodes: []plan.TypeField{ + { + TypeName: "Billing", + FieldNames: []string{"plan", "currency"}, + }, + { + TypeName: "Settings", + ExternalFieldNames: []string{"language"}, + }, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + { + TypeName: "User", + SelectionSet: "id", + }, + }, + Requires: plan.FederationFieldConfigurations{ + { + TypeName: "User", + FieldName: "notifications", + SelectionSet: "name settings { language }", + }, + }, + }, + }, mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{ + URL: "https://second/", + Method: "POST", + }, + SchemaConfiguration: mustSchemaConfig(t, &graphql_datasource.FederationConfiguration{ + Enabled: true, + ServiceSDL: secondSubgraphSDL, + }, secondSubgraphSDL), + })), + mustGraphqlDataSourceConfiguration(t, "id-3", mustFactory(t, testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "third", + expectedPath: "/", + responses: map[string]sendResponse{ + 
`{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename __internal_3_settings: settings {language}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","__internal_3_settings":{"language":"en"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename __internal_2_settings: settings {language}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","__internal_2_settings":{"language":"en"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename __internal_settings: settings {region}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","__internal_settings":{"region":"us-east"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename __internal_settings: settings {language}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","__internal_settings":{"language":"en"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename settings {language} __internal_settings: settings {language}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","settings":{"language":"en"},"__internal_settings":{"language":"en"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... 
on User {__typename settings {region}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","settings":{"region":"us-east"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename settings {language}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","settings":{"language":"en"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename settings {region language}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","settings":{"region":"us-east","language":"en"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename __internal_settings: settings {region language}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","__internal_settings":{"region":"us-east","language":"en"}}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... 
on User {__typename settings {region} __internal_settings: settings {region language}}}}","variables":{"representations":[{"__typename":"User","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"User","settings":{"region":"us-east"},"__internal_settings":{"region":"us-east","language":"en"}}]}}`, + }, + }, + })), &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + { + TypeName: "User", + FieldNames: []string{"id", "settings"}, + }, + }, + ChildNodes: []plan.TypeField{ + { + TypeName: "Settings", + FieldNames: []string{"region", "language"}, + }, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + { + TypeName: "User", + SelectionSet: "id", + }, + }, + }, + }, mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{ + URL: "https://third/", + Method: "POST", + }, + SchemaConfiguration: mustSchemaConfig(t, &graphql_datasource.FederationConfiguration{ + Enabled: true, + ServiceSDL: thirdSubgraphSDL, + }, thirdSubgraphSDL), + })), + } + + t.Run("non-defer - name only", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + Query: `{ user { name } }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice"}}}`, + })) + + t.Run("non-defer - account requires billing and settings", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + Query: `{ user { account { type } } }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"account":{"type":"premium"}}}}`, + })) + + t.Run("non-defer - notifications requires name and settings", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + Query: `{ user { notifications } }`, + } + }, + dataSources: 
dataSources, + expectedResponse: `{"data":{"user":{"notifications":["msg1","msg2"]}}}`, + })) + + t.Run("non-defer - both requires fields together", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + Query: `{ user { name account { type } notifications } }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice","account":{"type":"premium"},"notifications":["msg1","msg2"]}}}`, + })) + + t.Run("non-defer - all fields including raw billing and settings", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + Query: `{ user { name billing { plan } settings { region } account { type } notifications } }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice","billing":{"plan":"pro"},"settings":{"region":"us-east"},"account":{"type":"premium"},"notifications":["msg1","msg2"]}}}`, + })) + + t.Run("defer - account field deferred", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferAccount", + Query: ` + query DeferAccount { + user { + name + ... @defer { + account { type } + } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice"}},"hasNext":true} +{"incremental":[{"data":{"account":{"type":"premium"}},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - notifications field deferred", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferNotifications", + Query: ` + query DeferNotifications { + user { + name + ... 
@defer { + notifications + } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice"}},"hasNext":true} +{"incremental":[{"data":{"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - all user fields deferred in single block", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferAll", + Query: ` + query DeferAll { + user { + ... @defer { + name + account { type } + notifications + } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{}},"hasNext":true} +{"incremental":[{"data":{"name":"Alice","account":{"type":"premium"},"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("all user fields without defer", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferAll", + Query: ` + query DeferAll { + user { + name + account { type } + notifications + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice","account":{"type":"premium"},"notifications":["msg1","msg2"]}}}`, + })) + + t.Run("defer - parallel defers on both cross-subgraph requires fields", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferBothRequires", + Query: ` + query DeferBothRequires { + user { + name + ... @defer { + account { type } + } + ... 
@defer { + notifications + } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice"}},"hasNext":true} +{"incremental":[{"data":{"account":{"type":"premium"}},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - nested defers: outer has account, inner has notifications", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferNested", + Query: ` + query DeferNested { + user { + name + ... @defer { + account { type } + ... @defer { + notifications + } + } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice"}},"hasNext":true} +{"incremental":[{"data":{"account":{"type":"premium"}},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - parallel defers on raw entity fields alongside requires", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferMixed", + Query: ` + query DeferMixed { + user { + name + billing { plan } + ... @defer { + account { type } + } + ... 
@defer { + notifications + } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice","billing":{"plan":"pro"}}},"hasNext":true} +{"incremental":[{"data":{"account":{"type":"premium"}},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - deeply nested requires: account outer, notifications inner, with raw fields", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferDeepNested", + Query: ` + query DeferDeepNested { + user { + ... @defer { + name + billing { plan } + ... @defer { + account { type } + ... @defer { + notifications + } + } + } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{}},"hasNext":true} +{"incremental":[{"data":{"name":"Alice","billing":{"plan":"pro"}},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"account":{"type":"premium"}},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + // Defer versions of each non-defer test — verify @defer doesn't break @requires resolution. + + t.Run("defer - name only", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferNameOnly", + Query: ` + query DeferNameOnly { + user { + ... 
@defer { name } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{}},"hasNext":true} +{"incremental":[{"data":{"name":"Alice"},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - only account deferred (no other immediate fields)", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferAccountOnly", + Query: ` + query DeferAccountOnly { + user { + ... @defer { account { type } } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{}},"hasNext":true} +{"incremental":[{"data":{"account":{"type":"premium"}},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - only notifications deferred (no other immediate fields)", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferNotificationsOnly", + Query: ` + query DeferNotificationsOnly { + user { + ... @defer { notifications } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{}},"hasNext":true} +{"incremental":[{"data":{"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - all fields in single defer block", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferAllFields", + Query: ` + query DeferAllFields { + user { + ... 
@defer { + name + billing { plan } + settings { region } + account { type } + notifications + } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{}},"hasNext":true} +{"incremental":[{"data":{"name":"Alice","billing":{"plan":"pro"},"settings":{"region":"us-east"},"account":{"type":"premium"},"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + // Tests mixing requires-source fields (billing, settings) with derived @requires fields + // (account, notifications) in same or parallel defer blocks. + + t.Run("defer - requires source (billing) and derived field (account) in same defer block", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferBillingAndAccount", + Query: ` + query DeferBillingAndAccount { + user { + name + ... @defer { + billing { plan } + account { type } + } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice"}},"hasNext":true} +{"incremental":[{"data":{"billing":{"plan":"pro"},"account":{"type":"premium"}},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - requires source (billing) and derived field (account) in parallel defers", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferBillingParallelAccount", + Query: ` + query DeferBillingParallelAccount { + user { + name + ... @defer { billing { plan } } + ... 
@defer { account { type } } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice"}},"hasNext":true} +{"incremental":[{"data":{"billing":{"plan":"pro"}},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"account":{"type":"premium"}},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - requires source (settings) and derived field (notifications) in same defer block", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferSettingsAndNotifications", + Query: ` + query DeferSettingsAndNotifications { + user { + name + ... @defer { + settings { language } + notifications + } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice"}},"hasNext":true} +{"incremental":[{"data":{"settings":{"language":"en"},"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - requires source (settings) and derived field (notifications) in parallel defers", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferSettingsParallelNotifications", + Query: ` + query DeferSettingsParallelNotifications { + user { + name + ... @defer { settings { language } } + ... 
@defer { notifications } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice"}},"hasNext":true} +{"incremental":[{"data":{"settings":{"language":"en"}},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - all requires sources deferred together, then derived fields deferred in parallel", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferSourcesThenDerived", + Query: ` + query DeferSourcesThenDerived { + user { + name + ... @defer { + billing { plan } + settings { region language } + } + ... @defer { + account { type } + notifications + } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice"}},"hasNext":true} +{"incremental":[{"data":{"billing":{"plan":"pro"},"settings":{"region":"us-east","language":"en"}},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"account":{"type":"premium"},"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer - requires sources immediate, both derived fields deferred in parallel", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "DeferDerivedFieldsOnly", + Query: ` + query DeferDerivedFieldsOnly { + user { + name + billing { plan } + settings { region language } + ... @defer { account { type } } + ... 
@defer { notifications } + } + }`, + } + }, + dataSources: dataSources, + expectedResponse: `{"data":{"user":{"name":"Alice","billing":{"plan":"pro"},"settings":{"region":"us-east","language":"en"}}},"hasNext":true} +{"incremental":[{"data":{"account":{"type":"premium"}},"path":["user"]}],"hasNext":true} +{"incremental":[{"data":{"notifications":["msg1","msg2"]},"path":["user"]}],"hasNext":false} +`, + }, withStreamingResponse())) + }) + + t.Run("non-nullable field errors", func(t *testing.T) { + definition := ` + type Query { product: Product! } + type Product { + id: ID! + name: String! + nameWithError: String + price: Float! + } + ` + + firstSubgraphSDL := ` + type Query { product: Product! } + type Product @key(fields: "id") { + id: ID! + name: String! + nameWithError: String + } + ` + + secondSubgraphSDL := ` + type Product @key(fields: "id") { + id: ID! + price: Float! + } + ` + + dataSources := []plan.DataSource{ + mustGraphqlDataSourceConfiguration(t, + "id-1", + mustFactory(t, + testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "first", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"{product {___typename: __typename}}"}`: { + statusCode: 200, + body: `{"data":{"product":{"___typename":"Product"}}}`, + }, + `{"query":"{product {___typename: __typename __typename id}}"}`: { + statusCode: 200, + body: `{"data":{"product":{"___typename":"Product","__typename":"Product","id":"1"}}}`, + }, + `{"query":"{product {name}}"}`: { + statusCode: 200, + body: `{"data":{"product":{"name":null}}}`, + }, + `{"query":"{product {nameWithError}}"}`: { + statusCode: 200, + body: `{"data":{"product":{"nameWithError":null}},"errors":[{"message":"upstream name error","path":["product","nameWithError"]}]}`, + }, + }, + }), + ), + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + {TypeName: "Query", FieldNames: []string{"product"}}, + {TypeName: "Product", FieldNames: []string{"id", "name", 
"nameWithError"}}, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + {TypeName: "Product", SelectionSet: "id"}, + }, + }, + }, + mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{URL: "https://first/", Method: "POST"}, + SchemaConfiguration: mustSchemaConfig(t, + &graphql_datasource.FederationConfiguration{Enabled: true, ServiceSDL: firstSubgraphSDL}, + firstSubgraphSDL, + ), + }), + ), + mustGraphqlDataSourceConfiguration(t, + "id-2", + mustFactory(t, + testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "second", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on Product {__typename price}}}","variables":{"representations":[{"__typename":"Product","id":"1"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"Product","price":null}]}}`, + }, + }, + }), + ), + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + {TypeName: "Product", FieldNames: []string{"price"}}, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + {TypeName: "Product", SelectionSet: "id"}, + }, + }, + }, + mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{URL: "https://second/", Method: "POST"}, + SchemaConfiguration: mustSchemaConfig(t, + &graphql_datasource.FederationConfiguration{Enabled: true, ServiceSDL: secondSubgraphSDL}, + secondSubgraphSDL, + ), + }), + ), + } + + schema, err := graphql.NewSchemaFromString(definition) + require.NoError(t, err) + + t.Run("defer from first subgraph - null non-nullable field", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ product { ... 
@defer { name } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"product":{}},"hasNext":true} +{"incremental":[{"data":null,"path":["product"],"errors":[{"message":"Cannot return null for non-nullable field 'Query.product.name'.","path":["product","name"]}]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer from first subgraph - null field with upstream error", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ product { ... @defer { nameWithError } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"product":{}},"hasNext":true} +{"incremental":[{"data":{"nameWithError":null},"path":["product"],"errors":[{"message":"Failed to fetch from Subgraph 'id-1'."}]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer from second subgraph - null non-nullable field", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ product { ... @defer { price } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"product":{}},"hasNext":true} +{"incremental":[{"data":null,"path":["product"],"errors":[{"message":"Cannot return null for non-nullable field 'Query.product.price'.","path":["product","price"]}]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer from both subgraphs - null non-nullable fields - name first", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ product { ... @defer { name } ... 
@defer { price } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"product":{}},"hasNext":true} +{"incremental":[{"data":null,"path":["product"],"errors":[{"message":"Cannot return null for non-nullable field 'Query.product.name'.","path":["product","name"]}]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer from both subgraphs - null non-nullable fields - price first", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ product { ... @defer { price } ... @defer { name } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"product":{}},"hasNext":true} +{"incremental":[{"data":null,"path":["product"],"errors":[{"message":"Cannot return null for non-nullable field 'Query.product.price'.","path":["product","price"]}]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer error halts subsequent defers - nameWithError then price", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ product { ... @defer { nameWithError } ... @defer { price } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"product":{}},"hasNext":true} +{"incremental":[{"data":{"nameWithError":null},"path":["product"],"errors":[{"message":"Failed to fetch from Subgraph 'id-1'."}]}],"hasNext":false} +`, + }, withStreamingResponse())) + + }) + + t.Run("nested list entities", func(t *testing.T) { + definition := ` + type Query { items: [Item!]! } + type Item { + id: ID! + name: String! + title: String! + subItems: [SubItem!]! + } + type SubItem { + id: ID! + description: String! + } + ` + schema, err := graphql.NewSchemaFromString(definition) + require.NoError(t, err) + + // Sub1: owns Query.items, Item.{id,name,subItems}, SubItem.id + firstSubgraphSDL := ` + type Query { items: [Item!]! } + type Item @key(fields: "id") { + id: ID! 
+ name: String! + subItems: [SubItem!]! + } + type SubItem @key(fields: "id") { + id: ID! + } + ` + firstSubgraphDS := mustGraphqlDataSourceConfiguration(t, + "id-1", + mustFactory(t, testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "first", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"{items {___typename: __typename __typename id}}"}`: { + statusCode: 200, + body: `{"data":{"items":[{"___typename":"Item","__typename":"Item","id":"1"},{"___typename":"Item","__typename":"Item","id":"2"}]}}`, + }, + `{"query":"{items {name}}"}`: { + statusCode: 200, + body: `{"data":{"items":[{"name":"ItemOne"},{"name":"ItemTwo"}]}}`, + }, + `{"query":"{items {___typename: __typename}}"}`: { + statusCode: 200, + body: `{"data":{"items":[{"___typename":"Item"},{"___typename":"Item"}]}}`, + }, + `{"query":"{items {subItems {___typename: __typename __typename id}}}"}`: { + statusCode: 200, + body: `{"data":{"items":[{"subItems":[{"___typename":"SubItem","__typename":"SubItem","id":"s1"},{"___typename":"SubItem","__typename":"SubItem","id":"s2"}]},{"subItems":[{"___typename":"SubItem","__typename":"SubItem","id":"s3"}]}]}}`, + }, + `{"query":"{items {id}}"}`: { + statusCode: 200, + body: `{"data":{"items":[{"id":"1"},{"id":"2"}]}}`, + }, + `{"query":"{items {id name}}"}`: { + statusCode: 200, + body: `{"data":{"items":[{"id":"1","name":"ItemOne"},{"id":"2","name":"ItemTwo"}]}}`, + }, + `{"query":"{items {subItems {id __typename __internal_id: id}}}"}`: { + statusCode: 200, + body: `{"data":{"items":[{"subItems":[{"id":"s1","__typename":"SubItem","__internal_id":"s1"},{"id":"s2","__typename":"SubItem","__internal_id":"s2"}]},{"subItems":[{"id":"s3","__typename":"SubItem","__internal_id":"s3"}]}]}}`, + }, + `{"query":"{items {___typename: __typename __typename __internal_id: id}}"}`: { + statusCode: 200, + body: 
`{"data":{"items":[{"___typename":"Item","__typename":"Item","__internal_id":"1"},{"___typename":"Item","__typename":"Item","__internal_id":"2"}]}}`, + }, + `{"query":"{items {id __typename __internal_id: id}}"}`: { + statusCode: 200, + body: `{"data":{"items":[{"id":"1","__typename":"Item","__internal_id":"1"},{"id":"2","__typename":"Item","__internal_id":"2"}]}}`, + }, + `{"query":"{items {id subItems {id __typename __internal_id: id}}}"}`: { + statusCode: 200, + body: `{"data":{"items":[{"id":"1","subItems":[{"id":"s1","__typename":"SubItem","__internal_id":"s1"},{"id":"s2","__typename":"SubItem","__internal_id":"s2"}]},{"id":"2","subItems":[{"id":"s3","__typename":"SubItem","__internal_id":"s3"}]}]}}`, + }, + }, + })), + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + {TypeName: "Query", FieldNames: []string{"items"}}, + {TypeName: "Item", FieldNames: []string{"id", "name", "subItems"}}, + {TypeName: "SubItem", FieldNames: []string{"id"}}, + }, + ChildNodes: []plan.TypeField{ + {TypeName: "SubItem", FieldNames: []string{"id"}}, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + {TypeName: "Item", SelectionSet: "id"}, + {TypeName: "SubItem", SelectionSet: "id"}, + }, + }, + }, + mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{URL: "https://first/", Method: "POST"}, + SchemaConfiguration: mustSchemaConfig(t, + &graphql_datasource.FederationConfiguration{Enabled: true, ServiceSDL: firstSubgraphSDL}, + firstSubgraphSDL, + ), + }), + ) + + // Sub2: extends Item with title + secondSubgraphSDL := ` + type Item @key(fields: "id") { + id: ID! + title: String! 
+ } + ` + secondSubgraphDS := mustGraphqlDataSourceConfiguration(t, + "id-2", + mustFactory(t, testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "second", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on Item {__typename title}}}","variables":{"representations":[{"__typename":"Item","id":"1"},{"__typename":"Item","id":"2"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"Item","title":"TitleOne"},{"__typename":"Item","title":"TitleTwo"}]}}`, + }, + }, + })), + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + {TypeName: "Item", FieldNames: []string{"id", "title"}}, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + {TypeName: "Item", SelectionSet: "id"}, + }, + }, + }, + mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{URL: "https://second/", Method: "POST"}, + SchemaConfiguration: mustSchemaConfig(t, + &graphql_datasource.FederationConfiguration{Enabled: true, ServiceSDL: secondSubgraphSDL}, + secondSubgraphSDL, + ), + }), + ) + + // Sub3: extends SubItem with description + thirdSubgraphSDL := ` + type SubItem @key(fields: "id") { + id: ID! + description: String! + } + ` + thirdSubgraphDS := mustGraphqlDataSourceConfiguration(t, + "id-3", + mustFactory(t, testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "third", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... 
on SubItem {__typename description}}}","variables":{"representations":[{"__typename":"SubItem","id":"s1"},{"__typename":"SubItem","id":"s2"},{"__typename":"SubItem","id":"s3"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"SubItem","description":"Desc1"},{"__typename":"SubItem","description":"Desc2"},{"__typename":"SubItem","description":"Desc3"}]}}`, + }, + }, + })), + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + {TypeName: "SubItem", FieldNames: []string{"id", "description"}}, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + {TypeName: "SubItem", SelectionSet: "id"}, + }, + }, + }, + mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{URL: "https://third/", Method: "POST"}, + SchemaConfiguration: mustSchemaConfig(t, + &graphql_datasource.FederationConfiguration{Enabled: true, ServiceSDL: thirdSubgraphSDL}, + thirdSubgraphSDL, + ), + }), + ) + + dataSources := []plan.DataSource{firstSubgraphDS, secondSubgraphDS, thirdSubgraphDS} + + t.Run("category A - no id in initial response", func(t *testing.T) { + t.Run("defer name from sub1", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ items { ... @defer { name } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"items":[{},{}]},"hasNext":true} +{"incremental":[{"data":{"name":"ItemOne"},"path":["items",0]},{"data":{"name":"ItemTwo"},"path":["items",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer title from sub2", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ items { ... 
@defer { title } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"items":[{},{}]},"hasNext":true} +{"incremental":[{"data":{"title":"TitleOne"},"path":["items",0]},{"data":{"title":"TitleTwo"},"path":["items",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer subItems description from sub3", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ items { subItems { ... @defer { description } } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"items":[{"subItems":[{},{}]},{"subItems":[{}]}]},"hasNext":true} +{"incremental":[{"data":{"description":"Desc1"},"path":["items",0,"subItems",0]},{"data":{"description":"Desc2"},"path":["items",0,"subItems",1]},{"data":{"description":"Desc3"},"path":["items",1,"subItems",0]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("items subItems and description all in separate nested defers", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ ... @defer { items { id ... @defer { subItems { id ... 
@defer { description } } } } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{},"hasNext":true} +{"incremental":[{"data":{"items":[{"id":"1"},{"id":"2"}]},"path":[]}],"hasNext":true} +{"incremental":[{"data":{"subItems":[{"id":"s1"},{"id":"s2"}]},"path":["items",0]},{"data":{"subItems":[{"id":"s3"}]},"path":["items",1]}],"hasNext":true} +{"incremental":[{"data":{"description":"Desc1"},"path":["items",0,"subItems",0]},{"data":{"description":"Desc2"},"path":["items",0,"subItems",1]},{"data":{"description":"Desc3"},"path":["items",1,"subItems",0]}],"hasNext":false} +`, + }, withStreamingResponse())) + }) + + t.Run("category B - id deferred with parallel defers", func(t *testing.T) { + t.Run("defer id only", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ items { ... @defer { id } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"items":[{},{}]},"hasNext":true} +{"incremental":[{"data":{"id":"1"},"path":["items",0]},{"data":{"id":"2"},"path":["items",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer id and name together", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ items { ... @defer { id name } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"items":[{},{}]},"hasNext":true} +{"incremental":[{"data":{"id":"1","name":"ItemOne"},"path":["items",0]},{"data":{"id":"2","name":"ItemTwo"},"path":["items",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer id in parallel with name", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ items { ... @defer { id } ... 
@defer { name } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"items":[{},{}]},"hasNext":true} +{"incremental":[{"data":{"id":"1"},"path":["items",0]},{"data":{"id":"2"},"path":["items",1]}],"hasNext":true} +{"incremental":[{"data":{"name":"ItemOne"},"path":["items",0]},{"data":{"name":"ItemTwo"},"path":["items",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("defer id in parallel with title (cross-subgraph)", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ items { ... @defer { id } ... @defer { title } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"items":[{},{}]},"hasNext":true} +{"incremental":[{"data":{"id":"1"},"path":["items",0]},{"data":{"id":"2"},"path":["items",1]}],"hasNext":true} +{"incremental":[{"data":{"title":"TitleOne"},"path":["items",0]},{"data":{"title":"TitleTwo"},"path":["items",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("parallel defers on subItems id and description", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ items { id ... @defer { subItems { id } } ... 
@defer { subItems { description } } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"items":[{"id":"1"},{"id":"2"}]},"hasNext":true} +{"incremental":[{"data":{"subItems":[{"id":"s1"},{"id":"s2"}]},"path":["items",0]},{"data":{"subItems":[{"id":"s3"}]},"path":["items",1]}],"hasNext":true} +{"incremental":[{"data":{"description":"Desc1"},"path":["items",0,"subItems",0]},{"data":{"description":"Desc2"},"path":["items",0,"subItems",1]},{"data":{"description":"Desc3"},"path":["items",1,"subItems",0]}],"hasNext":false} +`, + }, withStreamingResponse())) + }) + + t.Run("parallel root defers", func(t *testing.T) { + t.Run("subItems id then description", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ ... @defer { items { subItems { id } } } ... @defer { items { subItems { description } } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{},"hasNext":true} +{"incremental":[{"data":{"items":[{"subItems":[{"id":"s1"},{"id":"s2"}]},{"subItems":[{"id":"s3"}]}]},"path":[]}],"hasNext":true} +{"incremental":[{"data":{"description":"Desc1"},"path":["items",0,"subItems",0]},{"data":{"description":"Desc2"},"path":["items",0,"subItems",1]},{"data":{"description":"Desc3"},"path":["items",1,"subItems",0]}],"hasNext":false} +`, + }, withStreamingResponse())) + }) + + t.Run("category C - nested defers", func(t *testing.T) { + t.Run("outer defer items, inner defer name", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ ... @defer { items { id ... 
@defer { name } } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{},"hasNext":true} +{"incremental":[{"data":{"items":[{"id":"1"},{"id":"2"}]},"path":[]}],"hasNext":true} +{"incremental":[{"data":{"name":"ItemOne"},"path":["items",0]},{"data":{"name":"ItemTwo"},"path":["items",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("outer defer items, inner defer title (cross-subgraph)", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ ... @defer { items { id ... @defer { title } } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{},"hasNext":true} +{"incremental":[{"data":{"items":[{"id":"1"},{"id":"2"}]},"path":[]}],"hasNext":true} +{"incremental":[{"data":{"title":"TitleOne"},"path":["items",0]},{"data":{"title":"TitleTwo"},"path":["items",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("outer defer items with subItems, inner defer description", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ ... @defer { items { id subItems { id ... @defer { description } } } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{},"hasNext":true} +{"incremental":[{"data":{"items":[{"id":"1","subItems":[{"id":"s1"},{"id":"s2"}]},{"id":"2","subItems":[{"id":"s3"}]}]},"path":[]}],"hasNext":true} +{"incremental":[{"data":{"description":"Desc1"},"path":["items",0,"subItems",0]},{"data":{"description":"Desc2"},"path":["items",0,"subItems",1]},{"data":{"description":"Desc3"},"path":["items",1,"subItems",0]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("three-level defer: query to items to subItems", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ ... @defer { items { id ... @defer { subItems { id ... 
@defer { description } } } } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{},"hasNext":true} +{"incremental":[{"data":{"items":[{"id":"1"},{"id":"2"}]},"path":[]}],"hasNext":true} +{"incremental":[{"data":{"subItems":[{"id":"s1"},{"id":"s2"}]},"path":["items",0]},{"data":{"subItems":[{"id":"s3"}]},"path":["items",1]}],"hasNext":true} +{"incremental":[{"data":{"description":"Desc1"},"path":["items",0,"subItems",0]},{"data":{"description":"Desc2"},"path":["items",0,"subItems",1]},{"data":{"description":"Desc3"},"path":["items",1,"subItems",0]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("three-level defer with cross-subgraph at middle level", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `{ ... @defer { items { id ... @defer { title subItems { id ... @defer { description } } } } } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{},"hasNext":true} +{"incremental":[{"data":{"items":[{"id":"1"},{"id":"2"}]},"path":[]}],"hasNext":true} +{"incremental":[{"data":{"title":"TitleOne","subItems":[{"id":"s1"},{"id":"s2"}]},"path":["items",0]},{"data":{"title":"TitleTwo","subItems":[{"id":"s3"}]},"path":["items",1]}],"hasNext":true} +{"incremental":[{"data":{"description":"Desc1"},"path":["items",0,"subItems",0]},{"data":{"description":"Desc2"},"path":["items",0,"subItems",1]},{"data":{"description":"Desc3"},"path":["items",1,"subItems",0]}],"hasNext":false} +`, + }, withStreamingResponse())) + }) + }) + + t.Run("named fragments with defer", func(t *testing.T) { + definition := ` + type Query { products: [Product!]! } + type Product { + id: ID! + sku: String! + name: String! + price: Float! + } + ` + schema, err := graphql.NewSchemaFromString(definition) + require.NoError(t, err) + + firstSubgraphSDL := ` + type Query { products: [Product!]! } + type Product @key(fields: "id") { + id: ID! + sku: String! 
+ } + ` + firstSubgraphDS := mustGraphqlDataSourceConfiguration(t, + "id-1", + mustFactory(t, testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "first", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"{products {___typename: __typename __typename id}}"}`: { + statusCode: 200, + body: `{"data":{"products":[{"___typename":"Product","__typename":"Product","id":"1"},{"___typename":"Product","__typename":"Product","id":"2"}]}}`, + }, + `{"query":"{products {___typename: __typename}}"}`: { + statusCode: 200, + body: `{"data":{"products":[{"___typename":"Product"},{"___typename":"Product"}]}}`, + }, + `{"query":"{products {id}}"}`: { + statusCode: 200, + body: `{"data":{"products":[{"id":"1"},{"id":"2"}]}}`, + }, + `{"query":"{products {sku}}"}`: { + statusCode: 200, + body: `{"data":{"products":[{"sku":"sku-1"},{"sku":"sku-2"}]}}`, + }, + `{"query":"{products {id sku}}"}`: { + statusCode: 200, + body: `{"data":{"products":[{"id":"1","sku":"sku-1"},{"id":"2","sku":"sku-2"}]}}`, + }, + `{"query":"{products {id __typename}}"}`: { + statusCode: 200, + body: `{"data":{"products":[{"id":"1","__typename":"Product"},{"id":"2","__typename":"Product"}]}}`, + }, + }, + })), + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + {TypeName: "Query", FieldNames: []string{"products"}}, + {TypeName: "Product", FieldNames: []string{"id", "sku"}}, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + {TypeName: "Product", SelectionSet: "id"}, + }, + }, + }, + mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{URL: "https://first/", Method: "POST"}, + SchemaConfiguration: mustSchemaConfig(t, + &graphql_datasource.FederationConfiguration{Enabled: true, ServiceSDL: firstSubgraphSDL}, + firstSubgraphSDL, + ), + }), + ) + + secondSubgraphSDL := ` + type Product @key(fields: "id") { + id: ID! + name: String! + price: Float! 
+ } + ` + secondSubgraphDS := mustGraphqlDataSourceConfiguration(t, + "id-2", + mustFactory(t, testConditionalNetHttpClient(t, conditionalTestCase{ + reportUnused: true, + expectedHost: "second", + expectedPath: "/", + responses: map[string]sendResponse{ + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on Product {__typename name}}}","variables":{"representations":[{"__typename":"Product","id":"1"},{"__typename":"Product","id":"2"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"Product","name":"Product One"},{"__typename":"Product","name":"Product Two"}]}}`, + }, + `{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on Product {__typename name price}}}","variables":{"representations":[{"__typename":"Product","id":"1"},{"__typename":"Product","id":"2"}]}}`: { + statusCode: 200, + body: `{"data":{"_entities":[{"__typename":"Product","name":"Product One","price":9.99},{"__typename":"Product","name":"Product Two","price":19.99}]}}`, + }, + }, + })), + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + {TypeName: "Product", FieldNames: []string{"id", "name", "price"}}, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + {TypeName: "Product", SelectionSet: "id"}, + }, + }, + }, + mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{URL: "https://second/", Method: "POST"}, + SchemaConfiguration: mustSchemaConfig(t, + &graphql_datasource.FederationConfiguration{Enabled: true, ServiceSDL: secondSubgraphSDL}, + secondSubgraphSDL, + ), + }), + ) + + dataSources := []plan.DataSource{firstSubgraphDS, secondSubgraphDS} + + t.Run("category A - defer on named fragment spread", func(t *testing.T) { + t.Run("A1 - defer sub1 field sku via fragment spread", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + 
return graphql.Request{Query: `fragment SkuFields on Product { sku } { products { ...SkuFields @defer } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"products":[{},{}]},"hasNext":true} +{"incremental":[{"data":{"sku":"sku-1"},"path":["products",0]},{"data":{"sku":"sku-2"},"path":["products",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("A2 - defer sub2 field name via fragment spread", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `fragment NameFields on Product { name } { products { ...NameFields @defer } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"products":[{},{}]},"hasNext":true} +{"incremental":[{"data":{"name":"Product One"},"path":["products",0]},{"data":{"name":"Product Two"},"path":["products",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("A3 - id non-deferred, sub2 name and price deferred via fragment", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `fragment DetailFields on Product { name price } { products { id ...DetailFields @defer } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"products":[{"id":"1"},{"id":"2"}]},"hasNext":true} +{"incremental":[{"data":{"name":"Product One","price":9.99},"path":["products",0]},{"data":{"name":"Product Two","price":19.99},"path":["products",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("A4 - parallel fragment spreads from different subgraphs, both deferred", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `fragment SkuFrag on Product { sku } fragment NameFrag on Product { name } { products { ...SkuFrag @defer ...NameFrag @defer } }`} + }, + dataSources: dataSources, + expectedResponse: 
`{"data":{"products":[{},{}]},"hasNext":true} +{"incremental":[{"data":{"sku":"sku-1"},"path":["products",0]},{"data":{"sku":"sku-2"},"path":["products",1]}],"hasNext":true} +{"incremental":[{"data":{"name":"Product One"},"path":["products",0]},{"data":{"name":"Product Two"},"path":["products",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + }) + + t.Run("category B - defer inside named fragment definition", func(t *testing.T) { + t.Run("B1 - defer sub1 field sku inside named fragment", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `fragment ProductFrag on Product { id ... @defer { sku } } { products { ...ProductFrag } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"products":[{"id":"1"},{"id":"2"}]},"hasNext":true} +{"incremental":[{"data":{"sku":"sku-1"},"path":["products",0]},{"data":{"sku":"sku-2"},"path":["products",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("B2 - defer sub2 field name inside named fragment", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `fragment ProductFrag on Product { id ... @defer { name } } { products { ...ProductFrag } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"products":[{"id":"1"},{"id":"2"}]},"hasNext":true} +{"incremental":[{"data":{"name":"Product One"},"path":["products",0]},{"data":{"name":"Product Two"},"path":["products",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("B3 - parallel sub1 and sub2 defers inside named fragment", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `fragment ProductFrag on Product { id ... @defer { sku } ... 
@defer { name } } { products { ...ProductFrag } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"products":[{"id":"1"},{"id":"2"}]},"hasNext":true} +{"incremental":[{"data":{"sku":"sku-1"},"path":["products",0]},{"data":{"sku":"sku-2"},"path":["products",1]}],"hasNext":true} +{"incremental":[{"data":{"name":"Product One"},"path":["products",0]},{"data":{"name":"Product Two"},"path":["products",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + }) + + t.Run("category C - defer on spread containing inner defers", func(t *testing.T) { + t.Run("C1 - multiple sub1 fields id and sku bundled in single deferred spread", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `fragment SkuIdFrag on Product { id sku } { products { ...SkuIdFrag @defer } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"products":[{},{}]},"hasNext":true} +{"incremental":[{"data":{"id":"1","sku":"sku-1"},"path":["products",0]},{"data":{"id":"2","sku":"sku-2"},"path":["products",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + + t.Run("C2 - outer spread deferred delivering sub1 sku, with nested inner sub2 name defer", runWithoutError(ExecutionEngineTestCase{ + schema: schema, + operation: func(t *testing.T) graphql.Request { + return graphql.Request{Query: `fragment SkuWithName on Product { sku ... 
@defer { name } } { products { id ...SkuWithName @defer } }`} + }, + dataSources: dataSources, + expectedResponse: `{"data":{"products":[{"id":"1"},{"id":"2"}]},"hasNext":true} +{"incremental":[{"data":{"sku":"sku-1"},"path":["products",0]},{"data":{"sku":"sku-2"},"path":["products",1]}],"hasNext":true} +{"incremental":[{"data":{"name":"Product One"},"path":["products",0]},{"data":{"name":"Product Two"},"path":["products",1]}],"hasNext":false} +`, + }, withStreamingResponse())) + }) + }) + +} diff --git a/execution/engine/execution_engine_helpers_test.go b/execution/engine/execution_engine_helpers_test.go index 89b181d563..e05afa6ad0 100644 --- a/execution/engine/execution_engine_helpers_test.go +++ b/execution/engine/execution_engine_helpers_test.go @@ -59,6 +59,9 @@ type conditionalTestCase struct { // responses map an expected body to the output that should be sent responses map[string]sendResponse + + reportUnused bool + reportUsed bool } type sendResponse struct { @@ -71,6 +74,17 @@ func createConditionalTestRoundTripper(t *testing.T, testCase conditionalTestCas require.True(t, len(testCase.responses) > 0, "no responses defined") + used := make(map[string]bool) + if testCase.reportUnused { + t.Cleanup(func() { + for key := range testCase.responses { + if !used[key] { + t.Logf("UNUSED MOCK [%s]: %s", testCase.expectedHost, key) + } + } + }) + } + return func(req *http.Request) *http.Response { t.Helper() @@ -83,8 +97,27 @@ func createConditionalTestRoundTripper(t *testing.T, testCase conditionalTestCas require.NoError(t, err) defer req.Body.Close() - require.Containsf(t, testCase.responses, string(gotBody), "received unexpected body: %v", string(gotBody)) + if testCase.reportUsed { + t.Logf("Requested MOCK [%s]: %s", testCase.expectedHost, string(gotBody)) + } + + if !assert.Containsf(t, testCase.responses, string(gotBody), "received unexpected body: %v", string(gotBody)) { + return &http.Response{ + StatusCode: 400, + Body: 
io.NopCloser(bytes.NewBuffer([]byte("received unexpected body"))), + } + } + response := testCase.responses[string(gotBody)] + + if testCase.reportUnused { + used[string(gotBody)] = true + } + + if testCase.reportUsed { + t.Logf("Send MOCK Response:\n %s", response.body) + } + return &http.Response{ StatusCode: response.statusCode, Body: io.NopCloser(bytes.NewBuffer([]byte(response.body))), diff --git a/execution/engine/execution_engine_test.go b/execution/engine/execution_engine_test.go index 0f7c48ac00..85dc1bef85 100644 --- a/execution/engine/execution_engine_test.go +++ b/execution/engine/execution_engine_test.go @@ -106,6 +106,15 @@ func runExecutionTest(testCase ExecutionEngineTestCase, withError bool, expected operation := testCase.operation(t) resultWriter := graphql.NewEngineResultWriter() + + streamingBuf := bytes.NewBuffer(nil) + if opts.streamingResponse { + resultWriter.SetFlushCallback(func(data []byte) { + streamingBuf.Write(data) + streamingBuf.Write([]byte{'\n'}) + }) + } + execCtx, execCtxCancel := context.WithCancel(context.Background()) defer execCtxCancel() err = engine.Execute(execCtx, &operation, &resultWriter, testCase.engineOptions...) 
@@ -137,7 +146,12 @@ func runExecutionTest(testCase ExecutionEngineTestCase, withError bool, expected } if testCase.expectedResponse != "" { - assert.Equal(t, testCase.expectedResponse, actualResponse) + if opts.streamingResponse { + streamingResponse := streamingBuf.String() + assert.Equal(t, testCase.expectedResponse, streamingResponse) + } else { + assert.Equal(t, testCase.expectedResponse, actualResponse) + } } if testCase.expectedEstimatedCost != 0 { @@ -315,6 +329,7 @@ type _executionTestOptions struct { validateRequiredExternalFields bool computeCosts bool relaxFieldSelectionMergingNullability bool + streamingResponse bool } type executionTestOptions func(*_executionTestOptions) @@ -351,6 +366,12 @@ func relaxFieldSelectionMergingNullability() executionTestOptions { } } +func withStreamingResponse() executionTestOptions { + return func(options *_executionTestOptions) { + options.streamingResponse = true + } +} + func TestExecutionEngine_Execute(t *testing.T) { t.Run("apollo router compatibility subrequest HTTP error enabled", runWithoutError( ExecutionEngineTestCase{ @@ -1621,7 +1642,7 @@ func TestExecutionEngine_Execute(t *testing.T) { expectedHost: "example.com", expectedPath: "/", expectedBody: "", - sendResponseBody: `{"data":{"__internal__typename_placeholder":"Query"}}`, + sendResponseBody: `doesn't matter, no fetch will be done, as query typename resolved by engine`, sendStatusCode: 200, }), ), @@ -1659,6 +1680,82 @@ func TestExecutionEngine_Execute(t *testing.T) { expectedResponse: `{"data":{}}`, })) + t.Run("execute operation with all nested fields skipped", runWithoutError(ExecutionEngineTestCase{ + schema: func(t *testing.T) *graphql.Schema { + t.Helper() + schema := ` + type Query { + hero(name: String!): Hero! + } + + type Hero { + name: String! 
+ } + ` + parseSchema, err := graphql.NewSchemaFromString(schema) + require.NoError(t, err) + return parseSchema + }(t), + operation: func(t *testing.T) graphql.Request { + return graphql.Request{ + OperationName: "MyHero", + Variables: []byte(`{"heroName": "Luke"}`), + Query: `query MyHero($heroName: String!){ + hero(name: $heroName) { + name @skip(if: true) + } + }`, + } + }, + dataSources: []plan.DataSource{ + mustGraphqlDataSourceConfiguration(t, + "id", + mustFactory(t, + testNetHttpClient(t, roundTripperTestCase{ + expectedHost: "example.com", + expectedPath: "/", + expectedBody: "", + sendResponseBody: `{"data":{"hero":{"__typename":"Hero"}}}`, + sendStatusCode: 200, + }), + ), + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + {TypeName: "Query", FieldNames: []string{"hero"}}, + }, + ChildNodes: []plan.TypeField{ + {TypeName: "Hero", FieldNames: []string{"name"}}, + }, + }, + mustConfiguration(t, graphql_datasource.ConfigurationInput{ + Fetch: &graphql_datasource.FetchConfiguration{ + URL: "https://example.com/", + Method: "POST", + }, + SchemaConfiguration: mustSchemaConfig( + t, + nil, + `type Query { hero(name: String!): Hero! } type Hero { name: String! 
}`, + ), + }), + ), + }, + fields: []plan.FieldConfiguration{ + { + TypeName: "Query", + FieldName: "hero", + Path: []string{"hero"}, + Arguments: []plan.ArgumentConfiguration{ + { + Name: "name", + SourceType: plan.FieldArgumentSource, + }, + }, + }, + }, + expectedResponse: `{"data":{"hero":{}}}`, + })) + t.Run("execute operation and apply input coercion for lists without variables", runWithoutError(ExecutionEngineTestCase{ schema: graphql.InputCoercionForListSchema(t), operation: func(t *testing.T) graphql.Request { diff --git a/execution/engine/testdata/full_introspection.json b/execution/engine/testdata/full_introspection.json index 8473834888..ee3242e238 100644 --- a/execution/engine/testdata/full_introspection.json +++ b/execution/engine/testdata/full_introspection.json @@ -573,7 +573,7 @@ { "kind": "SCALAR", "name": "Int", - "description": "The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1.", + "description": "The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1.", "fields": null, "inputFields": [], "interfaces": [], @@ -583,7 +583,7 @@ { "kind": "SCALAR", "name": "Float", - "description": "The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point).", + "description": "The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point).", "fields": null, "inputFields": [], "interfaces": [], @@ -593,7 +593,7 @@ { "kind": "SCALAR", "name": "String", - "description": "The 'String' scalar type represents textual data, represented as UTF-8 character sequences. 
The String type is most often used by GraphQL to represent free-form human-readable text.", + "description": "The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text.", "fields": null, "inputFields": [], "interfaces": [], @@ -603,7 +603,7 @@ { "kind": "SCALAR", "name": "Boolean", - "description": "The 'Boolean' scalar type represents 'true' or 'false' .", + "description": "The `Boolean` scalar type represents `true` or `false`.", "fields": null, "inputFields": [], "interfaces": [], @@ -613,7 +613,7 @@ { "kind": "SCALAR", "name": "ID", - "description": "The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID.", + "description": "The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. 
When expected as an input type, any string (such as \"4\") or integer (such as 4) input value will be accepted as an ID.", "fields": null, "inputFields": [], "interfaces": [], @@ -715,14 +715,14 @@ }, { "name": "specifiedBy", - "description": "", + "description": "Exposes a URL that specifies the behavior of this scalar", "locations": [ "SCALAR" ], "args": [ { "name": "url", - "description": "", + "description": "The URL that specifies the behavior of this scalar.", "type": { "kind": "NON_NULL", "name": null, @@ -743,6 +743,40 @@ "INPUT_OBJECT" ], "args": [] + }, + { + "name": "defer", + "description": "Directs the executor to defer this fragment when the if argument is true or undefined.", + "locations": [ + "FRAGMENT_SPREAD", + "INLINE_FRAGMENT" + ], + "args": [ + { + "name": "label", + "description": "A unique identifier for the results.", + "type": { + "kind": "SCALAR", + "name": "String", + "ofType": null + }, + "defaultValue": null + }, + { + "name": "if", + "description": "Controls whether the fragment will be deferred, usually via a variable.", + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "Boolean", + "ofType": null + } + }, + "defaultValue": "true" + } + ] } ] } diff --git a/execution/engine/testdata/full_introspection_with_deprecated.json b/execution/engine/testdata/full_introspection_with_deprecated.json index 74f8fa552f..a885c3759e 100644 --- a/execution/engine/testdata/full_introspection_with_deprecated.json +++ b/execution/engine/testdata/full_introspection_with_deprecated.json @@ -597,7 +597,7 @@ { "kind": "SCALAR", "name": "Int", - "description": "The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1.", + "description": "The `Int` scalar type represents non-fractional signed whole numeric values. 
Int can represent values between -(2^31) and 2^31 - 1.", "fields": null, "inputFields": [], "interfaces": [], @@ -607,7 +607,7 @@ { "kind": "SCALAR", "name": "Float", - "description": "The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point).", + "description": "The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point).", "fields": null, "inputFields": [], "interfaces": [], @@ -617,7 +617,7 @@ { "kind": "SCALAR", "name": "String", - "description": "The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text.", + "description": "The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text.", "fields": null, "inputFields": [], "interfaces": [], @@ -627,7 +627,7 @@ { "kind": "SCALAR", "name": "Boolean", - "description": "The 'Boolean' scalar type represents 'true' or 'false' .", + "description": "The `Boolean` scalar type represents `true` or `false`.", "fields": null, "inputFields": [], "interfaces": [], @@ -637,7 +637,7 @@ { "kind": "SCALAR", "name": "ID", - "description": "The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID.", + "description": "The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. 
When expected as an input type, any string (such as \"4\") or integer (such as 4) input value will be accepted as an ID.", "fields": null, "inputFields": [], "interfaces": [], @@ -749,14 +749,14 @@ }, { "name": "specifiedBy", - "description": "", + "description": "Exposes a URL that specifies the behavior of this scalar", "locations": [ "SCALAR" ], "args": [ { "name": "url", - "description": "", + "description": "The URL that specifies the behavior of this scalar.", "type": { "kind": "NON_NULL", "name": null, @@ -777,6 +777,40 @@ "INPUT_OBJECT" ], "args": [] + }, + { + "name": "defer", + "description": "Directs the executor to defer this fragment when the if argument is true or undefined.", + "locations": [ + "FRAGMENT_SPREAD", + "INLINE_FRAGMENT" + ], + "args": [ + { + "name": "label", + "description": "A unique identifier for the results.", + "type": { + "kind": "SCALAR", + "name": "String", + "ofType": null + }, + "defaultValue": null + }, + { + "name": "if", + "description": "Controls whether the fragment will be deferred, usually via a variable.", + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "Boolean", + "ofType": null + } + }, + "defaultValue": "true" + } + ] } ] } diff --git a/execution/engine/testdata/full_introspection_with_typenames.json b/execution/engine/testdata/full_introspection_with_typenames.json index 2eaf5e37e9..04017ea4f1 100644 --- a/execution/engine/testdata/full_introspection_with_typenames.json +++ b/execution/engine/testdata/full_introspection_with_typenames.json @@ -650,7 +650,7 @@ "__typename": "__Type", "kind": "SCALAR", "name": "Int", - "description": "The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1.", + "description": "The `Int` scalar type represents non-fractional signed whole numeric values. 
Int can represent values between -(2^31) and 2^31 - 1.", "fields": null, "inputFields": [], "interfaces": [], @@ -661,7 +661,7 @@ "__typename": "__Type", "kind": "SCALAR", "name": "Float", - "description": "The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point).", + "description": "The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point).", "fields": null, "inputFields": [], "interfaces": [], @@ -672,7 +672,7 @@ "__typename": "__Type", "kind": "SCALAR", "name": "String", - "description": "The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text.", + "description": "The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text.", "fields": null, "inputFields": [], "interfaces": [], @@ -683,7 +683,7 @@ "__typename": "__Type", "kind": "SCALAR", "name": "Boolean", - "description": "The 'Boolean' scalar type represents 'true' or 'false' .", + "description": "The `Boolean` scalar type represents `true` or `false`.", "fields": null, "inputFields": [], "interfaces": [], @@ -694,7 +694,7 @@ "__typename": "__Type", "kind": "SCALAR", "name": "ID", - "description": "The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID.", + "description": "The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. 
The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as \"4\") or integer (such as 4) input value will be accepted as an ID.", "fields": null, "inputFields": [], "interfaces": [], @@ -809,7 +809,7 @@ { "__typename": "__Directive", "name": "specifiedBy", - "description": "", + "description": "Exposes a URL that specifies the behavior of this scalar", "locations": [ "SCALAR" ], @@ -817,7 +817,7 @@ { "__typename": "__InputValue", "name": "url", - "description": "", + "description": "The URL that specifies the behavior of this scalar.", "type": { "__typename": "__Type", "kind": "NON_NULL", @@ -840,6 +840,45 @@ "INPUT_OBJECT" ], "args": [] + }, + { + "__typename": "__Directive", + "name": "defer", + "description": "Directs the executor to defer this fragment when the if argument is true or undefined.", + "locations": [ + "FRAGMENT_SPREAD", + "INLINE_FRAGMENT" + ], + "args": [ + { + "__typename": "__InputValue", + "name": "label", + "description": "A unique identifier for the results.", + "type": { + "__typename": "__Type", + "kind": "SCALAR", + "name": "String", + "ofType": null + }, + "defaultValue": null + }, + { + "__typename": "__InputValue", + "name": "if", + "description": "Controls whether the fragment will be deferred, usually via a variable.", + "type": { + "__typename": "__Type", + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "Boolean", + "ofType": null + } + }, + "defaultValue": "true" + } + ] } ] } diff --git a/v2/pkg/ast/ast_argument.go b/v2/pkg/ast/ast_argument.go index f8d6b8aadf..2c74616793 100644 --- a/v2/pkg/ast/ast_argument.go +++ b/v2/pkg/ast/ast_argument.go @@ -3,6 +3,7 @@ package ast import ( "bytes" "io" + "strconv" "github.com/wundergraph/graphql-go-tools/v2/pkg/internal/unsafebytes" "github.com/wundergraph/graphql-go-tools/v2/pkg/lexer/literal" @@ -194,3 +195,29 @@ func (d *Document) 
ImportVariableValueArgument(argName, variableName ByteSlice) return } + +func (d *Document) AddStringArgument(name, value string) int { + strValueRef := d.AddStringValue(StringValue{ + Content: d.Input.AppendInputString(value), + }) + + arg := Argument{ + Name: d.Input.AppendInputString(name), + Value: Value{Kind: ValueKindString, Ref: strValueRef}, + } + + return d.AddArgument(arg) +} + +func (d *Document) AddIntArgument(name string, value int) int { + intValueRef := d.AddIntValue(IntValue{ + Raw: d.Input.AppendInputString(strconv.Itoa(value)), + }) + + arg := Argument{ + Name: d.Input.AppendInputString(name), + Value: Value{Kind: ValueKindInteger, Ref: intValueRef}, + } + + return d.AddArgument(arg) +} diff --git a/v2/pkg/ast/ast_directive.go b/v2/pkg/ast/ast_directive.go index 9b70b521ab..03bb2ad3fa 100644 --- a/v2/pkg/ast/ast_directive.go +++ b/v2/pkg/ast/ast_directive.go @@ -28,6 +28,15 @@ func (l *DirectiveList) HasDirectiveByName(document *Document, name string) bool return false } +func (l *DirectiveList) HasDirectiveByNameBytes(document *Document, directiveName ByteSlice) (directiveRef int, exists bool) { + for i := range l.Refs { + if bytes.Equal(directiveName, document.DirectiveNameBytes(l.Refs[i])) { + return l.Refs[i], true + } + } + return InvalidRef, false +} + func (l *DirectiveList) RemoveDirectiveByName(document *Document, name string) { for i := range l.Refs { if document.DirectiveNameString(l.Refs[i]) == name { @@ -41,6 +50,19 @@ func (l *DirectiveList) RemoveDirectiveByName(document *Document, name string) { } } +func (l *DirectiveList) RemoveDirectiveByRef(directiveRef int) { + for i := range l.Refs { + if l.Refs[i] == directiveRef { + if i < len(l.Refs)-1 { + l.Refs = append(l.Refs[:i], l.Refs[i+1:]...) 
+ } else { + l.Refs = l.Refs[:i] + } + return + } + } +} + func (d *Document) CopyDirective(ref int) int { var arguments ArgumentList if d.Directives[ref].HasArguments { @@ -118,15 +140,79 @@ func (d *Document) DirectivesAreEqual(left, right int) bool { } func (d *Document) DirectiveSetsAreEqual(left, right []int) bool { - if len(left) != len(right) { - return false + if len(left) == 0 && len(right) == 0 { + return true + } + + // if left has no directives and right has only the defer directives, we consider them equal + if len(left) == 0 && len(right) > 0 { + for i := 0; i < len(right); i++ { + if !bytes.Equal(d.DirectiveNameBytes(right[i]), literal.DEFER_INTERNAL) { + return false + } + } + return true } + + // if right has no directives and left has only the defer directives, we consider them equal + if len(left) > 0 && len(right) == 0 { + for i := 0; i < len(left); i++ { + if !bytes.Equal(d.DirectiveNameBytes(left[i]), literal.DEFER_INTERNAL) { + return false + } + } + return true + } + + // check that every non-defer directive in the left has an equal in the right for i := 0; i < len(left); i++ { - leftDirective, rightDirective := left[i], right[i] - if !d.DirectivesAreEqual(leftDirective, rightDirective) { + leftDirective := left[i] + + if bytes.Equal(d.DirectiveNameBytes(leftDirective), literal.DEFER_INTERNAL) { + continue + } + + hasRightEqual := false + for j := 0; j < len(right); j++ { + rightDirective := right[j] + + if bytes.Equal(d.DirectiveNameBytes(rightDirective), literal.DEFER_INTERNAL) { + continue + } + + if d.DirectivesAreEqual(leftDirective, rightDirective) { + hasRightEqual = true + break + } + } + if !hasRightEqual { + return false + } + } + + // check that every non-defer directive in the right has an equal in the left + for i := 0; i < len(right); i++ { + rightDirective := right[i] + if bytes.Equal(d.DirectiveNameBytes(rightDirective), literal.DEFER_INTERNAL) { + continue + } + + hasLeftEqual := false + for j := 0; j < len(left); j++ { + 
leftDirective := left[j] + if bytes.Equal(d.DirectiveNameBytes(leftDirective), literal.DEFER_INTERNAL) { + continue + } + if d.DirectivesAreEqual(leftDirective, rightDirective) { + hasLeftEqual = true + break + } + } + if !hasLeftEqual { return false } } + return true } diff --git a/v2/pkg/ast/ast_field.go b/v2/pkg/ast/ast_field.go index f92e2bde52..b6e4bb24db 100644 --- a/v2/pkg/ast/ast_field.go +++ b/v2/pkg/ast/ast_field.go @@ -4,6 +4,7 @@ import ( "bytes" "github.com/wundergraph/graphql-go-tools/v2/pkg/internal/unsafebytes" + "github.com/wundergraph/graphql-go-tools/v2/pkg/lexer/literal" "github.com/wundergraph/graphql-go-tools/v2/pkg/lexer/position" ) @@ -178,3 +179,87 @@ func (d *Document) FieldTypeNode(fieldName []byte, enclosingNode Node) (node Nod return node, true } + +func (d *Document) MergeFieldsDefer(left, right int) { + leftDeferDirectiveRef, leftDeferExists := d.Fields[left].Directives.HasDirectiveByNameBytes(d, literal.DEFER_INTERNAL) + rightDeferDirectiveRef, rightDeferExists := d.Fields[right].Directives.HasDirectiveByNameBytes(d, literal.DEFER_INTERNAL) + + switch { + case !leftDeferExists && !rightDeferExists: + // do nothing + case leftDeferExists && !rightDeferExists: + d.Fields[left].Directives.RemoveDirectiveByRef(leftDeferDirectiveRef) + d.Fields[left].HasDirectives = len(d.Fields[left].Directives.Refs) > 0 + case !leftDeferExists: + // do nothing, as we are merging right into left + // and left does not have the defer, + // so right will be discarded + default: + // both have the defer; defer with smaller id wins + leftDeferIdValue, _ := d.DirectiveArgumentValueByName(leftDeferDirectiveRef, []byte("id")) + rightDeferIdValue, _ := d.DirectiveArgumentValueByName(rightDeferDirectiveRef, []byte("id")) + + leftId := int(d.IntValueAsInt(leftDeferIdValue.Ref)) + rightId := int(d.IntValueAsInt(rightDeferIdValue.Ref)) + + // TODO: need to handle parent id too + + switch { + case leftId == rightId: + // do nothing, they are equal + case leftId <
rightId: + // left wins, right discarded + case leftId > rightId: + d.Fields[left].Directives.RemoveDirectiveByRef(leftDeferDirectiveRef) + // append the right defer to the left + // no need to import it, as the right field will be discarded + d.Fields[left].Directives.Refs = append(d.Fields[left].Directives.Refs, rightDeferDirectiveRef) + } + } +} + +// AddDeferInternalDirectiveToField attaches @__defer_internal(id: id, label: label, parentDeferId: parentID) to the given field. +func (d *Document) AddDeferInternalDirectiveToField(fieldRef int, id int, label string, parentID int) { + if id == 0 { + return + } + + var argRefs []int + + argRefs = append(argRefs, d.AddIntArgument("id", id)) + + if label != "" { + argRefs = append(argRefs, d.AddStringArgument("label", label)) + } + if parentID != 0 { + argRefs = append(argRefs, d.AddIntArgument("parentDeferId", parentID)) + } + + directiveRef := d.AddDirective(Directive{ + Name: d.Input.AppendInputBytes(literal.DEFER_INTERNAL), + HasArguments: len(argRefs) > 0, + Arguments: ArgumentList{ + Refs: argRefs, + }, + }) + + d.AddDirectiveToNode(directiveRef, Node{ + Kind: NodeKindField, + Ref: fieldRef, + }) +} + +func (d *Document) FieldInternalDeferID(fieldRef int) (id int, exists bool) { + directiveRef, exists := d.Fields[fieldRef].Directives.HasDirectiveByNameBytes(d, literal.DEFER_INTERNAL) + if !exists { + return 0, false + } + idValue, exists := d.DirectiveArgumentValueByName(directiveRef, []byte("id")) + if !exists { + return 0, false + } + if idValue.Kind != ValueKindInteger { + return 0, false + } + return int(d.IntValueAsInt(idValue.Ref)), true +} diff --git a/v2/pkg/ast/ast_inline_fragment.go b/v2/pkg/ast/ast_inline_fragment.go index db110a8714..9c0d5e4d36 100644 --- a/v2/pkg/ast/ast_inline_fragment.go +++ b/v2/pkg/ast/ast_inline_fragment.go @@ -86,3 +86,11 @@ func (d *Document) InlineFragmentSelectionSet(ref int) (selectionSetRef int, ok func (d *Document) InlineFragmentDirectives(ref int) []int { return
d.InlineFragments[ref].Directives.Refs } + +func (d *Document) InlineFragmentDirectiveByName(inlineFragmentRef int, directiveName ByteSlice) (ref int, exists bool) { + if !d.InlineFragments[inlineFragmentRef].HasDirectives { + return InvalidRef, false + } + + return d.InlineFragments[inlineFragmentRef].Directives.HasDirectiveByNameBytes(d, directiveName) +} diff --git a/v2/pkg/astnormalization/astnormalization.go b/v2/pkg/astnormalization/astnormalization.go index 04631f8398..52a7ab1295 100644 --- a/v2/pkg/astnormalization/astnormalization.go +++ b/v2/pkg/astnormalization/astnormalization.go @@ -151,6 +151,7 @@ type options struct { removeNotMatchingOperationDefinitions bool normalizeDefinition bool ignoreSkipInclude bool + inlineDefer bool } type Option func(options *options) @@ -161,6 +162,12 @@ func WithExtractVariables() Option { } } +func WithInlineDefer() Option { + return func(options *options) { + options.inlineDefer = true + } +} + func WithRemoveFragmentDefinitions() Option { return func(options *options) { options.removeFragmentDefinitions = true @@ -220,6 +227,8 @@ func (o *OperationNormalizer) setupOperationWalkers() { cleanup := astvisitor.NewWalkerWithID(8, "Cleanup") deduplicateFields(&cleanup) + // should happen after inlining defer fragments, to not produce unnecessary typename placeholders + deferEnsureTypename(&cleanup) if o.options.removeUnusedVariables { del := deleteUnusedVariables(&cleanup) // register variable usage detection on the first stage @@ -243,6 +252,15 @@ func (o *OperationNormalizer) setupOperationWalkers() { }) } + if o.options.inlineDefer { + inlineDefer := astvisitor.NewWalkerWithID(8, "Inline defer") + inlineFragmentExpandDefer(&inlineDefer) + o.operationWalkers = append(o.operationWalkers, walkerStage{ + name: "inlineDefer", + walker: &inlineDefer, + }) + } + if o.options.extractVariables { extractVariablesWalker := astvisitor.NewWalkerWithID(8, "ExtractVariables") extractVariables(&extractVariablesWalker) diff --git 
a/v2/pkg/astnormalization/astnormalization_test.go b/v2/pkg/astnormalization/astnormalization_test.go index 2bc22e84ac..0578643871 100644 --- a/v2/pkg/astnormalization/astnormalization_test.go +++ b/v2/pkg/astnormalization/astnormalization_test.go @@ -41,6 +41,7 @@ func TestNormalizeOperation(t *testing.T) { WithRemoveFragmentDefinitions(), WithRemoveUnusedVariables(), WithNormalizeDefinition(), + WithInlineDefer(), ) normalizer.NormalizeOperation(&operationDocument, &definitionDocument, &report) @@ -48,8 +49,8 @@ func TestNormalizeOperation(t *testing.T) { t.Fatal(report.Error()) } - got := mustString(astprinter.PrintString(&operationDocument)) - want := mustString(astprinter.PrintString(&expectedOutputDocument)) + got := mustString(astprinter.PrintStringIndent(&operationDocument, " ")) + want := mustString(astprinter.PrintStringIndent(&expectedOutputDocument, " ")) assert.Equal(t, want, got) assert.Equal(t, expectedVariables, string(operationDocument.Input.Variables)) @@ -510,6 +511,73 @@ func TestNormalizeOperation(t *testing.T) { }`, ``, ``) }) + t.Run("defer", func(t *testing.T) { + run(t, testDefinition, ` + query pet { + pet { + ... on Dog @defer { + name + nickname + ... @defer { + barkVolume + } + } + ... on Dog { + ... @defer { + extra { + noString + } + } + ... @defer { + extra { + string + noString + } + } + } + ... on Cat { + name + extra { + bool + } + } + ... on Cat @defer { + name + meowVolume + extra { + bool + } + } + ... on Cat @defer { + name + nickname + meowVolume + } + } + }`, ` + query pet { + pet { + ... on Dog { + name @__defer_internal(id: 1) + nickname @__defer_internal(id: 1) + barkVolume @__defer_internal(id: 2, parentDeferId: 1) + extra @__defer_internal(id: 3) { + noString @__defer_internal(id: 3) + string @__defer_internal(id: 4) + } + ___typename: __typename + } + ... 
on Cat { + name + extra { + bool + } + meowVolume @__defer_internal(id: 5) + nickname @__defer_internal(id: 6) + } + } + }`, ``, ``) + }) } func TestOperationNormalizer_NormalizeOperation(t *testing.T) { @@ -1243,7 +1311,24 @@ var runWithVariables = func(t *testing.T, normalizeFunc registerNormalizeFunc, d assert.Equal(t, want, got) } -var run = func(t *testing.T, normalizeFunc registerNormalizeFunc, definition, operation, expectedOutput string, indent ...bool) { +type runOptions struct { + indent bool + withInternalDefer bool +} + +var runWithOptions = func(t *testing.T, normalizeFunc registerNormalizeFunc, definition, operation, expectedOutput string, options runOptions) { + t.Helper() + run(t, normalizeFunc, definition, operation, expectedOutput, options) +} + +var run = func(t *testing.T, normalizeFunc registerNormalizeFunc, definition, operation, expectedOutput string, options ...runOptions) { + t.Helper() + + var opts runOptions + + if len(options) > 0 { + opts = options[0] + } definitionDocument := unsafeparser.ParseGraphqlDocumentString(definition) err := asttransform.MergeDefinitionWithBaseSchema(&definitionDocument) @@ -1265,7 +1350,7 @@ var run = func(t *testing.T, normalizeFunc registerNormalizeFunc, definition, op } var got, want string - if len(indent) > 0 && indent[0] { + if opts.indent { got = mustString(astprinter.PrintStringIndent(&operationDocument, " ")) want = mustString(astprinter.PrintStringIndent(&expectedOutputDocument, " ")) } else { diff --git a/v2/pkg/astnormalization/defer_ensure_typename.go b/v2/pkg/astnormalization/defer_ensure_typename.go new file mode 100644 index 0000000000..bfd2594c6a --- /dev/null +++ b/v2/pkg/astnormalization/defer_ensure_typename.go @@ -0,0 +1,125 @@ +package astnormalization + +import ( + "github.com/wundergraph/graphql-go-tools/v2/pkg/ast" + "github.com/wundergraph/graphql-go-tools/v2/pkg/astvisitor" + "github.com/wundergraph/graphql-go-tools/v2/pkg/lexer/literal" +) + +// deferEnsureTypename registers a 
visitor that ensures a non-deferred field always +// has at least one non-deferred field selection (a __typename placeholder) when all +// of its child fields carry @__defer_internal. This runs after defer expansion, so +// only the expanded field form with @__defer_internal is considered. +// +// This placeholder is necessary so the planner does not produce an empty selection set +// when all nested fields are deferred. +// +// When the enclosing parent field is not deferred, a plain placeholder is added. +// +// When the enclosing parent field is itself deferred, a placeholder is added only if +// none of the child fields share the same defer id as the parent (no intersection). +// In that case the placeholder is annotated with the parent's defer id so it lands +// in the correct defer scope. If there is an intersection (at least one child field +// has the same defer id as the parent), no placeholder is needed. +func deferEnsureTypename(walker *astvisitor.Walker) { + visitor := deferEnsureTypenameVisitor{ + Walker: walker, + } + walker.RegisterEnterDocumentVisitor(&visitor) + walker.RegisterEnterSelectionSetVisitor(&visitor) +} + +type deferEnsureTypenameVisitor struct { + *astvisitor.Walker + + operation *ast.Document +} + +func (f *deferEnsureTypenameVisitor) EnterDocument(operation, _ *ast.Document) { + f.operation = operation +} + +func (f *deferEnsureTypenameVisitor) EnterSelectionSet(ref int) { + // skip root-level selection sets: we need at least depth > 2 + // and a field ancestor to be inside a field's selection set + if len(f.Ancestors) <= 2 { + return + } + hasFieldAncestor := false + for i := len(f.Ancestors) - 1; i >= 0; i-- { + if f.Ancestors[i].Kind == ast.NodeKindField { + hasFieldAncestor = true + break + } + } + if !hasFieldAncestor { + return + } + + fieldSelectionRefs := f.operation.SelectionSetFieldSelections(ref) + if len(fieldSelectionRefs) == 0 { + return + } + + // single pass over field selections to gather: + // - whether all fields
carry @__defer_internal + // - whether any field's defer id matches the parent field's defer id (intersection) + parentDeferID := f.parentFieldDeferID() + allDeferred := true + hasDeferIntersection := false + + for _, selectionRef := range fieldSelectionRefs { + fieldRef := f.operation.Selections[selectionRef].Ref + directiveRef, exists := f.operation.Fields[fieldRef].Directives.HasDirectiveByNameBytes(f.operation, literal.DEFER_INTERNAL) + if !exists { + allDeferred = false + break + } + if parentDeferID != 0 && !hasDeferIntersection { + idValue, ok := f.operation.DirectiveArgumentValueByName(directiveRef, []byte("id")) + if ok && idValue.Kind == ast.ValueKindInteger && int(f.operation.IntValueAsInt32(idValue.Ref)) == parentDeferID { + hasDeferIntersection = true + } + } + } + + // if at least one field is not deferred, we do not need to add the typename placeholder + if !allDeferred { + return + } + + if parentDeferID == 0 { + // the enclosing field is not deferred; add a plain placeholder so the + // selection set has at least one non-deferred field selection + addInternalTypeNamePlaceholder(f.operation, ref) + return + } + + // the enclosing field is deferred; if at least one child shares the same + // defer id there is an intersection and no placeholder is needed + if hasDeferIntersection { + return + } + + // no intersection: add a placeholder annotated with the parent's defer id + // so it is planned in the parent field defer scope + fieldRef := addInternalTypeNamePlaceholder(f.operation, ref) + f.operation.AddDeferInternalDirectiveToField(fieldRef, parentDeferID, "", 0) +} + +// parentFieldDeferID returns the defer id of the nearest enclosing field that +// carries a @__defer_internal directive, or 0 if there is none.
+func (f *deferEnsureTypenameVisitor) parentFieldDeferID() int { + for i := len(f.Ancestors) - 1; i >= 0; i-- { + ancestor := f.Ancestors[i] + if ancestor.Kind != ast.NodeKindField { + continue + } + + id, exist := f.operation.FieldInternalDeferID(ancestor.Ref) + if exist { + return id + } + } + return 0 +} diff --git a/v2/pkg/astnormalization/defer_ensure_typename_test.go b/v2/pkg/astnormalization/defer_ensure_typename_test.go new file mode 100644 index 0000000000..f0920fc50f --- /dev/null +++ b/v2/pkg/astnormalization/defer_ensure_typename_test.go @@ -0,0 +1,91 @@ +package astnormalization + +import ( + "testing" +) + +func TestDeferEnsureTypename(t *testing.T) { + t.Run("mixed deferred and non-deferred fields - no placeholder needed", func(t *testing.T) { + run(t, deferEnsureTypename, testDefinition, ` + { + user { + id + name @__defer_internal(id: 1) + } + }`, ` + { + user { + id + name @__defer_internal(id: 1) + } + }`) + }) + + t.Run("all fields deferred, parent not deferred - plain placeholder added", func(t *testing.T) { + run(t, deferEnsureTypename, testDefinition, ` + { + user { + name @__defer_internal(id: 1) + age @__defer_internal(id: 1) + } + }`, ` + { + user { + name @__defer_internal(id: 1) + age @__defer_internal(id: 1) + ___typename: __typename + } + }`) + }) + + t.Run("all fields deferred with different ids, parent not deferred - plain placeholder added", func(t *testing.T) { + run(t, deferEnsureTypename, testDefinition, ` + { + user { + name @__defer_internal(id: 1) + age @__defer_internal(id: 2) + } + }`, ` + { + user { + name @__defer_internal(id: 1) + age @__defer_internal(id: 2) + ___typename: __typename + } + }`) + }) + + t.Run("all fields deferred, parent deferred with same id - intersection, no placeholder", func(t *testing.T) { + run(t, deferEnsureTypename, testDefinition, ` + { + user @__defer_internal(id: 1) { + name @__defer_internal(id: 1) + age @__defer_internal(id: 2) + } + }`, ` + { + user @__defer_internal(id: 1) { + name 
@__defer_internal(id: 1) + age @__defer_internal(id: 2) + } + }`) + }) + + t.Run("all fields deferred, parent deferred with different id - no intersection, placeholder with parent id added", func(t *testing.T) { + run(t, deferEnsureTypename, testDefinition, ` + { + user @__defer_internal(id: 1) { + name @__defer_internal(id: 2) + age @__defer_internal(id: 3) + } + }`, ` + { + user @__defer_internal(id: 1) { + name @__defer_internal(id: 2) + age @__defer_internal(id: 3) + ___typename: __typename @__defer_internal(id: 1) + } + }`) + }) + +} diff --git a/v2/pkg/astnormalization/directive_include_skip.go b/v2/pkg/astnormalization/directive_include_skip.go index f3f716adf9..0db16fc1c5 100644 --- a/v2/pkg/astnormalization/directive_include_skip.go +++ b/v2/pkg/astnormalization/directive_include_skip.go @@ -149,23 +149,25 @@ func (d *directiveIncludeSkipVisitor) removeParentNode() { selectionSetRef := grandParent.Ref if d.operation.SelectionSetIsEmpty(selectionSetRef) { - selectionRef, _ := d.typeNameSelection() - d.operation.AddSelectionRefToSelectionSet(selectionSetRef, selectionRef) + addInternalTypeNamePlaceholder(d.operation, selectionSetRef) } } -func (d *directiveIncludeSkipVisitor) typeNameSelection() (selectionRef int, fieldRef int) { - field := d.operation.AddField(ast.Field{ - Name: d.operation.Input.AppendInputString("__typename"), +func addInternalTypeNamePlaceholder(operation *ast.Document, selectionSetRef int) int { + field := operation.AddField(ast.Field{ + Name: operation.Input.AppendInputString("__typename"), // We are adding an alias to the __typename field to mark it as internally added // So planner could ignore this field during creation of the response shape Alias: ast.Alias{ IsDefined: true, - Name: d.operation.Input.AppendInputString("__internal__typename_placeholder"), + Name: operation.Input.AppendInputString("___typename"), }, }) - return d.operation.AddSelectionToDocument(ast.Selection{ + selectionRef := 
operation.AddSelectionToDocument(ast.Selection{ Ref: field.Ref, Kind: ast.SelectionKindField, - }), field.Ref + }) + + operation.AddSelectionRefToSelectionSet(selectionSetRef, selectionRef) + return field.Ref } diff --git a/v2/pkg/astnormalization/directive_include_skip_test.go b/v2/pkg/astnormalization/directive_include_skip_test.go index 03681c7e0a..0bfbed4805 100644 --- a/v2/pkg/astnormalization/directive_include_skip_test.go +++ b/v2/pkg/astnormalization/directive_include_skip_test.go @@ -53,9 +53,9 @@ func TestDirectiveIncludeVisitor(t *testing.T) { } }`, ` { - dog {__internal__typename_placeholder: __typename} - notInclude: dog {__internal__typename_placeholder: __typename} - skip: dog {__internal__typename_placeholder: __typename} + dog {___typename: __typename} + notInclude: dog {___typename: __typename} + skip: dog {___typename: __typename} }`) }) t.Run("include variables true", func(t *testing.T) { @@ -95,10 +95,10 @@ func TestDirectiveIncludeVisitor(t *testing.T) { }`, ` query($no: Boolean!){ dog { - __internal__typename_placeholder: __typename + ___typename: __typename } withAlias: dog { - __internal__typename_placeholder: __typename + ___typename: __typename } }`, `{"no":false}`) }) @@ -116,7 +116,7 @@ func TestDirectiveIncludeVisitor(t *testing.T) { }`, ` query($yes: Boolean! $no: Boolean!){ dog { - __internal__typename_placeholder: __typename + ___typename: __typename } withAlias: dog { name @@ -137,10 +137,10 @@ func TestDirectiveIncludeVisitor(t *testing.T) { }`, ` query($yes: Boolean!) { dog { - __internal__typename_placeholder: __typename + ___typename: __typename } withAlias: dog { - __internal__typename_placeholder: __typename + ___typename: __typename } }`, `{"yes":true}`) }) @@ -181,7 +181,7 @@ func TestDirectiveIncludeVisitor(t *testing.T) { }`, ` query($yes: Boolean!, $no: Boolean!) 
{ dog { - __internal__typename_placeholder: __typename + ___typename: __typename } withAlias: dog { name @@ -202,10 +202,10 @@ }`, ` query($yes: Boolean!, $no: Boolean!) { dog { - __internal__typename_placeholder: __typename + ___typename: __typename } withAlias: dog { - __internal__typename_placeholder: __typename + ___typename: __typename } }`, `{"yes":true,"no":false}`) }) @@ -246,7 +246,7 @@ }`, ` query($yes: Boolean = true, $no: Boolean = false) { dog { - __internal__typename_placeholder: __typename + ___typename: __typename } withAlias: dog { name @@ -272,7 +272,7 @@ } } withAlias: dog { - __internal__typename_placeholder: __typename + ___typename: __typename } }`, `{}`) }) @@ -290,7 +290,7 @@ }`, ` query($yes: Boolean = false, $no: Boolean = true) { dog { - __internal__typename_placeholder: __typename + ___typename: __typename } withAlias: dog { name @@ -316,7 +316,7 @@ } } withAlias: dog { - __internal__typename_placeholder: __typename + ___typename: __typename } }`, `{"yes":true,"no":false}`) }) diff --git a/v2/pkg/astnormalization/field_deduplication.go b/v2/pkg/astnormalization/field_deduplication.go index b57a8a0633..307ae58db2 100644 --- a/v2/pkg/astnormalization/field_deduplication.go +++ b/v2/pkg/astnormalization/field_deduplication.go @@ -51,9 +51,10 @@ func (d *deduplicateFieldsVisitor) EnterSelectionSet(ref int) { if d.operation.Fields[right].HasSelections { continue } - // here we will check full directive equality if they are not equal we won't deduplicate - // it means that even directives order matters + // here we check full directive equality; if they are not equal, we won't deduplicate. + // the order of directives doesn't matter if they are fully equal.
if d.operation.FieldsAreEqualFlat(left, right, true) { + d.operation.MergeFieldsDefer(left, right) d.operation.RemoveFromSelectionSet(ref, b) d.RevisitNode() return diff --git a/v2/pkg/astnormalization/field_deduplication_test.go b/v2/pkg/astnormalization/field_deduplication_test.go index 8cfd76f6db..cb6b6d5cc5 100644 --- a/v2/pkg/astnormalization/field_deduplication_test.go +++ b/v2/pkg/astnormalization/field_deduplication_test.go @@ -35,4 +35,47 @@ func TestDeDuplicateFields(t *testing.T) { doesKnowCommand(dogCommand: 0) }`) }) + + t.Run("with internal defer", func(t *testing.T) { + run(t, deduplicateFields, testDefinition, ` + query pet { + pet { + ... on Dog { + name @__defer_internal(id: 1) + nickname @__defer_internal(id: 2, parentDeferId: 1) + nickname @__defer_internal(id: 1) + barkVolume @__defer_internal(id: 2, parentDeferId: 1) + } + ... on Cat { + name @__defer_internal(id: 4) + name @__defer_internal(id: 3) + name + extra { + bool + bool @__defer_internal(id: 3) + } + meowVolume @__defer_internal(id: 4) + meowVolume @__defer_internal(id: 3) + nickname @__defer_internal(id: 4) + } + } + }`, ` + query pet { + pet { + ... on Dog { + name @__defer_internal(id: 1) + nickname @__defer_internal(id: 1) + barkVolume @__defer_internal(id: 2, parentDeferId: 1) + } + ... 
on Cat { + name + extra { + bool + } + meowVolume @__defer_internal(id: 3) + nickname @__defer_internal(id: 4) + } + } + }`, runOptions{indent: true}) + }) } diff --git a/v2/pkg/astnormalization/fragment_spread_inlining_test.go b/v2/pkg/astnormalization/fragment_spread_inlining_test.go index b08fc67449..170c7ea5d8 100644 --- a/v2/pkg/astnormalization/fragment_spread_inlining_test.go +++ b/v2/pkg/astnormalization/fragment_spread_inlining_test.go @@ -628,7 +628,7 @@ func TestInlineFragments(t *testing.T) { }`) }) t.Run("non intersecting interfaces shouldn't merge", func(t *testing.T) { - run(t, fragmentSpreadInline, testDefinition, ` + runWithOptions(t, fragmentSpreadInline, testDefinition, ` { dog { ...nonIntersectingInterfaces @@ -652,7 +652,7 @@ func TestInlineFragments(t *testing.T) { } fragment sentientFragment on Sentient { name - }`, true) + }`, runOptions{indent: true}) }) t.Run("implicitly intersecting interfaces should merge", func(t *testing.T) { run(t, fragmentSpreadInline, ` diff --git a/v2/pkg/astnormalization/inline_fragment_expand_defer.go b/v2/pkg/astnormalization/inline_fragment_expand_defer.go new file mode 100644 index 0000000000..3689f71433 --- /dev/null +++ b/v2/pkg/astnormalization/inline_fragment_expand_defer.go @@ -0,0 +1,133 @@ +package astnormalization + +import ( + "github.com/wundergraph/graphql-go-tools/v2/pkg/ast" + "github.com/wundergraph/graphql-go-tools/v2/pkg/astvisitor" + "github.com/wundergraph/graphql-go-tools/v2/pkg/lexer/literal" +) + +// inlineFragmentExpandDefer registers a visitor that +// applies the defer directive to every nested field +func inlineFragmentExpandDefer(walker *astvisitor.Walker) { + visitor := inlineFragmentExpandDeferVisitor{ + Walker: walker, + } + walker.RegisterEnterDocumentVisitor(&visitor) + walker.RegisterInlineFragmentVisitor(&visitor) + walker.RegisterEnterSelectionSetVisitor(&visitor) +} + +type inlineFragmentExpandDeferVisitor struct { + *astvisitor.Walker + + operation *ast.Document + defers 
[]deferInfo + currentDeferId int +} + +type deferInfo struct { + parentDeferId int + id int + label string + fragmentRef int +} + +func (f *inlineFragmentExpandDeferVisitor) EnterDocument(operation, _ *ast.Document) { + f.operation = operation +} + +func (f *inlineFragmentExpandDeferVisitor) EnterInlineFragment(ref int) { + if !f.operation.InlineFragmentHasDirectives(ref) { + return + } + + // has defer directive? + directiveRef, exists := f.operation.InlineFragmentDirectiveByName(ref, literal.DEFER) + if !exists { + return + } + + // check if defer is enabled + enabled := true + ifValue, hasIf := f.operation.DirectiveArgumentValueByName(directiveRef, literal.IF) + if hasIf { + enabled = bool(f.operation.BooleanValue(ifValue.Ref)) + } + + // remove defer directive from the inline fragment + // as it will be applied to every nested field + f.operation.RemoveDirectiveFromNode(ast.Node{ + Kind: ast.NodeKindInlineFragment, + Ref: ref, + }, directiveRef) + + if !enabled { + return + } + + selectionSetRef, ok := f.operation.InlineFragmentSelectionSet(ref) + if !ok { + return + } + + if len(f.operation.SelectionSetFieldSelections(selectionSetRef)) == 0 { + // if a deferred fragment has no fields, it should be ignored + return + } + + // get label argument if any + labelValue, hasLabel := f.operation.DirectiveArgumentValueByName(directiveRef, literal.LABEL) + label := "" + if hasLabel { + label = f.operation.StringValueContentString(labelValue.Ref) + } + + f.currentDeferId++ + + parentDeferId := 0 + if len(f.defers) > 0 { + parentDeferId = f.defers[len(f.defers)-1].id + } + + deferInfo := deferInfo{ + parentDeferId: parentDeferId, + id: f.currentDeferId, + label: label, + fragmentRef: ref, + } + + f.defers = append(f.defers, deferInfo) +} + +func (f *inlineFragmentExpandDeferVisitor) LeaveInlineFragment(ref int) { + if len(f.defers) == 0 { + return + } + + if f.defers[len(f.defers)-1].fragmentRef == ref { + f.defers = f.defers[:len(f.defers)-1] + } +} + +func (f 
*inlineFragmentExpandDeferVisitor) EnterSelectionSet(ref int) { + // if there are no active defers, nothing to do + if len(f.defers) == 0 { + return + } + + fieldSelectionRefs := f.operation.SelectionSetFieldSelections(ref) + // if there are no fields in the current selection set, nothing to do + if len(fieldSelectionRefs) == 0 { + return + } + + // apply the internal defer directive to every field in the current selection set + for _, fieldSelectionRef := range fieldSelectionRefs { + f.addInternalDeferDirective(f.operation.Selections[fieldSelectionRef].Ref) + } +} + +func (f *inlineFragmentExpandDeferVisitor) addInternalDeferDirective(fieldRef int) { + deferInfo := f.defers[len(f.defers)-1] + f.operation.AddDeferInternalDirectiveToField(fieldRef, deferInfo.id, deferInfo.label, deferInfo.parentDeferId) +} diff --git a/v2/pkg/astnormalization/inline_fragment_expand_defer_test.go b/v2/pkg/astnormalization/inline_fragment_expand_defer_test.go new file mode 100644 index 0000000000..6fbcf4e422 --- /dev/null +++ b/v2/pkg/astnormalization/inline_fragment_expand_defer_test.go @@ -0,0 +1,84 @@ +package astnormalization + +import "testing" + +func TestInlineFragmentExpandDefer(t *testing.T) { + t.Run("simple", func(t *testing.T) { + run(t, inlineFragmentExpandDefer, testDefinition, ` + query dog { + dog { + ... @defer { + name + } + } + }`, + ` + query dog { + dog { + ... { + name @__defer_internal(id: 1) + } + } + }`) + }) + t.Run("with interface type", func(t *testing.T) { + runWithOptions(t, inlineFragmentExpandDefer, testDefinition, ` + query pet { + pet { + ... on Dog @defer { + name + nickname + ... @defer { + barkVolume + } + } + ... on Dog { + ... @defer { + extra { + noString + } + } + ... @defer { + extra { + string + noString + } + } + } + ... on Cat @defer { + name + meowVolume + } + } + }`, + ` + query pet { + pet { + ... on Dog { + name @__defer_internal(id: 1) + nickname @__defer_internal(id: 1) + ... 
{ + barkVolume @__defer_internal(id: 2, parentDeferId: 1) + } + } + ... on Dog { + ... { + extra @__defer_internal(id: 3) { + noString @__defer_internal(id: 3) + } + } + ... { + extra @__defer_internal(id: 4) { + string @__defer_internal(id: 4) + noString @__defer_internal(id: 4) + } + } + } + ... on Cat { + name @__defer_internal(id: 5) + meowVolume @__defer_internal(id: 5) + } + } + }`, runOptions{indent: true}) + }) +} diff --git a/v2/pkg/astnormalization/inline_fragment_selection_merging.go b/v2/pkg/astnormalization/inline_fragment_selection_merging.go index 118c9eb53c..36d54dc9a0 100644 --- a/v2/pkg/astnormalization/inline_fragment_selection_merging.go +++ b/v2/pkg/astnormalization/inline_fragment_selection_merging.go @@ -88,6 +88,8 @@ func (f *inlineFragmentSelectionMergeVisitor) mergeFields(left, right int) (ok b return false } + f.operation.MergeFieldsDefer(left, right) + f.operation.AppendSelectionSet(leftSet, rightSet) return true } @@ -119,6 +121,7 @@ func (f *inlineFragmentSelectionMergeVisitor) EnterSelectionSet(ref int) { if !f.fragmentsCanBeMerged(leftRef, rightRef) { continue } + if f.mergeInlineFragments(leftRef, rightRef) { f.operation.RemoveFromSelectionSet(ref, j) f.RevisitNode() diff --git a/v2/pkg/astnormalization/inline_fragment_selection_merging_test.go b/v2/pkg/astnormalization/inline_fragment_selection_merging_test.go index ffb9f7359f..f76b72c75b 100644 --- a/v2/pkg/astnormalization/inline_fragment_selection_merging_test.go +++ b/v2/pkg/astnormalization/inline_fragment_selection_merging_test.go @@ -216,6 +216,24 @@ func TestMergeInlineFragmentFieldSelections(t *testing.T) { } }`) }) + + t.Run("fields with the same directives but in different order", func(t *testing.T) { + run(t, mergeInlineFragmentSelections, testDefinition, ` + { + field @skip(if: $foo) @include(if: $foo) { + subfieldA + } + field @include(if: $foo) @skip(if: $foo) { + subfieldB + } + }`, ` + { + field @skip(if: $foo) @include(if: $foo) { + subfieldA + subfieldB + } + }`) 
+ }) }) t.Run("fragments and fields", func(t *testing.T) { t.Run("field field fragment", func(t *testing.T) { @@ -331,5 +349,74 @@ func TestMergeInlineFragmentFieldSelections(t *testing.T) { }`) }) + t.Run("with internal defer", func(t *testing.T) { + runWithOptions(t, mergeInlineFragmentSelections, testDefinition, ` + query pet { + pet { + ... on Dog { + name @__defer_internal(id: 1) + nickname @__defer_internal(id: 1) + nickname @__defer_internal(id: 2, parentDeferId: 1) + barkVolume @__defer_internal(id: 2, parentDeferId: 1) + } + ... on Dog { + extra @__defer_internal(id: 3) { + noString @__defer_internal(id: 3) + } + extra @__defer_internal(id: 4) { + string @__defer_internal(id: 4) + noString @__defer_internal(id: 4) + } + } + ... on Cat { + name + extra { + bool + } + } + ... on Cat { + name @__defer_internal(id: 5) + meowVolume @__defer_internal(id: 5) + extra @__defer_internal(id: 5) { + bool @__defer_internal(id: 5) + } + } + ... on Cat { + name @__defer_internal(id: 6) + nickname @__defer_internal(id: 6) + meowVolume @__defer_internal(id: 6) + } + } + }`, + ` + query pet { + pet { + ... on Dog { + name @__defer_internal(id: 1) + nickname @__defer_internal(id: 1) + nickname @__defer_internal(id: 2, parentDeferId: 1) + barkVolume @__defer_internal(id: 2, parentDeferId: 1) + extra @__defer_internal(id: 3) { + noString @__defer_internal(id: 3) + string @__defer_internal(id: 4) + noString @__defer_internal(id: 4) + } + } + ... 
on Cat { + name + extra { + bool + bool @__defer_internal(id: 5) + } + name @__defer_internal(id: 5) + meowVolume @__defer_internal(id: 5) + name @__defer_internal(id: 6) + nickname @__defer_internal(id: 6) + meowVolume @__defer_internal(id: 6) + } + } + }`, runOptions{indent: true}) + }) + }) } diff --git a/v2/pkg/astnormalization/inline_selections_from_inline_fragments_test.go b/v2/pkg/astnormalization/inline_selections_from_inline_fragments_test.go index c74425e3ff..ebc9978aa4 100644 --- a/v2/pkg/astnormalization/inline_selections_from_inline_fragments_test.go +++ b/v2/pkg/astnormalization/inline_selections_from_inline_fragments_test.go @@ -96,4 +96,58 @@ func TestResolveInlineFragments(t *testing.T) { }`) }) + t.Run("with internal defer", func(t *testing.T) { + run(t, inlineSelectionsFromInlineFragments, testDefinition, ` + query pet { + pet { + ... on Dog { + name @__defer_internal(id: 1) + nickname @__defer_internal(id: 1) + ... { + barkVolume @__defer_internal(id: 2, parentDeferId: 1) + } + } + ... on Dog { + ... { + extra @__defer_internal(id: 3) { + noString @__defer_internal(id: 3) + } + } + ... { + extra @__defer_internal(id: 4) { + string @__defer_internal(id: 4) + noString @__defer_internal(id: 4) + } + } + } + ... on Cat { + name @__defer_internal(id: 5) + meowVolume @__defer_internal(id: 5) + } + } + }`, + ` + query pet { + pet { + ... on Dog { + name @__defer_internal(id: 1) + nickname @__defer_internal(id: 1) + barkVolume @__defer_internal(id: 2, parentDeferId: 1) + } + ... on Dog { + extra @__defer_internal(id: 3) { + noString @__defer_internal(id: 3) + } + extra @__defer_internal(id: 4) { + string @__defer_internal(id: 4) + noString @__defer_internal(id: 4) + } + } + ... 
on Cat { + name @__defer_internal(id: 5) + meowVolume @__defer_internal(id: 5) + } + } + }`) + }) } diff --git a/v2/pkg/asttransform/base.graphql b/v2/pkg/asttransform/base.graphql new file mode 100644 index 0000000000..e8105b4fb5 --- /dev/null +++ b/v2/pkg/asttransform/base.graphql @@ -0,0 +1,223 @@ +"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +scalar Int + +"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +scalar Float + +"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +scalar String + +"The `Boolean` scalar type represents `true` or `false`." +scalar Boolean + +"""The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID.""" +scalar ID + +"Directs the executor to include this field or fragment only when the argument is true." +directive @include( + "Included when true." + if: Boolean! +) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT + +"Directs the executor to skip this field or fragment when the argument is true." +directive @skip( + "Skipped when true." + if: Boolean! +) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT + +"Marks an element of a GraphQL schema as no longer supported." +directive @deprecated( + """ + Explains why this element was deprecated, usually also including a suggestion + for how to access supported similar data. Formatted in + [Markdown](https://daringfireball.net/projects/markdown/). 
+ """ + reason: String = "No longer supported" +) on FIELD_DEFINITION | ARGUMENT_DEFINITION | INPUT_FIELD_DEFINITION | ENUM_VALUE + +"Exposes a URL that specifies the behavior of this scalar" +directive @specifiedBy( + "The URL that specifies the behavior of this scalar." + url: String! +) on SCALAR + +""" +The @oneOf built-in directive marks an input object as a OneOf Input Object. +Exactly one field must be provided and its value must be non-null at runtime. +All fields defined within a @oneOf input must be nullable in the schema. +""" +directive @oneOf on INPUT_OBJECT + +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." + if: Boolean! = true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + +""" +A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. +In some cases, you need to provide options to alter GraphQL's execution behavior +in ways field arguments will not suffice, such as conditionally including or +skipping a field. Directives provide this by describing additional information +to the executor. +""" +type __Directive { + name: String! + description: String + locations: [__DirectiveLocation!]! + args(includeDeprecated: Boolean = false): [__InputValue!]! + isRepeatable: Boolean! +} + +""" +A Directive can be adjacent to many parts of the GraphQL language, a +__DirectiveLocation describes one such possible adjacencies. +""" +enum __DirectiveLocation { + "Location adjacent to a query operation." + QUERY + "Location adjacent to a mutation operation." + MUTATION + "Location adjacent to a subscription operation." + SUBSCRIPTION + "Location adjacent to a field." + FIELD + "Location adjacent to a fragment definition." + FRAGMENT_DEFINITION + "Location adjacent to a fragment spread." 
+ FRAGMENT_SPREAD + "Location adjacent to an inline fragment." + INLINE_FRAGMENT + "Location adjacent to a variable definition" + VARIABLE_DEFINITION + "Location adjacent to a schema definition." + SCHEMA + "Location adjacent to a scalar definition." + SCALAR + "Location adjacent to an object type definition." + OBJECT + "Location adjacent to a field definition." + FIELD_DEFINITION + "Location adjacent to an argument definition." + ARGUMENT_DEFINITION + "Location adjacent to an interface definition." + INTERFACE + "Location adjacent to a union definition." + UNION + "Location adjacent to an enum definition." + ENUM + "Location adjacent to an enum value definition." + ENUM_VALUE + "Location adjacent to an input object type definition." + INPUT_OBJECT + "Location adjacent to an input object field definition." + INPUT_FIELD_DEFINITION +} +""" +One possible value for a given Enum. Enum values are unique values, not a +placeholder for a string or numeric value. However an Enum value is returned in +a JSON response as a string. +""" +type __EnumValue { + name: String! + description: String + isDeprecated: Boolean! + deprecationReason: String +} + +""" +Object and Interface types are described by a list of Fields, each of which has +a name, potentially a list of arguments, and a return type. +""" +type __Field { + name: String! + description: String + args(includeDeprecated: Boolean = false): [__InputValue!]! + type: __Type! + isDeprecated: Boolean! + deprecationReason: String +} + +"""Arguments provided to Fields or Directives and the input fields of an +InputObject are represented as Input Values which describe their type and +optionally a default value. +""" +type __InputValue { + name: String! + description: String + type: __Type! + defaultValue: String + isDeprecated: Boolean! + deprecationReason: String +} + +""" +A GraphQL Schema defines the capabilities of a GraphQL server. 
It exposes all +available types and directives on the server, as well as the entry points for +query, mutation, and subscription operations. +""" +type __Schema { + description: String + "A list of all types supported by this server." + types: [__Type!]! + "The type that query operations will be rooted at." + queryType: __Type! + "If this server supports mutation, the type that mutation operations will be rooted at." + mutationType: __Type + "If this server support subscription, the type that subscription operations will be rooted at." + subscriptionType: __Type + "A list of all directives supported by this server." + directives: [__Directive!]! +} + +""" +The fundamental unit of any GraphQL Schema is the type. There are many kinds of +types in GraphQL as represented by the '__TypeKind' enum. + +Depending on the kind of a type, certain fields describe information about that +type. Scalar types provide no information beyond a name and description, while +Enum types provide their values. Object and Interface types provide the fields +they describe. Abstract types, Union and Interface, provide the Object types +possible at runtime. List and NonNull types compose other types. +""" +type __Type { + kind: __TypeKind! + name: String + description: String + # must be non-null for OBJECT and INTERFACE, otherwise null. + fields(includeDeprecated: Boolean = false): [__Field!] + # must be non-null for OBJECT and INTERFACE, otherwise null. + interfaces: [__Type!] + # must be non-null for INTERFACE and UNION, otherwise null. + possibleTypes: [__Type!] + # must be non-null for ENUM, otherwise null. + enumValues(includeDeprecated: Boolean = false): [__EnumValue!] + # must be non-null for INPUT_OBJECT, otherwise null. + inputFields(includeDeprecated: Boolean = false): [__InputValue!] + # must be non-null for NON_NULL and LIST, otherwise null. + ofType: __Type + # may be non-null for custom SCALAR, otherwise null. 
+ specifiedByURL: String +} + +"An enum describing what kind of type a given '__Type' is." +enum __TypeKind { + "Indicates this type is a scalar." + SCALAR + "Indicates this type is an object. 'fields' and 'interfaces' are valid fields." + OBJECT + "Indicates this type is an interface. 'fields' ' and ' 'possibleTypes' are valid fields." + INTERFACE + "Indicates this type is a union. 'possibleTypes' is a valid field." + UNION + "Indicates this type is an enum. 'enumValues' is a valid field." + ENUM + "Indicates this type is an input object. 'inputFields' is a valid field." + INPUT_OBJECT + "Indicates this type is a list. 'ofType' is a valid field." + LIST + "Indicates this type is a non-null. 'ofType' is a valid field." + NON_NULL +} \ No newline at end of file diff --git a/v2/pkg/asttransform/baseschema.go b/v2/pkg/asttransform/baseschema.go index 48a0a3dd9a..5bf6fd0025 100644 --- a/v2/pkg/asttransform/baseschema.go +++ b/v2/pkg/asttransform/baseschema.go @@ -2,14 +2,35 @@ package asttransform import ( "bytes" + _ "embed" "github.com/wundergraph/graphql-go-tools/v2/pkg/ast" "github.com/wundergraph/graphql-go-tools/v2/pkg/astparser" "github.com/wundergraph/graphql-go-tools/v2/pkg/operationreport" ) +var ( + //go:embed base.graphql + baseSchema []byte + + //go:embed internal.graphql + internalDefinition []byte +) + +type Options struct { + InternalDefer bool +} + func MergeDefinitionWithBaseSchema(definition *ast.Document) error { + return MergeDefinitionWithBaseSchemaWithInternal(definition, true) +} + +func MergeDefinitionWithBaseSchemaWithInternal(definition *ast.Document, includeInternal bool) error { definition.Input.AppendInputBytes(baseSchema) + if includeInternal { + definition.Input.AppendInputBytes(internalDefinition) + } + parser := astparser.NewParser() report := operationreport.Report{} parser.Parse(definition, &report) @@ -135,208 +156,3 @@ func findQueryNode(definition *ast.Document) (queryNode ast.Node, ok bool) { return queryNode, ok } - -var 
baseSchema = []byte(`"The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." -scalar Int -"The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." -scalar Float -"The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." -scalar String -"The 'Boolean' scalar type represents 'true' or 'false' ." -scalar Boolean -"The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID." -scalar ID -"Directs the executor to include this field or fragment only when the argument is true." -directive @include( - "Included when true." - if: Boolean! -) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT -"Directs the executor to skip this field or fragment when the argument is true." -directive @skip( - "Skipped when true." - if: Boolean! -) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT -"Marks an element of a GraphQL schema as no longer supported." -directive @deprecated( - """ - Explains why this element was deprecated, usually also including a suggestion - for how to access supported similar data. Formatted in - [Markdown](https://daringfireball.net/projects/markdown/). - """ - reason: String = "No longer supported" -) on FIELD_DEFINITION | ARGUMENT_DEFINITION | INPUT_FIELD_DEFINITION | ENUM_VALUE - -directive @specifiedBy(url: String!) on SCALAR - -""" -The @oneOf built-in directive marks an input object as a OneOf Input Object. -Exactly one field must be provided and its value must be non-null at runtime. 
-All fields defined within a @oneOf input must be nullable in the schema. -""" -directive @oneOf on INPUT_OBJECT - -""" -A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. -In some cases, you need to provide options to alter GraphQL's execution behavior -in ways field arguments will not suffice, such as conditionally including or -skipping a field. Directives provide this by describing additional information -to the executor. -""" -type __Directive { - name: String! - description: String - locations: [__DirectiveLocation!]! - args(includeDeprecated: Boolean = false): [__InputValue!]! - isRepeatable: Boolean! -} - -""" -A Directive can be adjacent to many parts of the GraphQL language, a -__DirectiveLocation describes one such possible adjacencies. -""" -enum __DirectiveLocation { - "Location adjacent to a query operation." - QUERY - "Location adjacent to a mutation operation." - MUTATION - "Location adjacent to a subscription operation." - SUBSCRIPTION - "Location adjacent to a field." - FIELD - "Location adjacent to a fragment definition." - FRAGMENT_DEFINITION - "Location adjacent to a fragment spread." - FRAGMENT_SPREAD - "Location adjacent to an inline fragment." - INLINE_FRAGMENT - "Location adjacent to a variable definition" - VARIABLE_DEFINITION - "Location adjacent to a schema definition." - SCHEMA - "Location adjacent to a scalar definition." - SCALAR - "Location adjacent to an object type definition." - OBJECT - "Location adjacent to a field definition." - FIELD_DEFINITION - "Location adjacent to an argument definition." - ARGUMENT_DEFINITION - "Location adjacent to an interface definition." - INTERFACE - "Location adjacent to a union definition." - UNION - "Location adjacent to an enum definition." - ENUM - "Location adjacent to an enum value definition." - ENUM_VALUE - "Location adjacent to an input object type definition." 
- INPUT_OBJECT - "Location adjacent to an input object field definition." - INPUT_FIELD_DEFINITION -} -""" -One possible value for a given Enum. Enum values are unique values, not a -placeholder for a string or numeric value. However an Enum value is returned in -a JSON response as a string. -""" -type __EnumValue { - name: String! - description: String - isDeprecated: Boolean! - deprecationReason: String -} - -""" -Object and Interface types are described by a list of Fields, each of which has -a name, potentially a list of arguments, and a return type. -""" -type __Field { - name: String! - description: String - args(includeDeprecated: Boolean = false): [__InputValue!]! - type: __Type! - isDeprecated: Boolean! - deprecationReason: String -} - -"""Arguments provided to Fields or Directives and the input fields of an -InputObject are represented as Input Values which describe their type and -optionally a default value. -""" -type __InputValue { - name: String! - description: String - type: __Type! - defaultValue: String - isDeprecated: Boolean! - deprecationReason: String -} - -""" -A GraphQL Schema defines the capabilities of a GraphQL server. It exposes all -available types and directives on the server, as well as the entry points for -query, mutation, and subscription operations. -""" -type __Schema { - description: String - "A list of all types supported by this server." - types: [__Type!]! - "The type that query operations will be rooted at." - queryType: __Type! - "If this server supports mutation, the type that mutation operations will be rooted at." - mutationType: __Type - "If this server support subscription, the type that subscription operations will be rooted at." - subscriptionType: __Type - "A list of all directives supported by this server." - directives: [__Directive!]! -} - -""" -The fundamental unit of any GraphQL Schema is the type. There are many kinds of -types in GraphQL as represented by the '__TypeKind' enum. 
- -Depending on the kind of a type, certain fields describe information about that -type. Scalar types provide no information beyond a name and description, while -Enum types provide their values. Object and Interface types provide the fields -they describe. Abstract types, Union and Interface, provide the Object types -possible at runtime. List and NonNull types compose other types. -""" -type __Type { - kind: __TypeKind! - name: String - description: String - # must be non-null for OBJECT and INTERFACE, otherwise null. - fields(includeDeprecated: Boolean = false): [__Field!] - # must be non-null for OBJECT and INTERFACE, otherwise null. - interfaces: [__Type!] - # must be non-null for INTERFACE and UNION, otherwise null. - possibleTypes: [__Type!] - # must be non-null for ENUM, otherwise null. - enumValues(includeDeprecated: Boolean = false): [__EnumValue!] - # must be non-null for INPUT_OBJECT, otherwise null. - inputFields(includeDeprecated: Boolean = false): [__InputValue!] - # must be non-null for NON_NULL and LIST, otherwise null. - ofType: __Type - # may be non-null for custom SCALAR, otherwise null. - specifiedByURL: String -} - -"An enum describing what kind of type a given '__Type' is." -enum __TypeKind { - "Indicates this type is a scalar." - SCALAR - "Indicates this type is an object. 'fields' and 'interfaces' are valid fields." - OBJECT - "Indicates this type is an interface. 'fields' ' and ' 'possibleTypes' are valid fields." - INTERFACE - "Indicates this type is a union. 'possibleTypes' is a valid field." - UNION - "Indicates this type is an enum. 'enumValues' is a valid field." - ENUM - "Indicates this type is an input object. 'inputFields' is a valid field." - INPUT_OBJECT - "Indicates this type is a list. 'ofType' is a valid field." - LIST - "Indicates this type is a non-null. 'ofType' is a valid field." 
- NON_NULL -}`) diff --git a/v2/pkg/asttransform/baseschema_test.go b/v2/pkg/asttransform/baseschema_test.go index 7a856ea677..a30f02cb0e 100644 --- a/v2/pkg/asttransform/baseschema_test.go +++ b/v2/pkg/asttransform/baseschema_test.go @@ -2,10 +2,9 @@ package asttransform_test import ( "bytes" - "os" "testing" - "github.com/jensneuse/diffview" + "github.com/stretchr/testify/require" "github.com/wundergraph/graphql-go-tools/v2/pkg/astprinter" "github.com/wundergraph/graphql-go-tools/v2/pkg/asttransform" @@ -14,26 +13,24 @@ import ( ) func runTestMerge(definition, fixtureName string) func(t *testing.T) { + return runTestMergeWithInternal(definition, fixtureName, true) +} + +func runTestMergeWithInternal(definition, fixtureName string, includeInternal bool) func(t *testing.T) { return func(t *testing.T) { doc := unsafeparser.ParseGraphqlDocumentString(definition) - err := asttransform.MergeDefinitionWithBaseSchema(&doc) - if err != nil { - panic(err) + var err error + if includeInternal { + err = asttransform.MergeDefinitionWithBaseSchema(&doc) + } else { + err = asttransform.MergeDefinitionWithBaseSchemaWithInternal(&doc, false) } + require.NoError(t, err) buf := bytes.Buffer{} err = astprinter.PrintIndent(&doc, []byte(" "), &buf) - if err != nil { - panic(err) - } + require.NoError(t, err) got := buf.Bytes() goldie.Assert(t, fixtureName, got) - if t.Failed() { - want, err := os.ReadFile("./fixtures/" + fixtureName + ".golden") - if err != nil { - panic(err) - } - diffview.NewGoland().DiffViewBytes(fixtureName, want, got) - } } } @@ -56,6 +53,11 @@ func TestMergeDefinitionWithBaseSchema(t *testing.T) { m: String! } `, "mutation_only")) + t.Run("mutation only - no internal", runTestMergeWithInternal(` + type Mutation { + m: String! 
+ } + `, "mutation_only_no_internal", false)) t.Run("schema with mutation", runTestMerge(` schema { mutation: Mutation diff --git a/v2/pkg/asttransform/fixtures/complete.golden b/v2/pkg/asttransform/fixtures/complete.golden index fa69f656e6..8efc7f4014 100644 --- a/v2/pkg/asttransform/fixtures/complete.golden +++ b/v2/pkg/asttransform/fixtures/complete.golden @@ -16,19 +16,21 @@ type Hello { __typename: String! } -"The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." scalar Int -"The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." scalar Float -"The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." scalar String -"The 'Boolean' scalar type represents 'true' or 'false' ." +"The `Boolean` scalar type represents `true` or `false`." scalar Boolean -"The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID." +""" +The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. 
The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID. +""" scalar ID "Directs the executor to include this field or fragment only when the argument is true." @@ -53,7 +55,9 @@ directive @deprecated( reason: String = "No longer supported" ) on FIELD_DEFINITION | ARGUMENT_DEFINITION | ENUM_VALUE | INPUT_FIELD_DEFINITION +"Exposes a URL that specifies the behavior of this scalar" directive @specifiedBy( + "The URL that specifies the behavior of this scalar." url: String! ) on SCALAR @@ -64,6 +68,14 @@ All fields defined within a @oneOf input must be nullable in the schema. """ directive @oneOf on INPUT_OBJECT +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." + if: Boolean! = true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + """ A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. In some cases, you need to provide options to alter GraphQL's execution behavior @@ -229,4 +241,11 @@ enum __TypeKind { LIST "Indicates this type is a non-null. 'ofType' is a valid field." NON_NULL -} \ No newline at end of file +} + +directive @__defer_internal( + id: Int! + parentDeferId: Int + "A unique identifier for the results." + label: String +) repeatable on FIELD \ No newline at end of file diff --git a/v2/pkg/asttransform/fixtures/custom_query_name.golden b/v2/pkg/asttransform/fixtures/custom_query_name.golden index b1b8ff8c13..ccb234a602 100644 --- a/v2/pkg/asttransform/fixtures/custom_query_name.golden +++ b/v2/pkg/asttransform/fixtures/custom_query_name.golden @@ -16,19 +16,21 @@ type Hello { __typename: String! 
} -"The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." scalar Int -"The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." scalar Float -"The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." scalar String -"The 'Boolean' scalar type represents 'true' or 'false' ." +"The `Boolean` scalar type represents `true` or `false`." scalar Boolean -"The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID." +""" +The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID. +""" scalar ID "Directs the executor to include this field or fragment only when the argument is true." 
@@ -53,7 +55,9 @@ directive @deprecated( reason: String = "No longer supported" ) on FIELD_DEFINITION | ARGUMENT_DEFINITION | ENUM_VALUE | INPUT_FIELD_DEFINITION +"Exposes a URL that specifies the behavior of this scalar" directive @specifiedBy( + "The URL that specifies the behavior of this scalar." url: String! ) on SCALAR @@ -64,6 +68,14 @@ All fields defined within a @oneOf input must be nullable in the schema. """ directive @oneOf on INPUT_OBJECT +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." + if: Boolean! = true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + """ A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. In some cases, you need to provide options to alter GraphQL's execution behavior @@ -229,4 +241,11 @@ enum __TypeKind { LIST "Indicates this type is a non-null. 'ofType' is a valid field." NON_NULL -} \ No newline at end of file +} + +directive @__defer_internal( + id: Int! + parentDeferId: Int + "A unique identifier for the results." + label: String +) repeatable on FIELD \ No newline at end of file diff --git a/v2/pkg/asttransform/fixtures/mutation_only.golden b/v2/pkg/asttransform/fixtures/mutation_only.golden index bfe7dab3d3..5dba790b1b 100644 --- a/v2/pkg/asttransform/fixtures/mutation_only.golden +++ b/v2/pkg/asttransform/fixtures/mutation_only.golden @@ -8,19 +8,21 @@ type Mutation { __typename: String! } -"The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." 
scalar Int -"The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." scalar Float -"The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." scalar String -"The 'Boolean' scalar type represents 'true' or 'false' ." +"The `Boolean` scalar type represents `true` or `false`." scalar Boolean -"The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID." +""" +The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID. +""" scalar ID "Directs the executor to include this field or fragment only when the argument is true." @@ -45,7 +47,9 @@ directive @deprecated( reason: String = "No longer supported" ) on FIELD_DEFINITION | ARGUMENT_DEFINITION | ENUM_VALUE | INPUT_FIELD_DEFINITION +"Exposes a URL that specifies the behavior of this scalar" directive @specifiedBy( + "The URL that specifies the behavior of this scalar." url: String! 
) on SCALAR @@ -56,6 +60,14 @@ All fields defined within a @oneOf input must be nullable in the schema. """ directive @oneOf on INPUT_OBJECT +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." + if: Boolean! = true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + """ A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. In some cases, you need to provide options to alter GraphQL's execution behavior @@ -223,6 +235,13 @@ enum __TypeKind { NON_NULL } +directive @__defer_internal( + id: Int! + parentDeferId: Int + "A unique identifier for the results." + label: String +) repeatable on FIELD + type Query { __schema: __Schema! __type(name: String!): __Type diff --git a/v2/pkg/asttransform/fixtures/mutation_only_no_internal.golden b/v2/pkg/asttransform/fixtures/mutation_only_no_internal.golden new file mode 100644 index 0000000000..d24e1a244d --- /dev/null +++ b/v2/pkg/asttransform/fixtures/mutation_only_no_internal.golden @@ -0,0 +1,242 @@ +schema { + mutation: Mutation + query: Query +} + +type Mutation { + m: String! + __typename: String! +} + +"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +scalar Int + +"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +scalar Float + +"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +scalar String + +"The `Boolean` scalar type represents `true` or `false`." 
+scalar Boolean + +""" +The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID. +""" +scalar ID + +"Directs the executor to include this field or fragment only when the argument is true." +directive @include( + "Included when true." + if: Boolean! +) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT + +"Directs the executor to skip this field or fragment when the argument is true." +directive @skip( + "Skipped when true." + if: Boolean! +) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT + +"Marks an element of a GraphQL schema as no longer supported." +directive @deprecated( + """ + Explains why this element was deprecated, usually also including a suggestion + for how to access supported similar data. Formatted in + [Markdown](https://daringfireball.net/projects/markdown/). + """ + reason: String = "No longer supported" +) on FIELD_DEFINITION | ARGUMENT_DEFINITION | ENUM_VALUE | INPUT_FIELD_DEFINITION + +"Exposes a URL that specifies the behavior of this scalar" +directive @specifiedBy( + "The URL that specifies the behavior of this scalar." + url: String! +) on SCALAR + +""" +The @oneOf built-in directive marks an input object as a OneOf Input Object. +Exactly one field must be provided and its value must be non-null at runtime. +All fields defined within a @oneOf input must be nullable in the schema. +""" +directive @oneOf on INPUT_OBJECT + +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." + if: Boolean! 
= true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + +""" +A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. +In some cases, you need to provide options to alter GraphQL's execution behavior +in ways field arguments will not suffice, such as conditionally including or +skipping a field. Directives provide this by describing additional information +to the executor. +""" +type __Directive { + name: String! + description: String + locations: [__DirectiveLocation!]! + args(includeDeprecated: Boolean = false): [__InputValue!]! + isRepeatable: Boolean! + __typename: String! +} + +""" +A Directive can be adjacent to many parts of the GraphQL language, a +__DirectiveLocation describes one such possible adjacencies. +""" +enum __DirectiveLocation { + "Location adjacent to a query operation." + QUERY + "Location adjacent to a mutation operation." + MUTATION + "Location adjacent to a subscription operation." + SUBSCRIPTION + "Location adjacent to a field." + FIELD + "Location adjacent to a fragment definition." + FRAGMENT_DEFINITION + "Location adjacent to a fragment spread." + FRAGMENT_SPREAD + "Location adjacent to an inline fragment." + INLINE_FRAGMENT + "Location adjacent to a variable definition" + VARIABLE_DEFINITION + "Location adjacent to a schema definition." + SCHEMA + "Location adjacent to a scalar definition." + SCALAR + "Location adjacent to an object type definition." + OBJECT + "Location adjacent to a field definition." + FIELD_DEFINITION + "Location adjacent to an argument definition." + ARGUMENT_DEFINITION + "Location adjacent to an interface definition." + INTERFACE + "Location adjacent to a union definition." + UNION + "Location adjacent to an enum definition." + ENUM + "Location adjacent to an enum value definition." + ENUM_VALUE + "Location adjacent to an input object type definition." + INPUT_OBJECT + "Location adjacent to an input object field definition." 
+ INPUT_FIELD_DEFINITION +} + +""" +One possible value for a given Enum. Enum values are unique values, not a +placeholder for a string or numeric value. However an Enum value is returned in +a JSON response as a string. +""" +type __EnumValue { + name: String! + description: String + isDeprecated: Boolean! + deprecationReason: String + __typename: String! +} + +""" +Object and Interface types are described by a list of Fields, each of which has +a name, potentially a list of arguments, and a return type. +""" +type __Field { + name: String! + description: String + args(includeDeprecated: Boolean = false): [__InputValue!]! + type: __Type! + isDeprecated: Boolean! + deprecationReason: String + __typename: String! +} + +""" +Arguments provided to Fields or Directives and the input fields of an +InputObject are represented as Input Values which describe their type and +optionally a default value. +""" +type __InputValue { + name: String! + description: String + type: __Type! + defaultValue: String + isDeprecated: Boolean! + deprecationReason: String + __typename: String! +} + +""" +A GraphQL Schema defines the capabilities of a GraphQL server. It exposes all +available types and directives on the server, as well as the entry points for +query, mutation, and subscription operations. +""" +type __Schema { + description: String + "A list of all types supported by this server." + types: [__Type!]! + "The type that query operations will be rooted at." + queryType: __Type! + "If this server supports mutation, the type that mutation operations will be rooted at." + mutationType: __Type + "If this server support subscription, the type that subscription operations will be rooted at." + subscriptionType: __Type + "A list of all directives supported by this server." + directives: [__Directive!]! + __typename: String! +} + +""" +The fundamental unit of any GraphQL Schema is the type. There are many kinds of +types in GraphQL as represented by the '__TypeKind' enum. 
+ +Depending on the kind of a type, certain fields describe information about that +type. Scalar types provide no information beyond a name and description, while +Enum types provide their values. Object and Interface types provide the fields +they describe. Abstract types, Union and Interface, provide the Object types +possible at runtime. List and NonNull types compose other types. +""" +type __Type { + kind: __TypeKind! + name: String + description: String + fields(includeDeprecated: Boolean = false): [__Field!] + interfaces: [__Type!] + possibleTypes: [__Type!] + enumValues(includeDeprecated: Boolean = false): [__EnumValue!] + inputFields(includeDeprecated: Boolean = false): [__InputValue!] + ofType: __Type + specifiedByURL: String + __typename: String! +} + +"An enum describing what kind of type a given '__Type' is." +enum __TypeKind { + "Indicates this type is a scalar." + SCALAR + "Indicates this type is an object. 'fields' and 'interfaces' are valid fields." + OBJECT + "Indicates this type is an interface. 'fields' ' and ' 'possibleTypes' are valid fields." + INTERFACE + "Indicates this type is a union. 'possibleTypes' is a valid field." + UNION + "Indicates this type is an enum. 'enumValues' is a valid field." + ENUM + "Indicates this type is an input object. 'inputFields' is a valid field." + INPUT_OBJECT + "Indicates this type is a list. 'ofType' is a valid field." + LIST + "Indicates this type is a non-null. 'ofType' is a valid field." + NON_NULL +} + +type Query { + __schema: __Schema! + __type(name: String!): __Type + __typename: String! +} \ No newline at end of file diff --git a/v2/pkg/asttransform/fixtures/schema_missing.golden b/v2/pkg/asttransform/fixtures/schema_missing.golden index fa69f656e6..8efc7f4014 100644 --- a/v2/pkg/asttransform/fixtures/schema_missing.golden +++ b/v2/pkg/asttransform/fixtures/schema_missing.golden @@ -16,19 +16,21 @@ type Hello { __typename: String! 
} -"The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." scalar Int -"The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." scalar Float -"The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." scalar String -"The 'Boolean' scalar type represents 'true' or 'false' ." +"The `Boolean` scalar type represents `true` or `false`." scalar Boolean -"The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID." +""" +The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID. +""" scalar ID "Directs the executor to include this field or fragment only when the argument is true." 
@@ -53,7 +55,9 @@ directive @deprecated( reason: String = "No longer supported" ) on FIELD_DEFINITION | ARGUMENT_DEFINITION | ENUM_VALUE | INPUT_FIELD_DEFINITION +"Exposes a URL that specifies the behavior of this scalar" directive @specifiedBy( + "The URL that specifies the behavior of this scalar." url: String! ) on SCALAR @@ -64,6 +68,14 @@ All fields defined within a @oneOf input must be nullable in the schema. """ directive @oneOf on INPUT_OBJECT +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." + if: Boolean! = true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + """ A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. In some cases, you need to provide options to alter GraphQL's execution behavior @@ -229,4 +241,11 @@ enum __TypeKind { LIST "Indicates this type is a non-null. 'ofType' is a valid field." NON_NULL -} \ No newline at end of file +} + +directive @__defer_internal( + id: Int! + parentDeferId: Int + "A unique identifier for the results." + label: String +) repeatable on FIELD \ No newline at end of file diff --git a/v2/pkg/asttransform/fixtures/simple.golden b/v2/pkg/asttransform/fixtures/simple.golden index fa69f656e6..8efc7f4014 100644 --- a/v2/pkg/asttransform/fixtures/simple.golden +++ b/v2/pkg/asttransform/fixtures/simple.golden @@ -16,19 +16,21 @@ type Hello { __typename: String! } -"The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." 
scalar Int -"The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." scalar Float -"The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." scalar String -"The 'Boolean' scalar type represents 'true' or 'false' ." +"The `Boolean` scalar type represents `true` or `false`." scalar Boolean -"The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID." +""" +The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID. +""" scalar ID "Directs the executor to include this field or fragment only when the argument is true." @@ -53,7 +55,9 @@ directive @deprecated( reason: String = "No longer supported" ) on FIELD_DEFINITION | ARGUMENT_DEFINITION | ENUM_VALUE | INPUT_FIELD_DEFINITION +"Exposes a URL that specifies the behavior of this scalar" directive @specifiedBy( + "The URL that specifies the behavior of this scalar." url: String! 
) on SCALAR @@ -64,6 +68,14 @@ All fields defined within a @oneOf input must be nullable in the schema. """ directive @oneOf on INPUT_OBJECT +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." + if: Boolean! = true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + """ A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. In some cases, you need to provide options to alter GraphQL's execution behavior @@ -229,4 +241,11 @@ enum __TypeKind { LIST "Indicates this type is a non-null. 'ofType' is a valid field." NON_NULL -} \ No newline at end of file +} + +directive @__defer_internal( + id: Int! + parentDeferId: Int + "A unique identifier for the results." + label: String +) repeatable on FIELD \ No newline at end of file diff --git a/v2/pkg/asttransform/fixtures/subscription_only.golden b/v2/pkg/asttransform/fixtures/subscription_only.golden index 923037d7ff..2a79414f75 100644 --- a/v2/pkg/asttransform/fixtures/subscription_only.golden +++ b/v2/pkg/asttransform/fixtures/subscription_only.golden @@ -7,19 +7,21 @@ type Subscription { s: String! } -"The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." scalar Int -"The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." 
scalar Float -"The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." scalar String -"The 'Boolean' scalar type represents 'true' or 'false' ." +"The `Boolean` scalar type represents `true` or `false`." scalar Boolean -"The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID." +""" +The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID. +""" scalar ID "Directs the executor to include this field or fragment only when the argument is true." @@ -44,7 +46,9 @@ directive @deprecated( reason: String = "No longer supported" ) on FIELD_DEFINITION | ARGUMENT_DEFINITION | ENUM_VALUE | INPUT_FIELD_DEFINITION +"Exposes a URL that specifies the behavior of this scalar" directive @specifiedBy( + "The URL that specifies the behavior of this scalar." url: String! ) on SCALAR @@ -55,6 +59,14 @@ All fields defined within a @oneOf input must be nullable in the schema. """ directive @oneOf on INPUT_OBJECT +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." 
+ if: Boolean! = true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + """ A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. In some cases, you need to provide options to alter GraphQL's execution behavior @@ -222,6 +234,13 @@ enum __TypeKind { NON_NULL } +directive @__defer_internal( + id: Int! + parentDeferId: Int + "A unique identifier for the results." + label: String +) repeatable on FIELD + type Query { __schema: __Schema! __type(name: String!): __Type diff --git a/v2/pkg/asttransform/fixtures/subscription_renamed.golden b/v2/pkg/asttransform/fixtures/subscription_renamed.golden index 21a6637642..4ee100f7d8 100644 --- a/v2/pkg/asttransform/fixtures/subscription_renamed.golden +++ b/v2/pkg/asttransform/fixtures/subscription_renamed.golden @@ -7,19 +7,21 @@ type Sub { s: String! } -"The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." scalar Int -"The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." scalar Float -"The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." scalar String -"The 'Boolean' scalar type represents 'true' or 'false' ." +"The `Boolean` scalar type represents `true` or `false`." 
scalar Boolean -"The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID." +""" +The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID. +""" scalar ID "Directs the executor to include this field or fragment only when the argument is true." @@ -44,7 +46,9 @@ directive @deprecated( reason: String = "No longer supported" ) on FIELD_DEFINITION | ARGUMENT_DEFINITION | ENUM_VALUE | INPUT_FIELD_DEFINITION +"Exposes a URL that specifies the behavior of this scalar" directive @specifiedBy( + "The URL that specifies the behavior of this scalar." url: String! ) on SCALAR @@ -55,6 +59,14 @@ All fields defined within a @oneOf input must be nullable in the schema. """ directive @oneOf on INPUT_OBJECT +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." + if: Boolean! = true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + """ A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. In some cases, you need to provide options to alter GraphQL's execution behavior @@ -222,6 +234,13 @@ enum __TypeKind { NON_NULL } +directive @__defer_internal( + id: Int! + parentDeferId: Int + "A unique identifier for the results." + label: String +) repeatable on FIELD + type Query { __schema: __Schema! 
__type(name: String!): __Type diff --git a/v2/pkg/asttransform/fixtures/with_mutation_subscription.golden b/v2/pkg/asttransform/fixtures/with_mutation_subscription.golden index 709ad78ac1..40c8e74061 100644 --- a/v2/pkg/asttransform/fixtures/with_mutation_subscription.golden +++ b/v2/pkg/asttransform/fixtures/with_mutation_subscription.golden @@ -27,19 +27,21 @@ type Hello { __typename: String! } -"The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." scalar Int -"The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." scalar Float -"The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." scalar String -"The 'Boolean' scalar type represents 'true' or 'false' ." +"The `Boolean` scalar type represents `true` or `false`." scalar Boolean -"The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID." +""" +The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. 
The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID. +""" scalar ID "Directs the executor to include this field or fragment only when the argument is true." @@ -64,7 +66,9 @@ directive @deprecated( reason: String = "No longer supported" ) on FIELD_DEFINITION | ARGUMENT_DEFINITION | ENUM_VALUE | INPUT_FIELD_DEFINITION +"Exposes a URL that specifies the behavior of this scalar" directive @specifiedBy( + "The URL that specifies the behavior of this scalar." url: String! ) on SCALAR @@ -75,6 +79,14 @@ All fields defined within a @oneOf input must be nullable in the schema. """ directive @oneOf on INPUT_OBJECT +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." + if: Boolean! = true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + """ A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. In some cases, you need to provide options to alter GraphQL's execution behavior @@ -240,4 +252,11 @@ enum __TypeKind { LIST "Indicates this type is a non-null. 'ofType' is a valid field." NON_NULL -} \ No newline at end of file +} + +directive @__defer_internal( + id: Int! + parentDeferId: Int + "A unique identifier for the results." + label: String +) repeatable on FIELD \ No newline at end of file diff --git a/v2/pkg/asttransform/internal.graphql b/v2/pkg/asttransform/internal.graphql new file mode 100644 index 0000000000..52f87f2f6d --- /dev/null +++ b/v2/pkg/asttransform/internal.graphql @@ -0,0 +1,6 @@ +directive @__defer_internal( + id: Int! + parentDeferId: Int + "A unique identifier for the results." 
+ label: String +) repeatable on FIELD \ No newline at end of file diff --git a/v2/pkg/asttransform/stream.graphql b/v2/pkg/asttransform/stream.graphql new file mode 100644 index 0000000000..a0aef5f0a0 --- /dev/null +++ b/v2/pkg/asttransform/stream.graphql @@ -0,0 +1,9 @@ +"Directs the executor to stream this array field when the if argument is true or undefined." +directive @stream( + "A unique identifier for the results." + label: String + "Controls streaming, usually via a variable." + if: Boolean! = true + "The number of results to include in the initial (non-streamed) response." + initialCount: Int = 0 +) on FIELD \ No newline at end of file diff --git a/v2/pkg/engine/datasource/graphql_datasource/graphql_datasource.go b/v2/pkg/engine/datasource/graphql_datasource/graphql_datasource.go index f4268d1f6a..cdeb8474e5 100644 --- a/v2/pkg/engine/datasource/graphql_datasource/graphql_datasource.go +++ b/v2/pkg/engine/datasource/graphql_datasource/graphql_datasource.go @@ -151,6 +151,11 @@ func (p *Planner[T]) EnterDirective(ref int) { } func (p *Planner[T]) addDirectiveToNode(directiveRef int, node ast.Node) { + // do not propagate internal directives to upstream query document + if bytes.Equal(p.visitor.Operation.DirectiveNameBytes(directiveRef), literal.DEFER_INTERNAL) { + return + } + directiveName := p.visitor.Operation.DirectiveNameString(directiveRef) operationType := ast.OperationTypeQuery if !p.dataSourcePlannerConfig.IsNested { diff --git a/v2/pkg/engine/datasource/graphql_datasource/graphql_datasource_defer_test.go b/v2/pkg/engine/datasource/graphql_datasource/graphql_datasource_defer_test.go new file mode 100644 index 0000000000..eecfc5f18d --- /dev/null +++ b/v2/pkg/engine/datasource/graphql_datasource/graphql_datasource_defer_test.go @@ -0,0 +1,690 @@ +package graphql_datasource + +import ( + "testing" + + . 
"github.com/wundergraph/graphql-go-tools/v2/pkg/engine/datasourcetesting" + "github.com/wundergraph/graphql-go-tools/v2/pkg/engine/plan" + "github.com/wundergraph/graphql-go-tools/v2/pkg/engine/postprocess" + "github.com/wundergraph/graphql-go-tools/v2/pkg/engine/resolve" +) + +func TestGraphQLDataSourceDefer(t *testing.T) { + t.Run("basic", func(t *testing.T) { + t.Run("on root query node", func(t *testing.T) { + definition := ` + type User { + id: ID! + name: String! + title: String! + } + + type Query { + user: User! + } + ` + + firstSubgraphSDL := ` + type User { + id: ID! + name: String! + title: String! + } + + type Query { + user: User + } + ` + + firstDatasourceConfiguration := mustDataSourceConfiguration( + t, + "first-service", + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + { + TypeName: "Query", + FieldNames: []string{"user"}, + }, + }, + ChildNodes: []plan.TypeField{ + { + TypeName: "User", + FieldNames: []string{"id", "title", "name"}, + }, + }, + }, + mustCustomConfiguration(t, + ConfigurationInput{ + Fetch: &FetchConfiguration{ + URL: "http://first.service", + }, + SchemaConfiguration: mustSchema(t, + &FederationConfiguration{ + Enabled: true, + ServiceSDL: firstSubgraphSDL, + }, + firstSubgraphSDL, + ), + }, + ), + ) + + planConfiguration := plan.Configuration{ + DataSources: []plan.DataSource{ + firstDatasourceConfiguration, + }, + DisableResolveFieldPositions: true, + Debug: plan.DebugConfiguration{ + PrintQueryPlans: true, + PrintPlanningPaths: true, + + PlanningVisitor: true, + }, + } + + t.Run("defer User.title - defer postprocess disabled", func(t *testing.T) { + RunWithPermutations( + t, + definition, + ` + query User { + user { + name + ... 
@defer { + title + } + } + }`, + "User", + &plan.DeferResponsePlan{ + Response: &resolve.GraphQLDeferResponse{ + Response: &resolve.GraphQLResponse{ + Fetches: resolve.Sequence( + resolve.Single(&resolve.SingleFetch{ + FetchDependencies: resolve.FetchDependencies{ + FetchID: 0, + DeferID: 1, + }, + FetchConfiguration: resolve.FetchConfiguration{ + Input: `{"method":"POST","url":"http://first.service","body":{"query":"{user {title}}"}}`, + PostProcessing: DefaultPostProcessingConfiguration, + DataSource: &Source{}, + }, + DataSourceIdentifier: []byte("graphql_datasource.Source"), + }), + resolve.Single(&resolve.SingleFetch{ + FetchDependencies: resolve.FetchDependencies{ + FetchID: 1, + }, + FetchConfiguration: resolve.FetchConfiguration{ + Input: `{"method":"POST","url":"http://first.service","body":{"query":"{user {name}}"}}`, + PostProcessing: DefaultPostProcessingConfiguration, + DataSource: &Source{}, + }, + DataSourceIdentifier: []byte("graphql_datasource.Source"), + }), + ), + Data: &resolve.Object{ + Fields: []*resolve.Field{ + { + Name: []byte("user"), + Value: &resolve.Object{ + Path: []string{"user"}, + Nullable: false, + PossibleTypes: map[string]struct{}{ + "User": {}, + }, + TypeName: "User", + Fields: []*resolve.Field{ + { + Name: []byte("name"), + Value: &resolve.String{ + Path: []string{"name"}, + }, + }, + { + Name: []byte("title"), + Defer: &resolve.DeferField{ + DeferID: 1, + }, + Value: &resolve.String{ + Path: []string{"title"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + planConfiguration, + WithDefaultCustomPostProcessor(postprocess.DisableResolveInputTemplates(), postprocess.DisableCreateConcreteSingleFetchTypes(), postprocess.DisableCreateParallelNodes(), postprocess.DisableMergeFields(), postprocess.DisableExtractDeferFetches()), + WithDefer(), + WithCalculateFieldDependencies(), + ) + }) + + t.Run("defer User.title", func(t *testing.T) { + RunWithPermutations( + t, + definition, + ` + query User { + user { + name + ... 
@defer { + title + } + } + }`, + "User", + &plan.DeferResponsePlan{ + Response: &resolve.GraphQLDeferResponse{ + Response: &resolve.GraphQLResponse{ + Fetches: resolve.Sequence( + resolve.Single(&resolve.SingleFetch{ + FetchDependencies: resolve.FetchDependencies{ + FetchID: 1, + }, + FetchConfiguration: resolve.FetchConfiguration{ + Input: `{"method":"POST","url":"http://first.service","body":{"query":"{user {name}}"}}`, + PostProcessing: DefaultPostProcessingConfiguration, + DataSource: &Source{}, + }, + DataSourceIdentifier: []byte("graphql_datasource.Source"), + }), + ), + Data: &resolve.Object{ + Fields: []*resolve.Field{ + { + Name: []byte("user"), + Value: &resolve.Object{ + Path: []string{"user"}, + Nullable: false, + PossibleTypes: map[string]struct{}{ + "User": {}, + }, + TypeName: "User", + Fields: []*resolve.Field{ + { + Name: []byte("name"), + Value: &resolve.String{ + Path: []string{"name"}, + }, + }, + { + Name: []byte("title"), + Defer: &resolve.DeferField{ + DeferID: 1, + }, + Value: &resolve.String{ + Path: []string{"title"}, + }, + }, + }, + }, + }, + }, + }, + }, + Defers: []*resolve.DeferFetchGroup{ + { + DeferID: 1, + Fetches: resolve.Sequence( + resolve.Single(&resolve.SingleFetch{ + FetchDependencies: resolve.FetchDependencies{ + FetchID: 0, + DeferID: 1, + }, + FetchConfiguration: resolve.FetchConfiguration{ + Input: `{"method":"POST","url":"http://first.service","body":{"query":"{user {title}}"}}`, + PostProcessing: DefaultPostProcessingConfiguration, + DataSource: &Source{}, + }, + DataSourceIdentifier: []byte("graphql_datasource.Source"), + }), + ), + }, + }, + }, + }, + planConfiguration, + WithDefaultPostProcessor(), + WithDefer(), + WithCalculateFieldDependencies(), + ) + }) + }) + + t.Run("on entity from other subgraph", func(t *testing.T) { + definition := ` + type User { + id: ID! + title: String! + firstName: String! + lastName: String! + } + + type Query { + user: User! 
+ } + ` + + firstSubgraphSDL := ` + type User @key(fields: "id") { + id: ID! + title: String! + } + + type Query { + user: User + } + ` + + firstDatasourceConfiguration := mustDataSourceConfiguration( + t, + "first-service", + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + { + TypeName: "Query", + FieldNames: []string{"user"}, + }, + { + TypeName: "User", + FieldNames: []string{"id", "title"}, + }, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + { + TypeName: "User", + SelectionSet: "id", + }, + }, + }, + }, + mustCustomConfiguration(t, + ConfigurationInput{ + Fetch: &FetchConfiguration{ + URL: "http://first.service", + }, + SchemaConfiguration: mustSchema(t, + &FederationConfiguration{ + Enabled: true, + ServiceSDL: firstSubgraphSDL, + }, + firstSubgraphSDL, + ), + }, + ), + ) + + secondSubgraphSDL := ` + type User @key(fields: "id") { + id: ID! + firstName: String! + lastName: String! + } + ` + + secondDatasourceConfiguration := mustDataSourceConfiguration( + t, + "second-service", + &plan.DataSourceMetadata{ + RootNodes: []plan.TypeField{ + { + TypeName: "User", + FieldNames: []string{"id", "firstName", "lastName"}, + }, + }, + FederationMetaData: plan.FederationMetaData{ + Keys: plan.FederationFieldConfigurations{ + { + TypeName: "User", + SelectionSet: "id", + }, + }, + }, + }, + mustCustomConfiguration(t, + ConfigurationInput{ + Fetch: &FetchConfiguration{ + URL: "http://second.service", + }, + SchemaConfiguration: mustSchema(t, + &FederationConfiguration{ + Enabled: true, + ServiceSDL: secondSubgraphSDL, + }, + secondSubgraphSDL, + ), + }, + ), + ) + + planConfiguration := plan.Configuration{ + DataSources: []plan.DataSource{ + firstDatasourceConfiguration, + secondDatasourceConfiguration, + }, + DisableResolveFieldPositions: true, + Debug: plan.DebugConfiguration{ + PrintQueryPlans: true, + PrintPlanningPaths: true, + }, + } + + t.Run("defer User.lastName. 
defer postprocess disabled", func(t *testing.T) { + RunWithPermutations( + t, + definition, + ` + query User { + user { + title + firstName + ... @defer { + lastName + } + } + }`, + "User", + &plan.DeferResponsePlan{ + Response: &resolve.GraphQLDeferResponse{ + Response: &resolve.GraphQLResponse{ + Fetches: resolve.Sequence( + resolve.Single(&resolve.SingleFetch{ + FetchConfiguration: resolve.FetchConfiguration{ + Input: `{"method":"POST","url":"http://first.service","body":{"query":"{user {title __typename id}}"}}`, + PostProcessing: DefaultPostProcessingConfiguration, + DataSource: &Source{}, + }, + DataSourceIdentifier: []byte("graphql_datasource.Source"), + }), + resolve.SingleWithPath(&resolve.SingleFetch{ + FetchDependencies: resolve.FetchDependencies{ + FetchID: 1, + DependsOnFetchIDs: []int{0}, + }, FetchConfiguration: resolve.FetchConfiguration{ + RequiresEntityBatchFetch: false, + RequiresEntityFetch: true, + Input: `{"method":"POST","url":"http://second.service","body":{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... 
on User {__typename firstName}}}","variables":{"representations":[$$0$$]}}}`, + DataSource: &Source{}, + SetTemplateOutputToNullOnVariableNull: true, + Variables: []resolve.Variable{ + &resolve.ResolvableObjectVariable{ + Renderer: resolve.NewGraphQLVariableResolveRenderer(&resolve.Object{ + Nullable: true, + Fields: []*resolve.Field{ + { + Name: []byte("__typename"), + Value: &resolve.String{ + Path: []string{"__typename"}, + }, + OnTypeNames: [][]byte{[]byte("User")}, + }, + { + Name: []byte("id"), + Value: &resolve.Scalar{ + Path: []string{"id"}, + }, + OnTypeNames: [][]byte{[]byte("User")}, + }, + }, + }), + }, + }, + PostProcessing: SingleEntityPostProcessingConfiguration, + }, + DataSourceIdentifier: []byte("graphql_datasource.Source"), + }, "user", resolve.ObjectPath("user")), + resolve.SingleWithPath(&resolve.SingleFetch{ + FetchDependencies: resolve.FetchDependencies{ + FetchID: 2, + DependsOnFetchIDs: []int{0}, + DeferID: 1, + }, FetchConfiguration: resolve.FetchConfiguration{ + RequiresEntityBatchFetch: false, + RequiresEntityFetch: true, + Input: `{"method":"POST","url":"http://second.service","body":{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... 
on User {__typename lastName}}}","variables":{"representations":[$$0$$]}}}`, + DataSource: &Source{}, + SetTemplateOutputToNullOnVariableNull: true, + Variables: []resolve.Variable{ + &resolve.ResolvableObjectVariable{ + Renderer: resolve.NewGraphQLVariableResolveRenderer(&resolve.Object{ + Nullable: true, + Fields: []*resolve.Field{ + { + Name: []byte("__typename"), + Value: &resolve.String{ + Path: []string{"__typename"}, + }, + OnTypeNames: [][]byte{[]byte("User")}, + }, + { + Name: []byte("id"), + Value: &resolve.Scalar{ + Path: []string{"id"}, + }, + OnTypeNames: [][]byte{[]byte("User")}, + }, + }, + }), + }, + }, + PostProcessing: SingleEntityPostProcessingConfiguration, + }, + DataSourceIdentifier: []byte("graphql_datasource.Source"), + }, "user", resolve.ObjectPath("user")), + ), + Data: &resolve.Object{ + Fields: []*resolve.Field{ + { + Name: []byte("user"), + Value: &resolve.Object{ + Path: []string{"user"}, + Nullable: false, + PossibleTypes: map[string]struct{}{ + "User": {}, + }, + TypeName: "User", + Fields: []*resolve.Field{ + { + Name: []byte("title"), + Value: &resolve.String{ + Path: []string{"title"}, + }, + }, + { + Name: []byte("firstName"), + Value: &resolve.String{ + Path: []string{"firstName"}, + }, + }, + { + Name: []byte("lastName"), + Defer: &resolve.DeferField{ + DeferID: 1, + }, + Value: &resolve.String{ + Path: []string{"lastName"}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + planConfiguration, + WithDefaultCustomPostProcessor(postprocess.DisableResolveInputTemplates(), postprocess.DisableCreateConcreteSingleFetchTypes(), postprocess.DisableCreateParallelNodes(), postprocess.DisableMergeFields(), postprocess.DisableExtractDeferFetches()), + WithDefer(), + WithCalculateFieldDependencies(), + ) + }) + + t.Run("defer User.lastName", func(t *testing.T) { + RunWithPermutations( + t, + definition, + ` + query User { + user { + title + firstName + ... 
@defer { + lastName + } + } + }`, + "User", + &plan.DeferResponsePlan{ + Response: &resolve.GraphQLDeferResponse{ + Response: &resolve.GraphQLResponse{ + Fetches: resolve.Sequence( + resolve.Single(&resolve.SingleFetch{ + FetchConfiguration: resolve.FetchConfiguration{ + Input: `{"method":"POST","url":"http://first.service","body":{"query":"{user {title __typename id}}"}}`, + PostProcessing: DefaultPostProcessingConfiguration, + DataSource: &Source{}, + }, + DataSourceIdentifier: []byte("graphql_datasource.Source"), + }), + resolve.SingleWithPath(&resolve.SingleFetch{ + FetchDependencies: resolve.FetchDependencies{ + FetchID: 1, + DependsOnFetchIDs: []int{0}, + }, FetchConfiguration: resolve.FetchConfiguration{ + RequiresEntityBatchFetch: false, + RequiresEntityFetch: true, + Input: `{"method":"POST","url":"http://second.service","body":{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename firstName}}}","variables":{"representations":[$$0$$]}}}`, + DataSource: &Source{}, + SetTemplateOutputToNullOnVariableNull: true, + Variables: []resolve.Variable{ + &resolve.ResolvableObjectVariable{ + Renderer: resolve.NewGraphQLVariableResolveRenderer(&resolve.Object{ + Nullable: true, + Fields: []*resolve.Field{ + { + Name: []byte("__typename"), + Value: &resolve.String{ + Path: []string{"__typename"}, + }, + OnTypeNames: [][]byte{[]byte("User")}, + }, + { + Name: []byte("id"), + Value: &resolve.Scalar{ + Path: []string{"id"}, + }, + OnTypeNames: [][]byte{[]byte("User")}, + }, + }, + }), + }, + }, + PostProcessing: SingleEntityPostProcessingConfiguration, + }, + DataSourceIdentifier: []byte("graphql_datasource.Source"), + }, "user", resolve.ObjectPath("user")), + ), + Data: &resolve.Object{ + Fields: []*resolve.Field{ + { + Name: []byte("user"), + Value: &resolve.Object{ + Path: []string{"user"}, + Nullable: false, + PossibleTypes: map[string]struct{}{ + "User": {}, + }, + TypeName: "User", + Fields: []*resolve.Field{ 
+ { + Name: []byte("title"), + Value: &resolve.String{ + Path: []string{"title"}, + }, + }, + { + Name: []byte("firstName"), + Value: &resolve.String{ + Path: []string{"firstName"}, + }, + }, + { + Name: []byte("lastName"), + Defer: &resolve.DeferField{ + DeferID: 1, + }, + Value: &resolve.String{ + Path: []string{"lastName"}, + }, + }, + }, + }, + }, + }, + }, + }, + Defers: []*resolve.DeferFetchGroup{ + { + DeferID: 1, + Fetches: resolve.Sequence( + resolve.SingleWithPath(&resolve.SingleFetch{ + FetchDependencies: resolve.FetchDependencies{ + FetchID: 2, + DependsOnFetchIDs: []int{0}, + DeferID: 1, + }, FetchConfiguration: resolve.FetchConfiguration{ + RequiresEntityBatchFetch: false, + RequiresEntityFetch: true, + Input: `{"method":"POST","url":"http://second.service","body":{"query":"query($representations: [_Any!]!){_entities(representations: $representations){... on User {__typename lastName}}}","variables":{"representations":[$$0$$]}}}`, + DataSource: &Source{}, + SetTemplateOutputToNullOnVariableNull: true, + Variables: []resolve.Variable{ + &resolve.ResolvableObjectVariable{ + Renderer: resolve.NewGraphQLVariableResolveRenderer(&resolve.Object{ + Nullable: true, + Fields: []*resolve.Field{ + { + Name: []byte("__typename"), + Value: &resolve.String{ + Path: []string{"__typename"}, + }, + OnTypeNames: [][]byte{[]byte("User")}, + }, + { + Name: []byte("id"), + Value: &resolve.Scalar{ + Path: []string{"id"}, + }, + OnTypeNames: [][]byte{[]byte("User")}, + }, + }, + }), + }, + }, + PostProcessing: SingleEntityPostProcessingConfiguration, + }, + DataSourceIdentifier: []byte("graphql_datasource.Source"), + }, "user", resolve.ObjectPath("user")), + ), + }, + }, + }, + }, + planConfiguration, + WithDefaultPostProcessor(), + WithDefer(), + WithCalculateFieldDependencies(), + ) + }) + }) + }) +} diff --git a/v2/pkg/engine/datasource/introspection_datasource/fixtures/schema_introspection.golden 
b/v2/pkg/engine/datasource/introspection_datasource/fixtures/schema_introspection.golden index d6f62343c4..0ed5151ec8 100644 --- a/v2/pkg/engine/datasource/introspection_datasource/fixtures/schema_introspection.golden +++ b/v2/pkg/engine/datasource/introspection_datasource/fixtures/schema_introspection.golden @@ -185,7 +185,7 @@ { "kind": "SCALAR", "name": "Int", - "description": "The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1.", + "description": "The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1.", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -194,7 +194,7 @@ { "kind": "SCALAR", "name": "Float", - "description": "The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point).", + "description": "The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point).", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -203,7 +203,7 @@ { "kind": "SCALAR", "name": "String", - "description": "The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text.", + "description": "The `String` scalar type represents textual data, represented as UTF-8 character sequences. 
The String type is most often used by GraphQL to represent free-form human-readable text.", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -212,7 +212,7 @@ { "kind": "SCALAR", "name": "Boolean", - "description": "The 'Boolean' scalar type represents 'true' or 'false' .", + "description": "The `Boolean` scalar type represents `true` or `false`.", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -221,7 +221,7 @@ { "kind": "SCALAR", "name": "ID", - "description": "The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID.", + "description": "The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. 
When expected as an input type, any string (such as \"4\") or integer (such as 4) input value will be accepted as an ID.", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -323,14 +323,14 @@ }, { "name": "specifiedBy", - "description": "", + "description": "Exposes a URL that specifies the behavior of this scalar", "locations": [ "SCALAR" ], "args": [ { "name": "url", - "description": "", + "description": "The URL that specifies the behavior of this scalar.", "type": { "kind": "NON_NULL", "name": null, @@ -360,6 +360,51 @@ "args": [], "isRepeatable": false, "__typename": "__Directive" + }, + { + "name": "defer", + "description": "Directs the executor to defer this fragment when the if argument is true or undefined.", + "locations": [ + "FRAGMENT_SPREAD", + "INLINE_FRAGMENT" + ], + "args": [ + { + "name": "label", + "description": "A unique identifier for the results.", + "type": { + "kind": "SCALAR", + "name": "String", + "ofType": null, + "__typename": "__Type" + }, + "defaultValue": null, + "isDeprecated": false, + "deprecationReason": null, + "__typename": "__InputValue" + }, + { + "name": "if", + "description": "Controls whether the fragment will be deferred, usually via a variable.", + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "Boolean", + "ofType": null, + "__typename": "__Type" + }, + "__typename": "__Type" + }, + "defaultValue": "true", + "isDeprecated": false, + "deprecationReason": null, + "__typename": "__InputValue" + } + ], + "isRepeatable": false, + "__typename": "__Directive" } ], "__typename": "__Schema" diff --git a/v2/pkg/engine/datasource/introspection_datasource/fixtures/schema_introspection_with_custom_root_operation_types.golden b/v2/pkg/engine/datasource/introspection_datasource/fixtures/schema_introspection_with_custom_root_operation_types.golden index f56fee360b..567ff556de 100644 --- 
a/v2/pkg/engine/datasource/introspection_datasource/fixtures/schema_introspection_with_custom_root_operation_types.golden +++ b/v2/pkg/engine/datasource/introspection_datasource/fixtures/schema_introspection_with_custom_root_operation_types.golden @@ -333,7 +333,7 @@ { "kind": "SCALAR", "name": "Int", - "description": "The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1.", + "description": "The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1.", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -342,7 +342,7 @@ { "kind": "SCALAR", "name": "Float", - "description": "The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point).", + "description": "The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point).", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -351,7 +351,7 @@ { "kind": "SCALAR", "name": "String", - "description": "The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text.", + "description": "The `String` scalar type represents textual data, represented as UTF-8 character sequences. 
The String type is most often used by GraphQL to represent free-form human-readable text.", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -360,7 +360,7 @@ { "kind": "SCALAR", "name": "Boolean", - "description": "The 'Boolean' scalar type represents 'true' or 'false' .", + "description": "The `Boolean` scalar type represents `true` or `false`.", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -369,7 +369,7 @@ { "kind": "SCALAR", "name": "ID", - "description": "The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID.", + "description": "The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. 
When expected as an input type, any string (such as \"4\") or integer (such as 4) input value will be accepted as an ID.", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -471,14 +471,14 @@ }, { "name": "specifiedBy", - "description": "", + "description": "Exposes a URL that specifies the behavior of this scalar", "locations": [ "SCALAR" ], "args": [ { "name": "url", - "description": "", + "description": "The URL that specifies the behavior of this scalar.", "type": { "kind": "NON_NULL", "name": null, @@ -508,6 +508,51 @@ "args": [], "isRepeatable": false, "__typename": "__Directive" + }, + { + "name": "defer", + "description": "Directs the executor to defer this fragment when the if argument is true or undefined.", + "locations": [ + "FRAGMENT_SPREAD", + "INLINE_FRAGMENT" + ], + "args": [ + { + "name": "label", + "description": "A unique identifier for the results.", + "type": { + "kind": "SCALAR", + "name": "String", + "ofType": null, + "__typename": "__Type" + }, + "defaultValue": null, + "isDeprecated": false, + "deprecationReason": null, + "__typename": "__InputValue" + }, + { + "name": "if", + "description": "Controls whether the fragment will be deferred, usually via a variable.", + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "Boolean", + "ofType": null, + "__typename": "__Type" + }, + "__typename": "__Type" + }, + "defaultValue": "true", + "isDeprecated": false, + "deprecationReason": null, + "__typename": "__InputValue" + } + ], + "isRepeatable": false, + "__typename": "__Directive" } ], "__typename": "__Schema" diff --git a/v2/pkg/engine/datasourcetesting/datasourcetesting.go b/v2/pkg/engine/datasourcetesting/datasourcetesting.go index ec9c8907f3..f6f170e67d 100644 --- a/v2/pkg/engine/datasourcetesting/datasourcetesting.go +++ b/v2/pkg/engine/datasourcetesting/datasourcetesting.go @@ -28,18 +28,20 @@ import ( ) type testOptions struct { - postProcessors []*postprocess.Processor - skipReason 
string - withFieldInfo bool - withPrintPlan bool - withFieldDependencies bool - withFetchReasons bool - validationOptions []astvalidation.Option + postProcessor *postprocess.Processor + skipReason string + withFieldInfo bool + withPrintPlan bool + withIncludeFieldDependencies bool + withFetchReasons bool + withDefer bool + validationOptions []astvalidation.Option + withCalculateFieldDependencies bool } -func WithPostProcessors(postProcessors ...*postprocess.Processor) func(*testOptions) { +func WithDefer() func(*testOptions) { return func(o *testOptions) { - o.postProcessors = postProcessors + o.withDefer = true } } @@ -50,11 +52,16 @@ func WithSkipReason(reason string) func(*testOptions) { } func WithDefaultPostProcessor() func(*testOptions) { - return WithPostProcessors(postprocess.NewProcessor(postprocess.DisableResolveInputTemplates(), postprocess.DisableCreateConcreteSingleFetchTypes(), postprocess.DisableCreateParallelNodes(), postprocess.DisableMergeFields())) + return func(o *testOptions) { + o.postProcessor = postprocess.NewProcessor(postprocess.DisableResolveInputTemplates(), postprocess.DisableCreateConcreteSingleFetchTypes(), postprocess.DisableCreateParallelNodes(), postprocess.DisableMergeFields()) + } } func WithDefaultCustomPostProcessor(options ...postprocess.ProcessorOption) func(*testOptions) { - return WithPostProcessors(postprocess.NewProcessor(options...)) + // TODO: rename to WithPostProcessor + return func(o *testOptions) { + o.postProcessor = postprocess.NewProcessor(options...) 
+ } } func WithFieldInfo() func(*testOptions) { @@ -70,17 +77,25 @@ func WithPrintPlan() func(*testOptions) { } } -func WithFieldDependencies() func(*testOptions) { +func WithIncludeFieldDependencies() func(*testOptions) { return func(o *testOptions) { o.withFieldInfo = true - o.withFieldDependencies = true + o.withIncludeFieldDependencies = true + o.withCalculateFieldDependencies = true + } +} + +func WithCalculateFieldDependencies() func(*testOptions) { + return func(o *testOptions) { + o.withCalculateFieldDependencies = true } } func WithFetchReasons() func(*testOptions) { return func(o *testOptions) { o.withFieldInfo = true - o.withFieldDependencies = true + o.withIncludeFieldDependencies = true + o.withCalculateFieldDependencies = true o.withFetchReasons = true } } @@ -150,6 +165,7 @@ func RunTestWithVariables(definition, operation, operationName, variables string // by default, we don't want to have field info in the tests because it's too verbose config.DisableIncludeInfo = true config.DisableIncludeFieldDependencies = true + config.DisableCalculateFieldDependencies = true opts := &testOptions{} for _, o := range options { @@ -160,10 +176,14 @@ func RunTestWithVariables(definition, operation, operationName, variables string config.DisableIncludeInfo = false } - if opts.withFieldDependencies { + if opts.withIncludeFieldDependencies { config.DisableIncludeFieldDependencies = false } + if opts.withCalculateFieldDependencies { + config.DisableCalculateFieldDependencies = false + } + if opts.withFetchReasons { config.BuildFetchReasons = true } @@ -177,11 +197,24 @@ func RunTestWithVariables(definition, operation, operationName, variables string if variables != "" { op.Input.Variables = []byte(variables) } + err := asttransform.MergeDefinitionWithBaseSchema(&def) if err != nil { t.Fatal(err) } - norm := astnormalization.NewWithOpts(astnormalization.WithExtractVariables(), astnormalization.WithInlineFragmentSpreads(), 
astnormalization.WithRemoveFragmentDefinitions(), astnormalization.WithRemoveUnusedVariables()) + + normalizationOptions := []astnormalization.Option{ + astnormalization.WithExtractVariables(), + astnormalization.WithInlineFragmentSpreads(), + astnormalization.WithRemoveFragmentDefinitions(), + astnormalization.WithRemoveUnusedVariables(), + } + + if opts.withDefer { + normalizationOptions = append(normalizationOptions, astnormalization.WithInlineDefer()) + } + + norm := astnormalization.NewWithOpts(normalizationOptions...) var report operationreport.Report norm.NormalizeOperation(&op, &def, &report) @@ -212,10 +245,8 @@ func RunTestWithVariables(definition, operation, operationName, variables string t.Fatal(report.Error()) } - if opts.postProcessors != nil { - for _, processor := range opts.postProcessors { - processor.Process(actualPlan) - } + if opts.postProcessor != nil { + opts.postProcessor.Process(actualPlan) } if opts.withPrintPlan { diff --git a/v2/pkg/engine/plan/abstract_selection_rewriter.go b/v2/pkg/engine/plan/abstract_selection_rewriter.go index 2cf814ee5e..1864ca5ecd 100644 --- a/v2/pkg/engine/plan/abstract_selection_rewriter.go +++ b/v2/pkg/engine/plan/abstract_selection_rewriter.go @@ -260,8 +260,6 @@ func (r *fieldSelectionRewriter) unionFieldSelectionNeedsRewrite(selectionSetInf func (r *fieldSelectionRewriter) rewriteUnionSelection(fieldRef int, fieldInfo selectionSetInfo, unionTypeNames []string) error { newSelectionRefs := make([]int, 0, len(unionTypeNames)+1) // 1 for __typename - r.preserveTypeNameSelection(fieldInfo, &newSelectionRefs) - r.flattenFragmentOnUnion(fieldInfo, unionTypeNames, &newSelectionRefs) return r.replaceFieldSelections(fieldRef, newSelectionRefs) @@ -276,10 +274,14 @@ func (r *fieldSelectionRewriter) replaceFieldSelections(fieldRef int, newSelecti } if len(newSelectionRefs) == 0 { + deferID, _ := r.operation.FieldInternalDeferID(fieldRef) // we have to add __typename selection in case there is no other selections - 
typeNameSelectionRef, typeNameFieldRef := r.typeNameSelection() + typeNameSelectionRef, typeNameFieldRef := r.typeNameSelection(deferID) r.skipFieldRefs = append(r.skipFieldRefs, typeNameFieldRef) r.operation.AddSelectionRefToSelectionSet(fieldSelectionSetRef, typeNameSelectionRef) + + // if there are no other selections we can skip normalization + return nil } normalizer := astnormalization.NewAbstractFieldNormalizer(r.operation, r.definition, fieldRef) @@ -579,7 +581,8 @@ func (r *fieldSelectionRewriter) rewriteInterfaceSelection(fieldRef int, fieldIn // When we have fragments on concrete types, // And we do not have __typename selection - we are adding it if fieldInfo.isInterfaceObject && !fieldInfo.hasTypeNameSelection && fieldInfo.hasInlineFragmentsOnObjects { - typeNameSelectionRef, typeNameFieldRef := r.typeNameSelection() + deferID, _ := r.operation.FieldInternalDeferID(fieldRef) + typeNameSelectionRef, typeNameFieldRef := r.typeNameSelection(deferID) r.skipFieldRefs = append(r.skipFieldRefs, typeNameFieldRef) newSelectionRefs = append(newSelectionRefs, typeNameSelectionRef) } @@ -608,35 +611,15 @@ func (r *fieldSelectionRewriter) flattenFragmentOnInterface(selectionSetInfo sel } } - for _, inlineFragmentInfo := range selectionSetInfo.inlineFragmentsOnObjects { - // for object fragments it is necessary to check if inline fragment type is allowed - if !slices.Contains(allowedImplementingTypes, inlineFragmentInfo.typeName) { - // remove fragment which not allowed - continue - } - - r.flattenFragmentOnObject(inlineFragmentInfo.selectionSetInfo, inlineFragmentInfo.typeName, selectionRefs) - } - - for _, inlineFragmentInfo := range selectionSetInfo.inlineFragmentsOnInterfaces { - // We do not check if interface fragment type not exists in the current datasource - // in case of interfaces the only thing which is matter is an interception of implementing types - // and parent allowed types - - r.flattenFragmentOnInterface(inlineFragmentInfo.selectionSetInfo,
inlineFragmentInfo.typeNamesImplementingInterface, allowedImplementingTypes, selectionRefs) - } - - for _, inlineFragmentInfo := range selectionSetInfo.inlineFragmentsOnUnions { - // We do not check if union fragment type not exists in the current datasource - // in case of unions the only thing which is matter is an interception of implementing types - // and parent allowed types - r.flattenFragmentOnUnion(inlineFragmentInfo.selectionSetInfo, allowedImplementingTypes, selectionRefs) - } + r.flattenFragments(selectionSetInfo, allowedImplementingTypes, selectionRefs) } func (r *fieldSelectionRewriter) flattenFragmentOnUnion(selectionSetInfo selectionSetInfo, allowedTypeNames []string, selectionRefs *[]int) { r.preserveTypeNameSelection(selectionSetInfo, selectionRefs) + r.flattenFragments(selectionSetInfo, allowedTypeNames, selectionRefs) +} +func (r *fieldSelectionRewriter) flattenFragments(selectionSetInfo selectionSetInfo, allowedTypeNames []string, selectionRefs *[]int) { for _, inlineFragmentInfo := range selectionSetInfo.inlineFragmentsOnObjects { // for object fragments it is necessary to check if inline fragment type is allowed if !slices.Contains(allowedTypeNames, inlineFragmentInfo.typeName) { diff --git a/v2/pkg/engine/plan/abstract_selection_rewriter_helpers.go b/v2/pkg/engine/plan/abstract_selection_rewriter_helpers.go index b0f78a1edb..5da5c5b40d 100644 --- a/v2/pkg/engine/plan/abstract_selection_rewriter_helpers.go +++ b/v2/pkg/engine/plan/abstract_selection_rewriter_helpers.go @@ -437,10 +437,15 @@ func (r *fieldSelectionRewriter) createFragmentSelection(typeName string, fields }) } -func (r *fieldSelectionRewriter) typeNameSelection() (selectionRef int, fieldRef int) { +func (r *fieldSelectionRewriter) typeNameSelection(deferID int) (selectionRef int, fieldRef int) { field := r.operation.AddField(ast.Field{ - Name: r.operation.Input.AppendInputString("__typename"), + Name: r.operation.Input.AppendInputString(typeNameField), }) + + if deferID != 0 { 
+ r.operation.AddDeferInternalDirectiveToField(field.Ref, deferID, "", 0) + } + return r.operation.AddSelectionToDocument(ast.Selection{ Ref: field.Ref, Kind: ast.SelectionKindField, @@ -453,7 +458,7 @@ func (r *fieldSelectionRewriter) preserveTypeNameSelection(selectionSetInfo sele return } - selectionRef, _ := r.typeNameSelection() + selectionRef, _ := r.typeNameSelection(selectionSetInfo.typenameFieldDeferId) *selectionRefs = append(*selectionRefs, selectionRef) } diff --git a/v2/pkg/engine/plan/abstract_selection_rewriter_info.go b/v2/pkg/engine/plan/abstract_selection_rewriter_info.go index 905ce2aa52..3e0cb85c18 100644 --- a/v2/pkg/engine/plan/abstract_selection_rewriter_info.go +++ b/v2/pkg/engine/plan/abstract_selection_rewriter_info.go @@ -18,6 +18,7 @@ type selectionSetInfo struct { hasInlineFragmentsOnInterfaces bool inlineFragmentsOnUnions []inlineFragmentSelectionOnUnion hasInlineFragmentsOnUnions bool + typenameFieldDeferId int } type fieldSelection struct { @@ -62,15 +63,16 @@ func (s *inlineFragmentSelection) isFragmentOnInterface() bool { return s.definitionNodeKind == ast.NodeKindInterfaceTypeDefinition } -func (r *fieldSelectionRewriter) selectionSetFieldSelections(selectionSetRef int) (fieldSelections []fieldSelection, hasTypename bool) { +func (r *fieldSelectionRewriter) selectionSetFieldSelections(selectionSetRef int) (fieldSelections []fieldSelection, hasTypename bool, typeNameFieldDeferID int) { fieldSelectionRefs := r.operation.SelectionSetFieldSelections(selectionSetRef) fieldSelections = make([]fieldSelection, 0, len(fieldSelectionRefs)) for _, fieldSelectionRef := range fieldSelectionRefs { fieldRef := r.operation.Selections[fieldSelectionRef].Ref fieldName := r.operation.FieldNameString(fieldRef) - if fieldName == "__typename" { + if fieldName == typeNameField { hasTypename = true + typeNameFieldDeferID, _ = r.operation.FieldInternalDeferID(fieldRef) } fieldSelections = append(fieldSelections, fieldSelection{ @@ -79,7 +81,7 @@ func (r 
*fieldSelectionRewriter) selectionSetFieldSelections(selectionSetRef int }) } - return fieldSelections, hasTypename + return fieldSelections, hasTypename, typeNameFieldDeferID } func (r *fieldSelectionRewriter) collectFieldInformation(fieldRef int) (selectionSetInfo, error) { @@ -185,7 +187,7 @@ func (r *fieldSelectionRewriter) collectInlineFragmentInformation( } func (r *fieldSelectionRewriter) collectSelectionSetInformation(selectionSetRef int) (selectionSetInfo, error) { - fieldSelections, hasSharedTypename := r.selectionSetFieldSelections(selectionSetRef) + fieldSelections, hasSharedTypename, typenameFieldDeferId := r.selectionSetFieldSelections(selectionSetRef) inlineFragmentSelectionRefs := r.operation.SelectionSetInlineFragmentSelections(selectionSetRef) inlineFragmentSelectionsOnObjects := make([]inlineFragmentSelection, 0, len(inlineFragmentSelectionRefs)) @@ -203,6 +205,7 @@ func (r *fieldSelectionRewriter) collectSelectionSetInformation(selectionSetRef fields: fieldSelections, hasFields: len(fieldSelections) > 0, hasTypeNameSelection: hasSharedTypename, + typenameFieldDeferId: typenameFieldDeferId, inlineFragmentsOnObjects: inlineFragmentSelectionsOnObjects, hasInlineFragmentsOnObjects: len(inlineFragmentSelectionsOnObjects) > 0, inlineFragmentsOnInterfaces: inlineFragmentsOnInterfaces, diff --git a/v2/pkg/engine/plan/abstract_selection_rewriter_test.go b/v2/pkg/engine/plan/abstract_selection_rewriter_test.go index 61cdf08cd5..992f4aead2 100644 --- a/v2/pkg/engine/plan/abstract_selection_rewriter_test.go +++ b/v2/pkg/engine/plan/abstract_selection_rewriter_test.go @@ -4147,6 +4147,279 @@ func TestInterfaceSelectionRewriter_RewriteOperation(t *testing.T) { }`, shouldRewrite: true, }, + { + name: "union selection with deferred __typename - preserves defer directive on __typename after rewrite", + fieldName: "accounts", + definition: definition, + upstreamDefinition: ` + type User { + id: ID! + name: String! + isUser: Boolean! + } + + type Admin { + id: ID! 
+ } + + union Account = User | Admin + + type Query { + accounts: [Account!]! + } + `, + dsBuilder: dsb(). + RootNode("Query", "iface"). + RootNode("User", "id", "name", "isUser"). + RootNode("Admin", "id"). + KeysMetadata(FederationFieldConfigurations{ + { + TypeName: "User", + SelectionSet: "id", + }, + { + TypeName: "Admin", + SelectionSet: "id", + }, + }), + operation: ` + query { + accounts { + __typename @__defer_internal(id: 1) + ... on Node { + name + } + } + }`, + expectedOperation: ` + query { + accounts { + __typename @__defer_internal(id: 1) + ... on Admin { + name + } + ... on User { + name + } + } + }`, + shouldRewrite: true, + }, + { + name: "interface selection with deferred __typename - preserves defer directive when shared field is copied into fragments", + definition: definition, + upstreamDefinition: ` + interface Node { + id: ID! + name: String! + } + + type User implements Node { + id: ID! + name: String! + isUser: Boolean! + } + + type Admin implements Node { + id: ID! + } + + type Query { + iface: Node! + } + `, + dsBuilder: dsb(). + RootNode("Query", "iface"). + RootNode("User", "id", "isUser"). + RootNode("Admin", "id"). + KeysMetadata(FederationFieldConfigurations{ + { + TypeName: "User", + SelectionSet: "id", + }, + { + TypeName: "Admin", + SelectionSet: "id", + }, + }), + operation: ` + query { + iface { + __typename @__defer_internal(id: 1) + name + ... on User { + isUser + } + ... on Admin { + id + } + } + }`, + expectedOperation: ` + query { + iface { + ... on Admin { + __typename @__defer_internal(id: 1) + name + id + } + ... on User { + __typename @__defer_internal(id: 1) + name + isUser + } + } + }`, + shouldRewrite: true, + }, + { + name: "interface field with defer directive - fallback __typename inherits defer directive when all fragments are removed", + definition: ` + interface Node { + id: ID! + name: String! + } + + type User implements Node { + id: ID! + name: String! + isUser: Boolean! 
+ } + + type Admin implements Node { + id: ID! + name: String! + } + + type Moderator implements Node { + id: ID! + name: String! + isModerator: Boolean! + } + + type Query { + iface: Node! + } + `, + upstreamDefinition: ` + interface Node { + id: ID! + name: String! + } + + type User implements Node { + id: ID! + name: String! + isUser: Boolean! + } + + type Admin implements Node { + id: ID! + name: String! + } + + type Query { + iface: Node! + } + `, + dsBuilder: dsb(). + RootNode("Query", "iface"). + RootNode("User", "id", "name", "isUser"). + RootNode("Admin", "id"). + KeysMetadata(FederationFieldConfigurations{ + { + TypeName: "User", + SelectionSet: "id", + }, + { + TypeName: "Admin", + SelectionSet: "id", + }, + }), + operation: ` + query { + iface @__defer_internal(id: 1) { + ... on Moderator { + isModerator + } + } + }`, + expectedOperation: ` + query { + iface @__defer_internal(id: 1) { + __typename @__defer_internal(id: 1) + } + }`, + shouldRewrite: true, + }, + { + name: "interface object field with defer directive - added __typename inherits defer directive from field", + definition: ` + type User implements Account { + id: ID! + name: String! + } + + type Admin implements Account { + id: ID! + name: String! + login: String! + } + + interface Account { + id: ID! + name: String! + } + + type Query { + user: Account! + }`, + upstreamDefinition: ` + type Account @key(fields: "id") @interfaceObject { + id: ID! + name: String! + } + + type Query { + user: Account! + }`, + dsBuilder: dsb(). + RootNode("Query", "user"). + RootNode("Account", "id", "name"). + WithMetadata(func(m *FederationMetaData) { + m.InterfaceObjects = []EntityInterfaceConfiguration{ + { + InterfaceTypeName: "Account", + ConcreteTypeNames: []string{"Admin", "User"}, + }, + } + m.Keys = []FederationFieldConfiguration{ + { + TypeName: "Account", + SelectionSet: "id", + }, + } + }), + fieldName: "user", + operation: ` + query { + user @__defer_internal(id: 1) { + ... 
on Admin { + id + } + } + }`, + expectedOperation: ` + query { + user @__defer_internal(id: 1) { + __typename @__defer_internal(id: 1) + ... on Admin { + id + } + } + }`, + shouldRewrite: true, + }, } for _, testCase := range testCases { diff --git a/v2/pkg/engine/plan/analyze_plan_kind.go b/v2/pkg/engine/plan/analyze_plan_kind.go deleted file mode 100644 index 9d949884af..0000000000 --- a/v2/pkg/engine/plan/analyze_plan_kind.go +++ /dev/null @@ -1,64 +0,0 @@ -package plan - -import ( - "github.com/wundergraph/graphql-go-tools/v2/pkg/ast" - "github.com/wundergraph/graphql-go-tools/v2/pkg/astvisitor" - "github.com/wundergraph/graphql-go-tools/v2/pkg/operationreport" -) - -func AnalyzePlanKind(operation, definition *ast.Document, operationName string) (operationType ast.OperationType, streaming bool, error error) { - walker := astvisitor.NewWalkerWithID(48, "PlanKindVisitor") - visitor := &planKindVisitor{ - Walker: &walker, - operationName: operationName, - } - - walker.RegisterEnterDocumentVisitor(visitor) - walker.RegisterEnterOperationVisitor(visitor) - walker.RegisterEnterDirectiveVisitor(visitor) - - var report operationreport.Report - walker.Walk(operation, definition, &report) - if report.HasErrors() { - return ast.OperationTypeUnknown, false, report - } - operationType = visitor.operationType - streaming = visitor.hasDeferDirective || visitor.hasStreamDirective - return -} - -type planKindVisitor struct { - *astvisitor.Walker - - operation, definition *ast.Document - operationName string - hasStreamDirective, hasDeferDirective bool - operationType ast.OperationType -} - -func (p *planKindVisitor) EnterDirective(ref int) { - directiveName := p.operation.DirectiveNameString(ref) - ancestor := p.Ancestors[len(p.Ancestors)-1] - switch ancestor.Kind { - case ast.NodeKindField: - switch directiveName { - case "defer": - p.hasDeferDirective = true - case "stream": - p.hasStreamDirective = true - } - } -} - -func (p *planKindVisitor) EnterOperationDefinition(ref 
int) { - name := p.operation.OperationDefinitionNameString(ref) - if p.operationName != name { - p.SkipNode() - return - } - p.operationType = p.operation.OperationDefinitions[ref].OperationType -} - -func (p *planKindVisitor) EnterDocument(operation, definition *ast.Document) { - p.operation, p.definition = operation, definition -} diff --git a/v2/pkg/engine/plan/analyze_plan_kind_test.go b/v2/pkg/engine/plan/analyze_plan_kind_test.go deleted file mode 100644 index f489b10ca5..0000000000 --- a/v2/pkg/engine/plan/analyze_plan_kind_test.go +++ /dev/null @@ -1,186 +0,0 @@ -package plan - -import ( - "testing" - - "github.com/stretchr/testify/assert" - - "github.com/wundergraph/graphql-go-tools/v2/pkg/ast" - "github.com/wundergraph/graphql-go-tools/v2/pkg/asttransform" - "github.com/wundergraph/graphql-go-tools/v2/pkg/internal/unsafeparser" -) - -type expectation func(t *testing.T, operationKind ast.OperationType, streaming bool, err error) - -func mustNotErr() expectation { - return func(t *testing.T, operationKind ast.OperationType, streaming bool, err error) { - assert.NoError(t, err) - } -} - -func mustSubscription(expect bool) expectation { - return func(t *testing.T, operationKind ast.OperationType, streaming bool, err error) { - if expect { - assert.Equal(t, ast.OperationTypeSubscription, operationKind) - } else { - assert.NotEqual(t, ast.OperationTypeSubscription, operationKind) - } - } -} - -func mustStreaming(expectStreaming bool) expectation { - return func(t *testing.T, operationKind ast.OperationType, streaming bool, err error) { - assert.Equal(t, expectStreaming, streaming) - } -} - -func TestAnalyzePlanKind(t *testing.T) { - run := func(definition, operation, operationName string, expectations ...expectation) func(t *testing.T) { - return func(t *testing.T) { - def := unsafeparser.ParseGraphqlDocumentString(definition) - op := unsafeparser.ParseGraphqlDocumentString(operation) - err := asttransform.MergeDefinitionWithBaseSchema(&def) - if err != nil { - 
t.Fatal(err) - } - operationKind, streaming, err := AnalyzePlanKind(&op, &def, operationName) - for i := range expectations { - expectations[i](t, operationKind, streaming, err) - } - } - } - - t.Run("query", run(testDefinition, ` - query MyQuery($id: ID!) { - droid(id: $id){ - name - friends { - name - } - friends { - name - } - primaryFunction - favoriteEpisode - } - }`, - "MyQuery", - mustNotErr(), - mustStreaming(false), - mustSubscription(false), - )) - t.Run("query stream", run(testDefinition, ` - query MyQuery($id: ID!) { - droid(id: $id){ - name - friends @stream { - name - } - friends { - name - } - primaryFunction - favoriteEpisode - } - }`, - "MyQuery", - mustNotErr(), - mustStreaming(true), - mustSubscription(false), - )) - t.Run("query defer", run(testDefinition, ` - query MyQuery($id: ID!) { - droid(id: $id){ - name - friends { - name - } - friends { - name - } - primaryFunction - favoriteEpisode @defer - } - }`, - "MyQuery", - mustNotErr(), - mustStreaming(true), - mustSubscription(false), - )) - t.Run("query defer", run(testDefinition, ` - query MyQuery($id: ID!) { - droid(id: $id){ - name - friends { - name - } - friends { - name - } - primaryFunction - favoriteEpisode - } - } - query OtherDeferredQuery { - droid(id: $id){ - name - friends @stream { - name - } - } - }`, - "MyQuery", - mustNotErr(), - mustStreaming(false), - mustSubscription(false), - )) - t.Run("query defer different name", run(testDefinition, ` - query MyQuery($id: ID!) 
{ - droid(id: $id){ - name - friends { - name - } - friends { - name - } - primaryFunction - favoriteEpisode @defer - } - }`, - "OperationNameNotExists", - mustNotErr(), - mustStreaming(false), - mustSubscription(false), - )) - t.Run("subscription", run(testDefinition, ` - subscription RemainingJedis { - remainingJedis - }`, - "RemainingJedis", - mustNotErr(), - mustStreaming(false), - mustSubscription(true), - )) - t.Run("subscription with streaming", run(testDefinition, ` - subscription NewReviews { - newReviews { - id - stars @defer - } - }`, - "NewReviews", - mustNotErr(), - mustStreaming(true), - mustSubscription(true), - )) - t.Run("subscription name not exists", run(testDefinition, ` - subscription RemainingJedis { - remainingJedis - }`, - "OperationNameNotExists", - mustNotErr(), - mustStreaming(false), - mustSubscription(false), - )) -} diff --git a/v2/pkg/engine/plan/configuration.go b/v2/pkg/engine/plan/configuration.go index eebd9df352..489ebecc88 100644 --- a/v2/pkg/engine/plan/configuration.go +++ b/v2/pkg/engine/plan/configuration.go @@ -35,6 +35,10 @@ type Configuration struct { // It requires DisableIncludeInfo set to false. DisableIncludeFieldDependencies bool + // DisableCalculateFieldDependencies controls whether the planner calculates + // field dependencies at all. + DisableCalculateFieldDependencies bool + // BuildFetchReasons allows generating the FetchReasons structure for all the fields. // It may be enabled by some other components of the engine. // It requires DisableIncludeInfo and DisableIncludeFieldDependencies set to false. 
diff --git a/v2/pkg/engine/plan/datasource_configuration.go b/v2/pkg/engine/plan/datasource_configuration.go index 0601e3abfb..1ad0b1d8d9 100644 --- a/v2/pkg/engine/plan/datasource_configuration.go +++ b/v2/pkg/engine/plan/datasource_configuration.go @@ -360,7 +360,6 @@ type DataSourcePlannerConfiguration struct { PathType PlannerPathType IsNested bool Options plannerConfigurationOptions - FetchID int } type PlannerPathType int @@ -436,6 +435,7 @@ type DataSourcePlanningBehavior struct { // } // When true expected response will be { "rootField": ..., "alias": ... } // When false expected response will be { "rootField": ..., "original": ... } + // Deprecated: has no effect anymore OverrideFieldPathFromAlias bool // AllowPlanningTypeName set to true will allow the planner to plan __typename fields. diff --git a/v2/pkg/engine/plan/datasource_filter_collect_nodes_visitor.go b/v2/pkg/engine/plan/datasource_filter_collect_nodes_visitor.go index 82f05fddea..9c0a0ce24b 100644 --- a/v2/pkg/engine/plan/datasource_filter_collect_nodes_visitor.go +++ b/v2/pkg/engine/plan/datasource_filter_collect_nodes_visitor.go @@ -8,6 +8,7 @@ import ( "github.com/wundergraph/graphql-go-tools/v2/pkg/ast" "github.com/wundergraph/graphql-go-tools/v2/pkg/astvisitor" + "github.com/wundergraph/graphql-go-tools/v2/pkg/lexer/literal" "github.com/wundergraph/graphql-go-tools/v2/pkg/operationreport" ) @@ -521,6 +522,7 @@ func (f *collectNodesDSVisitor) EnterField(fieldRef int, itemIds []int, treeNode IsLeaf: isLeaf, isTypeName: info.isTypeName, treeNodeId: treeNodeId, + deferInfo: info.deferInfo, } f.localSuggestions = append(f.localSuggestions, &node) @@ -575,6 +577,7 @@ type fieldInfo struct { possibleTypeNames []string currentPathWithoutFragments string enclosingTypeDefinition ast.Node + deferInfo *DeferInfo } func (f *treeBuilderVisitor) collectFieldInfo(fieldRef int) { @@ -611,5 +614,32 @@ func (f *treeBuilderVisitor) collectFieldInfo(fieldRef int) { currentPathWithoutFragments: 
currentPathWithoutFragments, isTypeName: isTypeName, enclosingTypeDefinition: f.walker.EnclosingTypeDefinition, + deferInfo: f.deferInfo(fieldRef), } } + +func (f *treeBuilderVisitor) deferInfo(fieldRef int) *DeferInfo { + deferDirectiveRef, exists := f.operation.Fields[fieldRef].Directives.HasDirectiveByNameBytes(f.operation, literal.DEFER_INTERNAL) + if !exists { + return nil + } + + info := &DeferInfo{} + + idValue, idExists := f.operation.DirectiveArgumentValueByName(deferDirectiveRef, []byte("id")) + if idExists && idValue.Kind == ast.ValueKindInteger { + info.ID = int(f.operation.IntValueAsInt(idValue.Ref)) + } + + parentIdValue, exists := f.operation.DirectiveArgumentValueByName(deferDirectiveRef, []byte("parentDeferId")) + if exists && parentIdValue.Kind == ast.ValueKindInteger { + info.ParentID = int(f.operation.IntValueAsInt(parentIdValue.Ref)) + } + + labelValue, exists := f.operation.DirectiveArgumentValueByName(deferDirectiveRef, []byte("label")) + if exists { + info.Label = f.operation.StringValueContentString(labelValue.Ref) + } + + return info +} diff --git a/v2/pkg/engine/plan/datasource_filter_node_suggestions.go b/v2/pkg/engine/plan/datasource_filter_node_suggestions.go index 71ea3ebfc8..ffc1d22576 100644 --- a/v2/pkg/engine/plan/datasource_filter_node_suggestions.go +++ b/v2/pkg/engine/plan/datasource_filter_node_suggestions.go @@ -4,6 +4,7 @@ import ( "encoding/json" "fmt" "iter" + "slices" "github.com/kingledion/go-tools/tree" "github.com/phf/go-queue/queue" @@ -37,9 +38,30 @@ type NodeSuggestion struct { treeNodeId uint possibleTypeNames []string + deferInfo *DeferInfo + deferParentPath bool + deferIDs []int + requiresKey *SourceConnection } +type DeferInfo struct { + ID int + Label string + ParentID int +} + +func (d *DeferInfo) Equals(o *DeferInfo) bool { + if d == nil && o == nil { + return true + } + if d == nil || o == nil { + return false + } + + return d.ID == o.ID && d.Label == o.Label && d.ParentID == o.ParentID +} + func (n 
*NodeSuggestion) treeNodeID() uint { return TreeNodeID(n.FieldRef) } @@ -138,6 +160,100 @@ func NewNodeSuggestionsWithSize(size int) *NodeSuggestions { } } +func (f *NodeSuggestions) ProcessDefer(fieldRequirementsConfigs map[fieldIndexKey][]FederationFieldConfiguration) { + for i := range f.items { + if !f.items[i].Selected { + continue + } + + if f.items[i].deferInfo == nil { + continue + } + + f.propagateDeferParentsUpToRootNode(i, fieldRequirementsConfigs) + } +} + +func (f *NodeSuggestions) propagateDeferParentsUpToRootNode(i int, fieldRequirementsConfigs map[fieldIndexKey][]FederationFieldConfiguration) { + // if the item is a root node and requires a key we are already able to jump from here, + // so we skip propagating defer id + + hasKeyDependency := false + hasRequiresKey := f.items[i].requiresKey != nil + + // when the deferred field is on the entity and the parent field is on the same datasource + // we won't have hasRequiresKey set. + // but in case this field has a @requires directive it will be resolved by an entity call, + // and it will have a requires key configuration + if !hasRequiresKey && fieldRequirementsConfigs != nil { + requirements, ok := fieldRequirementsConfigs[fieldIndexKey{fieldRef: f.items[i].FieldRef, dsHash: f.items[i].DataSourceHash}] + if ok { + for _, r := range requirements { + if r.FieldName == "" { + hasKeyDependency = true + } + } + } + } + + if f.items[i].IsRootNode && hasRequiresKey || hasKeyDependency { + return + } + + parentIndexesToAddDeferID := make([]int, 0, 2) + current := i + for { + treeNode := f.treeNode(current) + parentNodeIndexes := treeNode.GetParent().GetData() + + parentIdToUpdate := -1 + for _, parentIdx := range parentNodeIndexes { + if f.items[parentIdx].DataSourceHash != f.items[current].DataSourceHash { + continue + } + + if f.items[parentIdx].deferInfo != nil && f.items[parentIdx].deferInfo.ID == f.items[i].deferInfo.ID { + // if parent item is in the same defer - + // we should not mark it as a defer parent,
+ // because defer parents are planned twice - in a deferred planner and a regular one + break + } + + if slices.Contains(f.items[parentIdx].deferIDs, f.items[i].deferInfo.ID) { + // no need to update - already contains this defer id + break + } else { + parentIdToUpdate = parentIdx + } + } + + if parentIdToUpdate == -1 { + // could happen if we haven't set it + // because it already contains this defer id + break + } + + parentIndexesToAddDeferID = append(parentIndexesToAddDeferID, parentIdToUpdate) + + // if we have found a root node, and it requires a key - we have found the root node from which we could branch out. + // if the node is a root node, but it doesn't require a key, we need to go up to the root query node, + // because it is an entity node within the query started from the root query node + if f.items[parentIdToUpdate].IsRootNode && f.items[parentIdToUpdate].requiresKey != nil { + break + } + + current = parentIdToUpdate + } + + for _, parentIdx := range parentIndexesToAddDeferID { + f.items[parentIdx].deferParentPath = true + + if !slices.Contains(f.items[parentIdx].deferIDs, f.items[i].deferInfo.ID) { + f.items[parentIdx].deferIDs = append(f.items[parentIdx].deferIDs, f.items[i].deferInfo.ID) + } + } +} + func (f *NodeSuggestions) AddItems(items ...*NodeSuggestion) { f.items = append(f.items, items...) f.populateHasSuggestions() diff --git a/v2/pkg/engine/plan/node_selection_builder.go b/v2/pkg/engine/plan/node_selection_builder.go index b60363c289..4bfb71dfd5 100644 --- a/v2/pkg/engine/plan/node_selection_builder.go +++ b/v2/pkg/engine/plan/node_selection_builder.go @@ -121,7 +121,7 @@ func (p *NodeSelectionBuilder) SelectNodes(operation, definition *ast.Document, } if p.config.Debug.PrintOperationTransformations { - debugMessage("Selected nodes on run #1 for operation:") + debugMessage("SelectNodes. 
on run #1 operation:") p.printOperation(operation) } @@ -147,7 +147,7 @@ func (p *NodeSelectionBuilder) SelectNodes(operation, definition *ast.Document, } if p.config.Debug.PrintOperationTransformations || p.config.Debug.PrintNodeSuggestions { - debugMessage(fmt.Sprintf("Selected nodes on additional run #%d.", i+1)) + debugMessage(fmt.Sprintf("SelectNodes. on run #%d.", i+1)) } if p.config.Debug.PrintNodeSuggestions { @@ -194,6 +194,8 @@ func (p *NodeSelectionBuilder) SelectNodes(operation, definition *ast.Document, } } + p.nodeSelectionsVisitor.nodeSuggestions.ProcessDefer(p.nodeSelectionsVisitor.fieldRequirementsConfigs) + return &NodeSelectionResult{ dataSources: p.nodeSelectionsVisitor.dataSources, nodeSuggestions: p.nodeSelectionsVisitor.nodeSuggestions, diff --git a/v2/pkg/engine/plan/node_selection_visitor.go b/v2/pkg/engine/plan/node_selection_visitor.go index db8403cd3c..535717b67f 100644 --- a/v2/pkg/engine/plan/node_selection_visitor.go +++ b/v2/pkg/engine/plan/node_selection_visitor.go @@ -1,6 +1,7 @@ package plan import ( + "bytes" "fmt" "slices" @@ -53,10 +54,13 @@ type nodeSelectionVisitor struct { newFieldRefs map[int]struct{} // newFieldRefs is a set of field refs which were added by the visitor or was modified by a rewrite } +func (c *nodeSelectionVisitor) addNewSkipFieldRefs(fieldRefs ...int) { + c.addSkipFieldRefs(fieldRefs...) + c.addNewFieldRefs(fieldRefs...) +} + func (c *nodeSelectionVisitor) addSkipFieldRefs(fieldRefs ...int) { c.skipFieldsRefs = append(c.skipFieldsRefs, fieldRefs...) - - c.addNewFieldRefs(fieldRefs...) 
} func (c *nodeSelectionVisitor) addNewFieldRefs(fieldRefs ...int) { @@ -75,9 +79,14 @@ type fieldIndexKey struct { } // selectionSetPendingRequirements - is a wrapper to been able to have predictable order of keyRequirements but at the same time deduplicate keyRequirements +type pendingKeyRequirementExistsKey struct { + dsHash DSHash + deferID int +} + type pendingKeyRequirements struct { - existsTracker map[DSHash]struct{} // existsTracker allows us to not add duplicated keyRequirements - requirementConfigs []keyRequirements // requirementConfigs is a list of keyRequirements which should be added to the selection set + existsTracker map[pendingKeyRequirementExistsKey]struct{} // existsTracker allows us to not add duplicated keyRequirements + requirementConfigs []keyRequirements // requirementConfigs is a list of keyRequirements which should be added to the selection set } // keyRequirements is a mapping between requestedByPlannerID or requestedByFieldRef, which requested required fields, @@ -89,6 +98,8 @@ type keyRequirements struct { sc SourceConnection requestedByFieldRefs []int typeName string + deferInfo *DeferInfo + parentFieldDeferID int } type fieldRequirements struct { @@ -97,6 +108,8 @@ type fieldRequirements struct { selectionSet string requestedByFieldRefs []int isTypenameForEntityInterface bool + deferInfo *DeferInfo + parentFieldDeferID int } type pendingFieldRequirements struct { @@ -108,6 +121,7 @@ type pendingFieldRequirementExistsKey struct { dsHash DSHash selectionSet string isTypenameForEntityInterface bool + deferID int } func (c *nodeSelectionVisitor) currentSelectionSet() int { @@ -209,6 +223,17 @@ func (c *nodeSelectionVisitor) EnterField(fieldRef int) { c.handleEnterField(fieldRef, false) } +type fieldRequirementsContext struct { + fieldRef int + parentPath string + typeName string + fieldName string + currentPath string + dsConfig DataSource + deferInfo *DeferInfo + parentFieldDeferID int +} + func (c *nodeSelectionVisitor) 
handleEnterField(fieldRef int, handleRequires bool) { root := c.walker.Ancestors[0] if root.Kind != ast.NodeKindOperationDefinition { @@ -234,50 +259,84 @@ func (c *nodeSelectionVisitor) handleEnterField(fieldRef int, handleRequires boo c.walker.StopWithInternalErr(fmt.Errorf("do not have a datasource for a field suggestion for field %s at path %s", fieldName, currentPath)) return } - ds := c.dataSources[dsIdx] + + fieldCtx := fieldRequirementsContext{ + fieldRef: fieldRef, + parentPath: parentPath, + typeName: typeName, + fieldName: fieldName, + currentPath: currentPath, + dsConfig: c.dataSources[dsIdx], + deferInfo: suggestion.deferInfo, + parentFieldDeferID: c.wrappingFieldDeferID(), + } if handleRequires { // check if the field has @requires directive - c.handleFieldRequiredByRequires(fieldRef, parentPath, typeName, fieldName, currentPath, ds) + c.handleFieldRequiredByRequires(fieldCtx) // skip to the next suggestion as we only handle requires here continue } if suggestion.requiresKey != nil { // add @key requirements for the field - c.handleFieldsRequiredByKey(fieldRef, parentPath, typeName, fieldName, currentPath, ds, *suggestion.requiresKey) + c.handleFieldsRequiredByKey(fieldCtx, *suggestion.requiresKey) } // check if field selections are abstract and needs rewrites - c.rewriteSelectionSetHavingAbstractFragments(fieldRef, ds) + c.rewriteSelectionSetHavingAbstractFragments(fieldRef, fieldCtx.dsConfig) + } +} + +// wrappingFieldDeferID walks the walker ancestors in reverse and returns the "id" argument of the +// @__defer_internal directive on the nearest wrapping field, or 0 if that field has no such directive. 
+func (c *nodeSelectionVisitor) wrappingFieldDeferID() int { + for i := len(c.walker.Ancestors) - 1; i >= 0; i-- { + ancestor := c.walker.Ancestors[i] + if ancestor.Kind != ast.NodeKindField { + continue + } + id, exists := c.operation.FieldInternalDeferID(ancestor.Ref) + if !exists { + return 0 + } + return id } + return 0 } func (c *nodeSelectionVisitor) LeaveField(ref int) { + // "___typename" is an internal typename placeholder + // added by astnormalization.directiveIncludeSkip or astnormalization.deferEnsureTypename normalization rule + if bytes.Equal(c.operation.FieldAliasOrNameBytes(ref), []byte("___typename")) { + // we should skip such typename as it was added as a placeholder to keep query valid + // when normalization removed all other selections from the selection set + c.addSkipFieldRefs(ref) + } } -func (c *nodeSelectionVisitor) handleFieldRequiredByRequires(fieldRef int, parentPath, typeName, fieldName, currentPath string, dsConfig DataSource) { - fieldKey := fieldIndexKey{fieldRef, dsConfig.Hash()} +func (c *nodeSelectionVisitor) handleFieldRequiredByRequires(fieldCtx fieldRequirementsContext) { + fieldKey := fieldIndexKey{fieldCtx.fieldRef, fieldCtx.dsConfig.Hash()} _, visited := c.visitedFieldsRequiresChecks[fieldKey] if visited { return } c.visitedFieldsRequiresChecks[fieldKey] = struct{}{} - if fieldName == typeNameField { + if fieldCtx.fieldName == typeNameField { // the __typename field could not have @requires directive return } - requiresConfiguration, exists := dsConfig.RequiredFieldsByRequires(typeName, fieldName) + requiresConfiguration, exists := fieldCtx.dsConfig.RequiredFieldsByRequires(fieldCtx.typeName, fieldCtx.fieldName) if !exists { - for _, io := range dsConfig.FederationConfiguration().InterfaceObjects { - if slices.Contains(io.ConcreteTypeNames, typeName) { + for _, io := range fieldCtx.dsConfig.FederationConfiguration().InterfaceObjects { + if slices.Contains(io.ConcreteTypeNames, fieldCtx.typeName) { // we should check if we
have a @requires configuration for the interface object - requiresConfiguration, exists = dsConfig.RequiredFieldsByRequires(io.InterfaceTypeName, fieldName) + requiresConfiguration, exists = fieldCtx.dsConfig.RequiredFieldsByRequires(io.InterfaceTypeName, fieldCtx.fieldName) if exists { - requiresConfiguration.TypeName = typeName + requiresConfiguration.TypeName = fieldCtx.typeName break } } @@ -291,17 +350,17 @@ func (c *nodeSelectionVisitor) handleFieldRequiredByRequires(fieldRef int, paren // check if the required fields are already provided input := areRequiredFieldsProvidedInput{ - typeName: typeName, + typeName: fieldCtx.typeName, requiredFields: requiresConfiguration.SelectionSet, definition: c.definition, - dataSource: dsConfig, - providedFields: c.nodeSuggestions.providedFields[dsConfig.Hash()], - parentPath: parentPath, + dataSource: fieldCtx.dsConfig, + providedFields: c.nodeSuggestions.providedFields[fieldCtx.dsConfig.Hash()], + parentPath: fieldCtx.parentPath, } provided, report := areRequiredFieldsProvided(input) if report.HasErrors() { - c.walker.StopWithInternalErr(fmt.Errorf("failed to check if required fields are provided for field %s at path %s: %w", fieldName, currentPath, report)) + c.walker.StopWithInternalErr(fmt.Errorf("failed to check if required fields are provided for field %s at path %s: %w", fieldCtx.fieldName, fieldCtx.currentPath, report)) return } @@ -313,19 +372,19 @@ func (c *nodeSelectionVisitor) handleFieldRequiredByRequires(fieldRef int, paren // we should plan to add required fields for the field // they will be added in the on LeaveSelectionSet callback for the current selection set, // and the current field ref will be added to the fieldDependsOn map - c.addPendingFieldRequirements(fieldRef, dsConfig.Hash(), requiresConfiguration, currentPath, false) - c.handleKeyRequirementsForBackJumpOnSameDataSource(fieldRef, dsConfig, typeName, parentPath) + c.addPendingFieldRequirements(fieldCtx, requiresConfiguration, false) + 
c.handleKeyRequirementsForBackJumpOnSameDataSource(fieldCtx) } -func (c *nodeSelectionVisitor) handleFieldsRequiredByKey(fieldRef int, parentPath, typeName, fieldName, currentPath string, dsConfig DataSource, sc SourceConnection) { - fieldKey := fieldIndexKey{fieldRef, dsConfig.Hash()} +func (c *nodeSelectionVisitor) handleFieldsRequiredByKey(fieldCtx fieldRequirementsContext, sc SourceConnection) { + fieldKey := fieldIndexKey{fieldCtx.fieldRef, fieldCtx.dsConfig.Hash()} _, visited := c.visitedFieldsKeyChecks[fieldKey] if visited { return } c.visitedFieldsKeyChecks[fieldKey] = struct{}{} - selectedParentsDSHashes := c.getSelectedParentsDSHashes(fieldRef) + selectedParentsDSHashes := c.getSelectedParentsDSHashes(fieldCtx.fieldRef) isParentHasInterfaceObject := slices.ContainsFunc(selectedParentsDSHashes, func(dsHash DSHash) bool { dsIdx := slices.IndexFunc(c.dataSources, func(d DataSource) bool { @@ -335,13 +394,13 @@ func (c *nodeSelectionVisitor) handleFieldsRequiredByKey(fieldRef int, parentPat return false } - return c.dataSources[dsIdx].HasInterfaceObject(typeName) + return c.dataSources[dsIdx].HasInterfaceObject(fieldCtx.typeName) }) - entityInterface := dsConfig.HasEntityInterface(typeName) - interfaceObject := dsConfig.HasInterfaceObject(typeName) + entityInterface := fieldCtx.dsConfig.HasEntityInterface(fieldCtx.typeName) + interfaceObject := fieldCtx.dsConfig.HasInterfaceObject(fieldCtx.typeName) - if fieldName == typeNameField && !entityInterface { + if fieldCtx.fieldName == typeNameField && !entityInterface { // the __typename field could not have @key directive // but for the interface object we have to plan it differently // e.g. 
we should get a __typename from a concrete type, not the interface object @@ -349,18 +408,16 @@ func (c *nodeSelectionVisitor) handleFieldsRequiredByKey(fieldRef int, parentPat return } - c.addPendingKeyRequirements(fieldRef, dsConfig.Hash(), sc, interfaceObject, parentPath, typeName) + c.addPendingKeyRequirements(fieldCtx, sc, interfaceObject) if isParentHasInterfaceObject && !interfaceObject && !entityInterface { c.addPendingFieldRequirements( - fieldRef, - dsConfig.Hash(), + fieldCtx, FederationFieldConfiguration{ - TypeName: typeName, - FieldName: fieldName, - SelectionSet: "__typename", + TypeName: fieldCtx.typeName, + FieldName: fieldCtx.fieldName, + SelectionSet: typeNameField, }, - currentPath, true, ) } @@ -384,19 +441,19 @@ func (c *nodeSelectionVisitor) getSelectedParentsDSHashes(fieldRef int) (out []D return out } -func (c *nodeSelectionVisitor) handleKeyRequirementsForBackJumpOnSameDataSource(fieldRef int, dsConfig DataSource, typeName string, parentPath string) { - selectedParentsDSHashes := c.getSelectedParentsDSHashes(fieldRef) +func (c *nodeSelectionVisitor) handleKeyRequirementsForBackJumpOnSameDataSource(fieldCtx fieldRequirementsContext) { + selectedParentsDSHashes := c.getSelectedParentsDSHashes(fieldCtx.fieldRef) // regularly keys are required only when the datasource hash differs from the parent datasource hash // one exception when the field has requires directive and planned on the same datasource as a parent // in this case we have to add a back jump on the same datasource to get required fields for the field resolver // but jump is possible only with keys, so we have to add any key for this datasource - sameAsParentDS := len(selectedParentsDSHashes) == 1 && selectedParentsDSHashes[0] == dsConfig.Hash() + sameAsParentDS := len(selectedParentsDSHashes) == 1 && selectedParentsDSHashes[0] == fieldCtx.dsConfig.Hash() if !sameAsParentDS { return } - keyConfigurations := dsConfig.RequiredFieldsByKey(typeName) + keyConfigurations := 
fieldCtx.dsConfig.RequiredFieldsByKey(fieldCtx.typeName) if len(keyConfigurations) == 0 { // required fields could be of zero length in case type is not entity @@ -405,8 +462,8 @@ func (c *nodeSelectionVisitor) handleKeyRequirementsForBackJumpOnSameDataSource( // When entity has disabled entity resolver, but we have field with requires directive on this entity // we should add key fields for the field with requires - to pass them into field resolver - keys := dsConfig.FederationConfiguration().Keys - keyConfigurations = keys.FilterByTypeAndResolvability(typeName, false) + keys := fieldCtx.dsConfig.FederationConfiguration().Keys + keyConfigurations = keys.FilterByTypeAndResolvability(fieldCtx.typeName, false) } if len(keyConfigurations) == 0 { @@ -419,18 +476,18 @@ func (c *nodeSelectionVisitor) handleKeyRequirementsForBackJumpOnSameDataSource( Type: SourceConnectionTypeDirect, Jumps: []KeyJump{ { - From: dsConfig.Hash(), - To: dsConfig.Hash(), + From: fieldCtx.dsConfig.Hash(), + To: fieldCtx.dsConfig.Hash(), SelectionSet: keyToUse.SelectionSet, - TypeName: typeName, + TypeName: fieldCtx.typeName, }, }, } - c.addPendingKeyRequirements(fieldRef, dsConfig.Hash(), sc, false, parentPath, typeName) + c.addPendingKeyRequirements(fieldCtx, sc, false) } -func (c *nodeSelectionVisitor) addPendingFieldRequirements(requestedByFieldRef int, dsHash DSHash, fieldConfiguration FederationFieldConfiguration, currentPath string, isTypenameForEntityInterface bool) { +func (c *nodeSelectionVisitor) addPendingFieldRequirements(fieldCtx fieldRequirementsContext, fieldConfiguration FederationFieldConfiguration, isTypenameForEntityInterface bool) { currentSelectionSet := c.currentSelectionSet() requirements, hasRequirements := c.pendingFieldRequirements[currentSelectionSet] @@ -440,25 +497,31 @@ func (c *nodeSelectionVisitor) addPendingFieldRequirements(requestedByFieldRef i } } - existsKey := pendingFieldRequirementExistsKey{dsHash, fieldConfiguration.SelectionSet, 
isTypenameForEntityInterface} + deferID := 0 + if fieldCtx.deferInfo != nil { + deferID = fieldCtx.deferInfo.ID + } + existsKey := pendingFieldRequirementExistsKey{fieldCtx.dsConfig.Hash(), fieldConfiguration.SelectionSet, isTypenameForEntityInterface, deferID} if _, exists := requirements.existsTracker[existsKey]; !exists { config := fieldRequirements{ - dsHash: dsHash, - path: currentPath, + dsHash: fieldCtx.dsConfig.Hash(), + path: fieldCtx.currentPath, selectionSet: fieldConfiguration.SelectionSet, - requestedByFieldRefs: []int{requestedByFieldRef}, + requestedByFieldRefs: []int{fieldCtx.fieldRef}, isTypenameForEntityInterface: isTypenameForEntityInterface, + deferInfo: fieldCtx.deferInfo, + parentFieldDeferID: fieldCtx.parentFieldDeferID, } requirements.existsTracker[existsKey] = struct{}{} requirements.requirementConfigs = append(requirements.requirementConfigs, config) } else { for i := range requirements.requirementConfigs { - if requirements.requirementConfigs[i].selectionSet == fieldConfiguration.SelectionSet && requirements.requirementConfigs[i].dsHash == dsHash && requirements.requirementConfigs[i].isTypenameForEntityInterface == isTypenameForEntityInterface { + if requirements.requirementConfigs[i].selectionSet == fieldConfiguration.SelectionSet && requirements.requirementConfigs[i].dsHash == fieldCtx.dsConfig.Hash() && requirements.requirementConfigs[i].isTypenameForEntityInterface == isTypenameForEntityInterface { if slices.IndexFunc(requirements.requirementConfigs[i].requestedByFieldRefs, func(fieldRef int) bool { - return fieldRef == requestedByFieldRef + return fieldRef == fieldCtx.fieldRef }) == -1 { - requirements.requirementConfigs[i].requestedByFieldRefs = append(requirements.requirementConfigs[i].requestedByFieldRefs, requestedByFieldRef) + requirements.requirementConfigs[i].requestedByFieldRefs = append(requirements.requirementConfigs[i].requestedByFieldRefs, fieldCtx.fieldRef) } break } @@ -468,35 +531,41 @@ func (c *nodeSelectionVisitor) 
addPendingFieldRequirements(requestedByFieldRef i c.pendingFieldRequirements[currentSelectionSet] = requirements } -func (c *nodeSelectionVisitor) addPendingKeyRequirements(requestedByFieldRef int, dsHash DSHash, sc SourceConnection, isInterfaceObject bool, parentPath string, typeName string) { +func (c *nodeSelectionVisitor) addPendingKeyRequirements(fieldCtx fieldRequirementsContext, sc SourceConnection, isInterfaceObject bool) { currentSelectionSet := c.currentSelectionSet() requirements, hasRequirements := c.pendingKeyRequirements[currentSelectionSet] if !hasRequirements { requirements = pendingKeyRequirements{ - existsTracker: make(map[DSHash]struct{}), + existsTracker: make(map[pendingKeyRequirementExistsKey]struct{}), } } - existsKey := dsHash + deferID := 0 + if fieldCtx.deferInfo != nil { + deferID = fieldCtx.deferInfo.ID + } + existsKey := pendingKeyRequirementExistsKey{dsHash: fieldCtx.dsConfig.Hash(), deferID: deferID} if _, exists := requirements.existsTracker[existsKey]; !exists { config := keyRequirements{ - targetDSHash: dsHash, - path: parentPath, + targetDSHash: fieldCtx.dsConfig.Hash(), + path: fieldCtx.parentPath, isInterfaceObject: isInterfaceObject, sc: sc, - requestedByFieldRefs: []int{requestedByFieldRef}, - typeName: typeName, + requestedByFieldRefs: []int{fieldCtx.fieldRef}, + typeName: fieldCtx.typeName, + deferInfo: fieldCtx.deferInfo, + parentFieldDeferID: fieldCtx.parentFieldDeferID, } requirements.existsTracker[existsKey] = struct{}{} requirements.requirementConfigs = append(requirements.requirementConfigs, config) } else { for i := range requirements.requirementConfigs { - if requirements.requirementConfigs[i].targetDSHash == dsHash { - if !slices.Contains(requirements.requirementConfigs[i].requestedByFieldRefs, requestedByFieldRef) { - requirements.requirementConfigs[i].requestedByFieldRefs = append(requirements.requirementConfigs[i].requestedByFieldRefs, requestedByFieldRef) + if requirements.requirementConfigs[i].targetDSHash == 
fieldCtx.dsConfig.Hash() && requirements.requirementConfigs[i].deferInfo.Equals(fieldCtx.deferInfo) { + if !slices.Contains(requirements.requirementConfigs[i].requestedByFieldRefs, fieldCtx.fieldRef) { + requirements.requirementConfigs[i].requestedByFieldRefs = append(requirements.requirementConfigs[i].requestedByFieldRefs, fieldCtx.fieldRef) } break } @@ -530,6 +599,8 @@ func (c *nodeSelectionVisitor) addFieldRequirementsToOperation(selectionSetRef i allowTypename: false, typeName: typeName, fieldSet: requirements.selectionSet, + deferInfo: requirements.deferInfo, + parentFieldDeferID: requirements.parentFieldDeferID, addTypenameInNestedSelections: c.addTypenameInNestedSelections, } @@ -540,7 +611,7 @@ func (c *nodeSelectionVisitor) addFieldRequirementsToOperation(selectionSetRef i } c.resetVisitedAbstractChecksForModifiedFields(addFieldsResult.modifiedFieldRefs) - c.addSkipFieldRefs(addFieldsResult.skipFieldRefs...) + c.addNewSkipFieldRefs(addFieldsResult.skipFieldRefs...) // add mapping for the field dependencies for _, requestedByFieldRef := range requirements.requestedByFieldRefs { fieldKey := fieldIndexKey{requestedByFieldRef, requirements.dsHash} @@ -618,6 +689,8 @@ func (c *nodeSelectionVisitor) addKeyRequirementsToOperation(selectionSetRef int allowTypename: allowTypeName, typeName: jump.TypeName, fieldSet: jump.SelectionSet, + deferInfo: pendingKey.deferInfo, + parentFieldDeferID: pendingKey.parentFieldDeferID, } addFieldsResult, report := addRequiredFields(input) @@ -630,7 +703,7 @@ func (c *nodeSelectionVisitor) addKeyRequirementsToOperation(selectionSetRef int // op, _ := astprinter.PrintStringIndentDebug(c.operation, " ") // fmt.Println("operation: ", op) - c.addSkipFieldRefs(addFieldsResult.skipFieldRefs...) + c.addNewSkipFieldRefs(addFieldsResult.skipFieldRefs...) 
// setup deps between key chain items if currentFieldRefs != nil && previousJump != nil { @@ -648,8 +721,9 @@ func (c *nodeSelectionVisitor) addKeyRequirementsToOperation(selectionSetRef int } c.fieldRequirementsConfigs[fieldKey] = append(c.fieldRequirementsConfigs[fieldKey], FederationFieldConfiguration{ - TypeName: previousJump.TypeName, - SelectionSet: previousJump.SelectionSet, + TypeName: previousJump.TypeName, + SelectionSet: previousJump.SelectionSet, + RemappedPaths: addFieldsResult.remappedPaths, }) for _, requiredFieldRef := range currentFieldRefs { c.fieldDependencyKind[fieldDependencyKey{field: requestedByFieldRef, dependsOn: requiredFieldRef}] = fieldDependencyKindKey @@ -675,8 +749,9 @@ func (c *nodeSelectionVisitor) addKeyRequirementsToOperation(selectionSetRef int } c.fieldRequirementsConfigs[fieldKey] = append(c.fieldRequirementsConfigs[fieldKey], FederationFieldConfiguration{ - TypeName: jump.TypeName, - SelectionSet: jump.SelectionSet, + TypeName: jump.TypeName, + SelectionSet: jump.SelectionSet, + RemappedPaths: addFieldsResult.remappedPaths, }) for _, requiredFieldRef := range currentFieldRefs { c.fieldDependencyKind[fieldDependencyKey{field: requestedByFieldRef, dependsOn: requiredFieldRef}] = fieldDependencyKindKey @@ -730,7 +805,7 @@ func (c *nodeSelectionVisitor) rewriteSelectionSetHavingAbstractFragments(fieldR return } - c.addSkipFieldRefs(rewriter.skipFieldRefs...) + c.addNewSkipFieldRefs(rewriter.skipFieldRefs...) 
c.hasNewFields = true c.rewrittenFieldRefs = append(c.rewrittenFieldRefs, fieldRef) c.persistedRewrittenFieldRefs[fieldRef] = struct{}{} diff --git a/v2/pkg/engine/plan/path_builder.go b/v2/pkg/engine/plan/path_builder.go index 5e456028a7..c5efde4ee0 100644 --- a/v2/pkg/engine/plan/path_builder.go +++ b/v2/pkg/engine/plan/path_builder.go @@ -144,7 +144,14 @@ func (p *PathBuilder) printRevisitInfo() { fmt.Println("\n Fields waiting for dependency:") for fieldKey, deps := range p.visitor.fieldDependsOn { - fmt.Printf(" Field ref: %d ds: %d depends on fields: %v\n", fieldKey.fieldRef, fieldKey.dsHash, deps) + fmt.Printf(" Field: %s ref: %d ds: %d depends on fields: ", p.visitor.operation.FieldAliasOrNameString(fieldKey.fieldRef), fieldKey.fieldRef, fieldKey.dsHash) + for i, depFieldRef := range deps { + fmt.Printf("field: %s ref: %d ", p.visitor.operation.FieldAliasOrNameString(depFieldRef), depFieldRef) + if len(deps) > 1 && i < len(deps)-1 { + fmt.Printf(", ") + } + } + fmt.Println() } } } diff --git a/v2/pkg/engine/plan/path_builder_visitor.go b/v2/pkg/engine/plan/path_builder_visitor.go index b66b41375a..d9f54e5091 100644 --- a/v2/pkg/engine/plan/path_builder_visitor.go +++ b/v2/pkg/engine/plan/path_builder_visitor.go @@ -51,6 +51,7 @@ type pathBuilderVisitor struct { fieldDependsOn map[fieldIndexKey][]int // fieldDependsOn is a map[fieldRef][]fieldRef - holds list of field refs which are required by a field ref, e.g. 
field should be planned only after required fields were planned fieldRequirementsConfigs map[fieldIndexKey][]FederationFieldConfiguration + processedFieldDeps map[fieldIndexKey][]int // processedFieldDeps tracks which plannerIds have already had dependencies wired for a given fieldIndexKey - pair of fieldRef and dsHash currentFetchPath []resolve.FetchItemPathElement currentResponsePath []string @@ -122,6 +123,21 @@ type objectFetchConfiguration struct { dependsOnFetchIDs []int rootFields []resolve.GraphCoordinate operationType ast.OperationType + deferID int +} + +type currentFieldInfo struct { + fieldRef int + typeName string + fieldName string + currentPath string + parentPath string + precedingParentPath string + suggestion *NodeSuggestion + ds DataSource + shareable bool + deferID int + deferField bool } func (c *pathBuilderVisitor) currentSelectionSetInfo() (info selectionSetTypeInfo, ok bool) { @@ -329,6 +345,7 @@ func (c *pathBuilderVisitor) EnterDocument(operation, definition *ast.Document) c.fieldDependenciesForPlanners = make(map[int][]int) c.fieldsPlannedOn = make(map[int][]int) + c.processedFieldDeps = make(map[fieldIndexKey][]int) } func (c *pathBuilderVisitor) LeaveDocument(operation, definition *ast.Document) { @@ -464,6 +481,17 @@ func (c *pathBuilderVisitor) EnterField(fieldRef int) { suggestions := c.nodeSuggestions.SuggestionsForPath(typeName, fieldName, currentPath) shareable := len(suggestions) > 1 + + field := ¤tFieldInfo{ + fieldRef: fieldRef, + typeName: typeName, + fieldName: fieldName, + currentPath: currentPath, + parentPath: parentPath, + precedingParentPath: precedingParentPath, + shareable: shareable, + } + for _, suggestion := range suggestions { if idx := slices.IndexFunc(c.skipDS, func(skip DSSkip) bool { return skip.DSHash == suggestion.DataSourceHash @@ -481,7 +509,7 @@ func (c *pathBuilderVisitor) EnterField(fieldRef int) { ds := c.dataSources[dsIdx] if !c.couldPlanField(fieldRef, ds.Hash()) { - c.handleMissingPath(false, 
typeName, fieldName, currentPath, shareable) + c.handleMissingPath(false, field) /* if we could not plan the field, we should skip planning children on the same datasource @@ -519,7 +547,48 @@ func (c *pathBuilderVisitor) EnterField(fieldRef int) { continue } - c.handlePlanningField(fieldRef, typeName, fieldName, currentPath, parentPath, precedingParentPath, suggestion, ds, shareable) + field.ds = ds + field.suggestion = suggestion + + // the field itself was deferred, but it could also be a parent path for some other defer + hasDeferInfo := suggestion.deferInfo != nil + // the field may not be deferred itself, but be a parent of a deferred child node + isDeferParent := len(suggestion.deferIDs) > 0 + + // plan defer parent paths + if isDeferParent { + for _, deferID := range suggestion.deferIDs { + field.deferID = deferID + field.deferField = false + // defer parent path planning - should be planned as a deferred path + c.handlePlanningField(field) + } + } + + // plan deferred field + if hasDeferInfo { + field.deferID = suggestion.deferInfo.ID + field.deferField = true + // should be planned only as a deferred path + c.handlePlanningField(field) + } + + // normal field planning is handled if the field itself is not deferred + if !hasDeferInfo { + field.deferID = 0 + field.deferField = false + c.handlePlanningField(field) + } + } + + // Clean up fieldDependsOn entries that were fully processed during this EnterField call. + // We keep entries alive throughout the suggestions loop so couldPlanField can still read them, + // and delete them only after all planners for this fieldRef have been wired up.
+ for _, suggestion := range suggestions { + fieldKey := fieldIndexKey{fieldRef, suggestion.DataSourceHash} + if _, processed := c.processedFieldDeps[fieldKey]; processed { + delete(c.fieldDependsOn, fieldKey) + } } c.addArrayField(fieldRef, currentPath) @@ -547,11 +616,35 @@ func (c *pathBuilderVisitor) LeaveField(ref int) { }) } -func (c *pathBuilderVisitor) handlePlanningField(fieldRef int, typeName, fieldName, currentPath, parentPath, precedingParentPath string, suggestion *NodeSuggestion, ds DataSource, shareable bool) { - plannedOnPlannerIds := c.fieldsPlannedOn[fieldRef] +func (c *pathBuilderVisitor) haveChildFieldsToPlan(field *currentFieldInfo) bool { + nodeId := field.suggestion.treeNodeID() + + node, ok := c.nodeSuggestions.responseTree.Find(nodeId) + if !ok { + return false + } + + return slices.ContainsFunc(treeNodeChildren(node), func(child int) bool { + childNode := c.nodeSuggestions.items[child] + + if childNode.DataSourceHash != field.ds.Hash() || !childNode.Selected { + return false + } + + if field.deferID == 0 { + return childNode.deferInfo == nil + } + + isDeferParentPath := childNode.deferParentPath && slices.Contains(childNode.deferIDs, field.deferID) + return isDeferParentPath || (childNode.deferInfo != nil && childNode.deferInfo.ID == field.deferID) + }) +} + +func (c *pathBuilderVisitor) handlePlanningField(field *currentFieldInfo) { + plannedOnPlannerIds := c.fieldsPlannedOn[field.fieldRef] if slices.ContainsFunc(plannedOnPlannerIds, func(plannerIdx int) bool { - return c.planners[plannerIdx].DataSourceConfiguration().Hash() == ds.Hash() + return c.planners[plannerIdx].DataSourceConfiguration().Hash() == field.ds.Hash() && c.planners[plannerIdx].DeferID() == field.deferID }) { // when we have already planned the field on the same datasource as was suggested // we do not need to try to plan it again @@ -559,29 +652,35 @@ func (c *pathBuilderVisitor) handlePlanningField(fieldRef int, typeName, fieldNa return } - isMutationRoot := 
c.isMutationRoot(currentPath) + if !field.suggestion.IsLeaf && !c.haveChildFieldsToPlan(field) { + return + } + + isMutationRoot := c.isMutationRoot(field.currentPath) var ( plannerIdx int planned bool ) + // mutation root fields should always be planned on a new planner + // because mutations must be executed sequentially if isMutationRoot { - plannerIdx, planned = c.addNewPlanner(fieldRef, typeName, fieldName, currentPath, parentPath, isMutationRoot, ds) + plannerIdx, planned = c.addNewPlanner(field, isMutationRoot) } else { - plannerIdx, planned = c.planWithExistingPlanners(fieldRef, typeName, fieldName, currentPath, parentPath, precedingParentPath, suggestion) + plannerIdx, planned = c.planWithExistingPlanners(field) if !planned { - plannerIdx, planned = c.addNewPlanner(fieldRef, typeName, fieldName, currentPath, parentPath, isMutationRoot, ds) + plannerIdx, planned = c.addNewPlanner(field, isMutationRoot) } } if planned { - c.recordFieldPlannedOn(fieldRef, plannerIdx) - c.addFieldDependencies(fieldRef, typeName, fieldName, plannerIdx) - c.addRootField(fieldRef, plannerIdx) + c.recordFieldPlannedOn(field.fieldRef, plannerIdx) + c.addFieldDependencies(field, plannerIdx) + c.addRootField(field.fieldRef, plannerIdx) } - c.handleMissingPath(planned, typeName, fieldName, currentPath, shareable) + c.handleMissingPath(planned, field) } func (c *pathBuilderVisitor) couldPlanField(fieldRef int, dsHash DSHash) (ok bool) { @@ -635,31 +734,6 @@ func (c *pathBuilderVisitor) fieldIsChildNode(plannerIdx int) bool { return strings.ContainsAny(fieldPath, ".") } -// addPlannerDependencies adds dependencies between planners based on @key directive -// e.g. 
when we have a record in a map, that this fieldRef is a dependency for the planner id -// we will notify that planner about the dependency on thecurrentPlannerIdx where this field is landed -func (c *pathBuilderVisitor) addPlannerDependencies(fieldRef int, plannedOnPlannerId int) { - plannerIds, mappingExists := c.fieldDependenciesForPlanners[fieldRef] - if !mappingExists { - return - } - - for _, notifyPlannerIdx := range plannerIds { - fetchConfiguration := c.planners[notifyPlannerIdx].ObjectFetchConfiguration() - - notified := slices.Contains(fetchConfiguration.dependsOnFetchIDs, plannedOnPlannerId) - if !notified { - if notifyPlannerIdx == plannedOnPlannerId { - return - // c.walker.StopWithInternalErr(fmt.Errorf("wrong fetch dependencies planner %d depends on itself", notifyPlannerIdx)) - } - - fetchConfiguration.dependsOnFetchIDs = append(fetchConfiguration.dependsOnFetchIDs, plannedOnPlannerId) - slices.Sort(fetchConfiguration.dependsOnFetchIDs) - } - } -} - // recordFieldPlannedOn - records the planner id on which the field was planned func (c *pathBuilderVisitor) recordFieldPlannedOn(fieldRef int, plannerIdx int) { if !slices.Contains(c.fieldsPlannedOn[fieldRef], plannerIdx) { @@ -675,19 +749,23 @@ func (c *pathBuilderVisitor) hasFieldsWaitingForDependency() bool { // in case current field has @requires directive, and we were able to plan it - it means that all fields from requires selection set was planned before that. 
// So we need to notify planner of current fieldRef about dependencies on those other fields // we know where fields were planned, because we record planner id of each planned field -func (c *pathBuilderVisitor) addFieldDependencies(fieldRef int, typeName, fieldName string, currentPlannerIdx int) { +func (c *pathBuilderVisitor) addFieldDependencies(field *currentFieldInfo, currentPlannerIdx int) { dsHash := c.planners[currentPlannerIdx].DataSourceConfiguration().Hash() - fieldKey := fieldIndexKey{fieldRef, dsHash} + fieldKey := fieldIndexKey{field.fieldRef, dsHash} fieldRefs, mappingExists := c.fieldDependsOn[fieldKey] if !mappingExists { return } - delete(c.fieldDependsOn, fieldKey) + + if slices.Contains(c.processedFieldDeps[fieldKey], currentPlannerIdx) { + return + } + c.processedFieldDeps[fieldKey] = append(c.processedFieldDeps[fieldKey], currentPlannerIdx) requiresConfigurations, ok := c.fieldRequirementsConfigs[fieldKey] if !ok { - c.walker.StopWithInternalErr(fmt.Errorf("missing field requirements configuration for field %s.%s fieldRef %d", typeName, fieldName, fieldRef)) + c.walker.StopWithInternalErr(fmt.Errorf("missing field requirements configuration for field %s.%s fieldRef %d", field.typeName, field.fieldName, field.fieldRef)) } for _, requiresConfiguration := range requiresConfigurations { // add required fields to the current planner to pass it in the representation variables @@ -711,8 +789,12 @@ func (c *pathBuilderVisitor) addFieldDependencies(fieldRef int, typeName, fieldN notified := slices.Contains(fetchConfiguration.dependsOnFetchIDs, plannerIdx) if !notified { + fetchConfiguration.dependsOnFetchIDs = append(fetchConfiguration.dependsOnFetchIDs, plannerIdx) + // sort slices.Sort(fetchConfiguration.dependsOnFetchIDs) + // remove consecutive duplicates + fetchConfiguration.dependsOnFetchIDs = slices.Compact(fetchConfiguration.dependsOnFetchIDs) } } } @@ -747,32 +829,38 @@ func (c *pathBuilderVisitor) 
isPlannerDependenciesAllowsToPlanField(fieldRef int return true } -func (c *pathBuilderVisitor) planWithExistingPlanners(fieldRef int, typeName, fieldName, currentPath, parentPath, precedingParentPath string, suggestion *NodeSuggestion) (plannerIdx int, planned bool) { +func (c *pathBuilderVisitor) planWithExistingPlanners(field *currentFieldInfo) (plannerIdx int, planned bool) { for plannerIdx, plannerConfig := range c.planners { dsConfiguration := plannerConfig.DataSourceConfiguration() planningBehaviour := dsConfiguration.PlanningBehavior() currentPlannerDSHash := dsConfiguration.Hash() - hasSuggestion := suggestion != nil - if !hasSuggestion { + if field.suggestion.DataSourceHash != currentPlannerDSHash { + continue + } + + if plannerConfig.DeferID() != 0 && field.deferID == 0 { + // do not plan a non-deferred field on a deferred planner continue } - if suggestion.DataSourceHash != currentPlannerDSHash { + if field.deferID != 0 && plannerConfig.DeferID() != field.deferID { + // do not plan a deferred field on a planner with a different defer id, + // or on a non-deferred planner continue } - isProvided := suggestion.IsProvided - isRootNode := suggestion.IsRootNode + isProvided := field.suggestion.IsProvided + isRootNode := field.suggestion.IsRootNode isChildNode := !isRootNode - if c.secondaryRun && plannerConfig.HasPath(currentPath) { + if c.secondaryRun && plannerConfig.HasPath(field.currentPath) { // on the secondary run we need to process only new fields added by the first run return plannerIdx, true } dsHash := dsConfiguration.Hash() - fieldKey := fieldIndexKey{fieldRef, dsHash} + fieldKey := fieldIndexKey{field.fieldRef, dsHash} requiresConfigurations := c.fieldRequirementsConfigs[fieldKey] fieldHasRequiresDirective := slices.ContainsFunc(requiresConfigurations, func(config FederationFieldConfiguration) bool { return config.FieldName != "" @@ -782,28 +870,30 @@ func (c *pathBuilderVisitor) planWithExistingPlanners(fieldRef int, typeName, fi // we should not
plan fields with requires on the same planner as its dependencies, // because field with requires always will need an additional fetch before could be planned. // or the current planner provides dependencies for one of the requires dependency - if !c.isPlannerDependenciesAllowsToPlanField(fieldRef, plannerIdx) { + if !c.isPlannerDependenciesAllowsToPlanField(field.fieldRef, plannerIdx) { continue } } - if plannerConfig.HasPath(parentPath) || plannerConfig.HasPath(precedingParentPath) { - if pathAdded := c.addPlannerPathForTypename(plannerIdx, currentPath, parentPath, fieldRef, fieldName, typeName, planningBehaviour); pathAdded { + if plannerConfig.HasPath(field.parentPath) || plannerConfig.HasPath(field.precedingParentPath) { + if pathAdded := c.addPlannerPathForTypename(field, plannerIdx, planningBehaviour); pathAdded { return plannerIdx, true } if isProvided || (isRootNode && planningBehaviour.MergeAliasedRootNodes) || isChildNode { c.addPath(plannerIdx, pathConfiguration{ - parentPath: parentPath, - path: currentPath, + parentPath: field.parentPath, + path: field.currentPath, shouldWalkFields: true, - typeName: typeName, - fieldRef: fieldRef, + typeName: field.typeName, + fieldRef: field.fieldRef, fragmentRef: ast.InvalidRef, enclosingNode: c.walker.EnclosingTypeDefinition, dsHash: currentPlannerDSHash, isRootNode: isRootNode, pathType: PathTypeField, + deferID: field.deferID, + deferredField: field.deferField, }) return plannerIdx, true @@ -818,9 +908,9 @@ func (c *pathBuilderVisitor) isParentPathIsRootOperationPath(parentPath string) return parentPath == "query" || parentPath == "mutation" || parentPath == "subscription" } -func (c *pathBuilderVisitor) allowNewPlannerForTypenameField(fieldName string, typeName string, parentPath string, dsCfg DataSource) bool { - fedCfg := dsCfg.FederationConfiguration() - isEntityInterface := fedCfg.HasEntityInterface(typeName) +func (c *pathBuilderVisitor) allowNewPlannerForTypenameField(field *currentFieldInfo) bool { + 
fedCfg := field.ds.FederationConfiguration() + isEntityInterface := fedCfg.HasEntityInterface(field.typeName) if isEntityInterface { return true @@ -829,31 +919,33 @@ func (c *pathBuilderVisitor) allowNewPlannerForTypenameField(fieldName string, t // we should handle a new planner for a __typename // only when it is the first field on a query, // or we are on the entity interface object - return c.isParentPathIsRootOperationPath(parentPath) + return c.isParentPathIsRootOperationPath(field.parentPath) } -func (c *pathBuilderVisitor) addNewPlanner(fieldRef int, typeName, fieldName, currentPath, parentPath string, isMutationRoot bool, dsConfig DataSource) (plannerIdx int, planned bool) { - if !dsConfig.HasRootNode(typeName, fieldName) { - if fieldName != typeNameField { +func (c *pathBuilderVisitor) addNewPlanner(field *currentFieldInfo, isMutationRoot bool) (plannerIdx int, planned bool) { + if !field.ds.HasRootNode(field.typeName, field.fieldName) { + if field.fieldName != typeNameField { return -1, false } - if !c.allowNewPlannerForTypenameField(fieldName, typeName, parentPath, dsConfig) { + if !c.allowNewPlannerForTypenameField(field) { return -1, false } } currentPathConfiguration := pathConfiguration{ - parentPath: parentPath, - path: currentPath, + parentPath: field.parentPath, + path: field.currentPath, shouldWalkFields: true, - typeName: typeName, - fieldRef: fieldRef, + typeName: field.typeName, + fieldRef: field.fieldRef, fragmentRef: ast.InvalidRef, enclosingNode: c.walker.EnclosingTypeDefinition, - dsHash: dsConfig.Hash(), + dsHash: field.ds.Hash(), isRootNode: true, pathType: PathTypeField, + deferID: field.deferID, + deferredField: field.deferField, } paths := []pathConfiguration{ @@ -875,9 +967,9 @@ func (c *pathBuilderVisitor) addNewPlanner(fieldRef int, typeName, fieldName, cu // so we'd miss the selection sets and inline fragments in the root paths = append([]pathConfiguration{ { - path: parentPath, + path: field.parentPath, shouldWalkFields: false, 
- dsHash: dsConfig.Hash(), + dsHash: field.ds.Hash(), fieldRef: ast.InvalidRef, fragmentRef: fragmentRef, pathType: PathTypeFragment, @@ -893,9 +985,9 @@ func (c *pathBuilderVisitor) addNewPlanner(fieldRef int, typeName, fieldName, cu // this could happen when the parent is a fragment and we walking nested selection sets paths = append([]pathConfiguration{ { - path: parentPath, + path: field.parentPath, shouldWalkFields: true, - dsHash: dsConfig.Hash(), + dsHash: field.ds.Hash(), fieldRef: ast.InvalidRef, fragmentRef: fragmentRef, pathType: pathType, @@ -903,7 +995,7 @@ func (c *pathBuilderVisitor) addNewPlanner(fieldRef int, typeName, fieldName, cu }, paths...) } - plannerPath := parentPath + plannerPath := field.parentPath if isParentFragment { precedingFragmentPath := c.walker.Path[:len(c.walker.Path)-1].DotDelimitedString() @@ -913,7 +1005,7 @@ func (c *pathBuilderVisitor) addNewPlanner(fieldRef int, typeName, fieldName, cu { path: precedingFragmentPath, shouldWalkFields: false, - dsHash: dsConfig.Hash(), + dsHash: field.ds.Hash(), fieldRef: ast.InvalidRef, fragmentRef: ast.InvalidRef, pathType: PathTypeParent, @@ -925,7 +1017,7 @@ func (c *pathBuilderVisitor) addNewPlanner(fieldRef int, typeName, fieldName, cu plannerPath = precedingFragmentPath } - fieldDefinition, ok := c.walker.FieldDefinition(fieldRef) + fieldDefinition, ok := c.walker.FieldDefinition(field.fieldRef) if !ok { return -1, false } @@ -934,20 +1026,21 @@ func (c *pathBuilderVisitor) addNewPlanner(fieldRef int, typeName, fieldName, cu fetchID := len(c.planners) // the filter needs access to fieldRef to retrieve the field argument variable - c.fieldRef = fieldRef + c.fieldRef = field.fieldRef - isSubscription := c.isSubscriptionRoot(currentPath) + isSubscription := c.isSubscriptionRoot(field.currentPath) fetchConfiguration := &objectFetchConfiguration{ isSubscription: isSubscription, - fieldRef: fieldRef, + fieldRef: field.fieldRef, fieldDefinitionRef: fieldDefinition, fetchID: fetchID, + 
deferID: field.deferID, fetchItem: c.fetchItem(), - sourceID: dsConfig.Id(), - sourceName: dsConfig.Name(), - operationType: c.resolveRootFieldOperationType(typeName), - filter: c.resolveSubscriptionFilterCondition(typeName, fieldName), + sourceID: field.ds.Id(), + sourceName: field.ds.Name(), + operationType: c.resolveRootFieldOperationType(field.typeName), + filter: c.resolveSubscriptionFilterCondition(field.typeName, field.fieldName), } if isMutationRoot { @@ -966,7 +1059,7 @@ func (c *pathBuilderVisitor) addNewPlanner(fieldRef int, typeName, fieldName, cu paths, ) - plannerConfig := dsConfig.CreatePlannerConfiguration(c.logger, fetchConfiguration, plannerPathConfig, c.plannerConfiguration) + plannerConfig := field.ds.CreatePlannerConfiguration(c.logger, fetchConfiguration, plannerPathConfig, c.plannerConfiguration) c.planners = append(c.planners, plannerConfig) @@ -1205,8 +1298,8 @@ func (c *pathBuilderVisitor) resolveRootFieldOperationType(typeName string) ast. } // handleMissingPath - records missing path for the case when we don't yet have a planner for the field -func (c *pathBuilderVisitor) handleMissingPath(planned bool, typeName string, fieldName string, currentPath string, shareable bool) { - suggestions := c.nodeSuggestions.SuggestionsForPath(typeName, fieldName, currentPath) +func (c *pathBuilderVisitor) handleMissingPath(planned bool, field *currentFieldInfo) { + suggestions := c.nodeSuggestions.SuggestionsForPath(field.typeName, field.fieldName, field.currentPath) if len(suggestions) <= 1 { if planned { @@ -1215,9 +1308,9 @@ func (c *pathBuilderVisitor) handleMissingPath(planned bool, typeName string, fi } if c.plannerConfiguration.Debug.PrintPlanningPaths { - fmt.Println("Found potentially missing path", currentPath) + fmt.Println("Found potentially missing path", field.currentPath) } - c.potentiallyMissingPathTracker[currentPath] = struct{}{} + c.potentiallyMissingPathTracker[field.currentPath] = struct{}{} } allSuggestionsPlanned := true @@ 
-1228,7 +1321,7 @@ func (c *pathBuilderVisitor) handleMissingPath(planned bool, typeName string, fi if c.planners[i].DataSourceConfiguration().Hash() != suggestion.DataSourceHash { continue } - if c.planners[i].HasPath(currentPath) { + if c.planners[i].HasPath(field.currentPath) { hasPlannedSuggestion = true break } @@ -1247,32 +1340,35 @@ func (c *pathBuilderVisitor) handleMissingPath(planned bool, typeName string, fi // addPlannerPathForTypename adds a path for the __typename field. func (c *pathBuilderVisitor) addPlannerPathForTypename( - plannerIndex int, currentPath string, parentPath string, fieldRef int, fieldName string, typeName string, + field *currentFieldInfo, + plannerIndex int, planningBehaviour DataSourcePlanningBehavior, ) (pathAdded bool) { // Adding __typename should happen only if particular planner has parent path, // otherwise it will be added to all planners and will cause visiting of incorrect selection sets. - if fieldName != typeNameField { + if field.fieldName != typeNameField { return false } if !planningBehaviour.AllowPlanningTypeName { return false } - if c.planners[plannerIndex].HasPath(currentPath) { + if c.planners[plannerIndex].HasPath(field.currentPath) { // do not add a path for __typename if it already exists return true } c.addPath(plannerIndex, pathConfiguration{ - parentPath: parentPath, - path: currentPath, + parentPath: field.parentPath, + path: field.currentPath, shouldWalkFields: true, - typeName: typeName, - fieldRef: fieldRef, + typeName: field.typeName, + fieldRef: field.fieldRef, fragmentRef: ast.InvalidRef, dsHash: c.planners[plannerIndex].DataSourceConfiguration().Hash(), pathType: PathTypeField, + deferID: field.deferID, + deferredField: field.deferField, }) return true } diff --git a/v2/pkg/engine/plan/plan.go b/v2/pkg/engine/plan/plan.go index 8674f3a0a8..18763d04b1 100644 --- a/v2/pkg/engine/plan/plan.go +++ b/v2/pkg/engine/plan/plan.go @@ -9,6 +9,7 @@ type Kind int const ( SynchronousResponseKind Kind = iota + 1 
SubscriptionResponseKind + DeferResponsePlanKind ) type Plan interface { @@ -61,3 +62,25 @@ func (s *SubscriptionResponsePlan) GetCostCalculator() *CostCalculator { func (s *SubscriptionResponsePlan) SetCostCalculator(c *CostCalculator) { s.CostCalculator = c } + +type DeferResponsePlan struct { + Response *resolve.GraphQLDeferResponse + FlushInterval int64 + CostCalculator *CostCalculator +} + +func (d *DeferResponsePlan) PlanKind() Kind { + return DeferResponsePlanKind +} + +func (d *DeferResponsePlan) SetFlushInterval(interval int64) { + d.FlushInterval = interval +} + +func (d *DeferResponsePlan) GetCostCalculator() *CostCalculator { + return d.CostCalculator +} + +func (d *DeferResponsePlan) SetCostCalculator(c *CostCalculator) { + d.CostCalculator = c +} diff --git a/v2/pkg/engine/plan/planner_configuration.go b/v2/pkg/engine/plan/planner_configuration.go index 7bc5614d66..b335fcfdc7 100644 --- a/v2/pkg/engine/plan/planner_configuration.go +++ b/v2/pkg/engine/plan/planner_configuration.go @@ -28,6 +28,7 @@ type PlannerConfiguration interface { ObjectFetchConfiguration() *objectFetchConfiguration DataSourceConfiguration() DataSource + DeferID() int RequiredFields() *FederationFieldConfigurations @@ -42,7 +43,6 @@ func (p *plannerConfiguration[T]) Register(visitor *Visitor) error { ParentPath: p.parentPath, PathType: p.parentPathType, IsNested: p.IsNestedPlanner(), - FetchID: p.objectFetchConfiguration.fetchID, Options: p.options, } @@ -62,6 +62,10 @@ func (p *plannerConfiguration[T]) ObjectFetchConfiguration() *objectFetchConfigu return p.objectFetchConfiguration } +func (p *plannerConfiguration[T]) DeferID() int { + return p.objectFetchConfiguration.deferID +} + func (p *plannerConfiguration[T]) DownstreamResponseFieldAlias(downstreamFieldRef int) (alias string, exists bool) { return p.planner.DownstreamResponseFieldAlias(downstreamFieldRef) } @@ -82,6 +86,7 @@ type PlannerPathConfiguration interface { IsNestedPlanner() bool HasPath(path string) bool 
HasPathWithFieldRef(fieldRef int) bool + PathWithFieldRef(fieldRef int) (*pathConfiguration, bool) HasFragmentPath(fragmentRef int) bool ShouldWalkFieldsOnPath(path string, typeName string) bool HasParent(parent string) bool @@ -92,7 +97,7 @@ func newPlannerPathsConfiguration(parentPath string, parentPathType PlannerPathT parentPath: parentPath, parentPathType: parentPathType, index: make(map[string][]int), - indexByFieldRef: make(map[int]struct{}), + indexByFieldRef: make(map[int]*pathConfiguration), fragmentPaths: make(map[pathConfiguration]struct{}), nonLeafPaths: make(map[string]struct{}), } @@ -112,7 +117,7 @@ type plannerPathsConfiguration struct { // indexes index map[string][]int - indexByFieldRef map[int]struct{} + indexByFieldRef map[int]*pathConfiguration fragmentPaths map[pathConfiguration]struct{} nonLeafPaths map[string]struct{} } @@ -146,7 +151,7 @@ func (p *plannerPathsConfiguration) AddPath(configuration pathConfiguration) { p.fragmentPaths[configuration] = struct{}{} } if configuration.pathType == PathTypeField { - p.indexByFieldRef[configuration.fieldRef] = struct{}{} + p.indexByFieldRef[configuration.fieldRef] = &configuration } } @@ -166,6 +171,11 @@ func (p *plannerPathsConfiguration) HasPathWithFieldRef(fieldRef int) bool { return ok } +func (p *plannerPathsConfiguration) PathWithFieldRef(fieldRef int) (*pathConfiguration, bool) { + path, ok := p.indexByFieldRef[fieldRef] + return path, ok +} + func (p *plannerPathsConfiguration) HasFragmentPath(fragmentRef int) bool { for path := range p.fragmentPaths { if path.fragmentRef == fragmentRef { @@ -237,6 +247,9 @@ type pathConfiguration struct { dsHash DSHash isRootNode bool pathType PathType + + deferredField bool + deferID int } type PathType int @@ -250,7 +263,7 @@ const ( func (p *pathConfiguration) String() string { switch p.pathType { case PathTypeField: - return fmt.Sprintf(`{"ds":%d,"path":"%s","fieldRef":%3d,"typeName":"%s","shouldWalkFields":%t,"isRootNode":%t,"pathType":"field"}`, 
p.dsHash, p.path, p.fieldRef, p.typeName, p.shouldWalkFields, p.isRootNode) + return fmt.Sprintf(`{"ds":%d,"path":"%s","fieldRef":%3d,"typeName":"%s","shouldWalkFields":%t,"isRootNode":%t,"pathType":"field","deferID":%d}`, p.dsHash, p.path, p.fieldRef, p.typeName, p.shouldWalkFields, p.isRootNode, p.deferID) case PathTypeFragment: return fmt.Sprintf(`{"ds":%d,"path":"%s","fragmentRef":%3d,"shouldWalkFields":%t,"pathType":"fragment"}`, p.dsHash, p.path, p.fragmentRef, p.shouldWalkFields) case PathTypeParent: diff --git a/v2/pkg/engine/plan/required_fields_visitor.go b/v2/pkg/engine/plan/required_fields_visitor.go index 2123605015..dced78d164 100644 --- a/v2/pkg/engine/plan/required_fields_visitor.go +++ b/v2/pkg/engine/plan/required_fields_visitor.go @@ -8,6 +8,7 @@ import ( "github.com/wundergraph/graphql-go-tools/v2/pkg/astimport" "github.com/wundergraph/graphql-go-tools/v2/pkg/astparser" "github.com/wundergraph/graphql-go-tools/v2/pkg/astvisitor" + "github.com/wundergraph/graphql-go-tools/v2/pkg/lexer/literal" "github.com/wundergraph/graphql-go-tools/v2/pkg/operationreport" ) @@ -58,12 +59,24 @@ type addRequiredFieldsConfiguration struct { allowTypename bool typeName string fieldSet string + deferInfo *DeferInfo + parentFieldDeferID int // addTypenameInNestedSelections controls forced addition of __typename to nested selection sets // used by "requires" keys, not only when fragments are present. addTypenameInNestedSelections bool } +// requiredFieldInfo holds pre-computed field properties shared across +// the deferred and non-deferred handling paths. 
+type requiredFieldInfo struct { + ref int + fieldName ast.ByteSlice + isTypeName bool + isLeaf bool + selectionSetRef int +} + type AddRequiredFieldsResult struct { skipFieldRefs []int requiredFieldRefs []int @@ -72,7 +85,7 @@ type AddRequiredFieldsResult struct { } func addRequiredFields(config *addRequiredFieldsConfiguration) (out AddRequiredFieldsResult, report *operationreport.Report) { - key, report := RequiredFieldsFragment(config.typeName, config.fieldSet, config.allowTypename) + parsedSelectionSet, report := RequiredFieldsFragment(config.typeName, config.fieldSet, config.allowTypename) if report.HasErrors() { return out, report } @@ -82,7 +95,7 @@ func addRequiredFields(config *addRequiredFieldsConfiguration) (out AddRequiredF visitor := &requiredFieldsVisitor{ Walker: &walker, config: config, - key: key, + key: parsedSelectionSet, importer: &astimport.Importer{}, skipFieldRefs: make([]int, 0, 2), requiredFieldRefs: make([]int, 0, 2), @@ -93,7 +106,7 @@ func addRequiredFields(config *addRequiredFieldsConfiguration) (out AddRequiredF walker.RegisterSelectionSetVisitor(visitor) walker.RegisterInlineFragmentVisitor(visitor) - walker.Walk(key, config.definition, report) + walker.Walk(parsedSelectionSet, config.definition, report) return AddRequiredFieldsResult{ skipFieldRefs: visitor.skipFieldRefs, @@ -156,25 +169,33 @@ func (v *requiredFieldsVisitor) EnterSelectionSet(ref int) { } operationNode := v.OperationNodes[len(v.OperationNodes)-1] - keySelectionSetHasFragments := len(v.key.SelectionSetInlineFragmentSelections(ref)) > 0 - if operationNode.Kind == ast.NodeKindField { + keySelectionSetHasFragments := len(v.key.SelectionSetInlineFragmentSelections(ref)) > 0 enforcedTypename := v.config.addTypenameInNestedSelections && !v.config.isKey + + needTypeName := keySelectionSetHasFragments || enforcedTypename + + // check if the operation already has a selection set for the given operation node if fieldSelectionSetRef, ok := 
v.config.operation.FieldSelectionSet(operationNode.Ref); ok { selectionSetNode := ast.Node{Kind: ast.NodeKindSelectionSet, Ref: fieldSelectionSetRef} - if (keySelectionSetHasFragments || enforcedTypename) && - !v.selectionSetHasTypeNameSelection(fieldSelectionSetRef) { + // if we need a typename and the operation selection set does not yet have a typename field selection + if needTypeName && !v.selectionSetHasTypeNameSelection(fieldSelectionSetRef) { v.addTypenameSelection(fieldSelectionSetRef) } v.OperationNodes = append(v.OperationNodes, selectionSetNode) return } + // if the key/requires fieldSet already contains the __typename, we do not need to add a duplicate, + // as it will be added in EnterField + keySelectionHasTypeName, _ := v.key.SelectionSetHasFieldSelectionWithExactName(ref, typeNameFieldBytes) + selectionSetNode := v.config.operation.AddSelectionSet() - if keySelectionSetHasFragments || enforcedTypename { + if needTypeName && !keySelectionHasTypeName { v.addTypenameSelection(selectionSetNode.Ref) } + // append a selection set for the field v.config.operation.Fields[operationNode.Ref].HasSelections = true v.config.operation.Fields[operationNode.Ref].SelectionSet = selectionSetNode.Ref v.OperationNodes = append(v.OperationNodes, selectionSetNode) @@ -182,12 +203,94 @@ func (v *requiredFieldsVisitor) EnterSelectionSet(ref int) { } // operation node kind InlineFragment + // append a selection set for the inline fragment selectionSetNode := v.config.operation.AddSelectionSet() v.config.operation.InlineFragments[operationNode.Ref].HasSelections = true v.config.operation.InlineFragments[operationNode.Ref].SelectionSet = selectionSetNode.Ref v.OperationNodes = append(v.OperationNodes, selectionSetNode) } +func (v *requiredFieldsVisitor) fieldHasDeferInternal(fieldRef int) bool { + _, exists := v.config.operation.Fields[fieldRef].Directives.HasDirectiveByNameBytes(v.config.operation, literal.DEFER_INTERNAL) + return exists +} + +// fieldDeferID returns the "id"
argument value of the @__defer_internal directive +// on fieldRef, or 0 if the directive is not present. +func (v *requiredFieldsVisitor) fieldDeferID(fieldRef int) int { + for _, dirRef := range v.config.operation.Fields[fieldRef].Directives.Refs { + if !bytes.Equal(v.config.operation.DirectiveNameBytes(dirRef), literal.DEFER_INTERNAL) { + continue // not the right directive + } + // found @__defer_internal — extract the "id" argument + val, ok := v.config.operation.DirectiveArgumentValueByName(dirRef, []byte("id")) + if !ok || val.Kind != ast.ValueKindInteger { + continue + } + return int(v.config.operation.IntValueAsInt(val.Ref)) + } + return 0 +} + +type deferAliasResult struct { + addAlias bool + includeDeferID bool + reuseFieldRef int // ast.InvalidRef when not reusing +} + +// effectiveDeferID returns the ID that will actually be written into the +// @__defer_internal directive for the field currently being processed. +// +// For requires fields (isKey=false), this is always deferInfo.ID. +// For key fields (isKey=true), applyDeferInternalDirective writes parentFieldDeferID +// into the directive rather than deferInfo.ID, so we must use the same value when +// looking up or naming aliases — otherwise planners with different deferInfo.ID but +// the same parentFieldDeferID would fail to recognise each other's aliases and +// create redundant copies (e.g. __internal_id, __internal_3_id, __internal_5_id all +// bearing the same @__defer_internal(id: 1)). +func (v *requiredFieldsVisitor) effectiveDeferID() int { + if v.config.isKey && v.config.parentFieldDeferID != 0 { + return v.config.parentFieldDeferID + } + return v.config.deferInfo.ID +} + +// resolveDeferredAlias decides how to alias a deferred required field. +// Precondition: v.config.deferInfo != nil && v.isRootLevel().
+// +// Decision table (using effectiveDeferID as the scope identifier): +// - __internal_{fieldName} absent → addAlias=true, includeDeferID=false +// - __internal_{fieldName} present, same scope → reuseFieldRef set +// - __internal_{fieldName} present, diff scope, __internal_{effectiveID}_{fieldName} absent → addAlias=true, includeDeferID=true +// - __internal_{fieldName} present, diff scope, __internal_{effectiveID}_{fieldName} present → reuseFieldRef set +func (v *requiredFieldsVisitor) resolveDeferredAlias(fieldName ast.ByteSlice, selectionSetRef int) deferAliasResult { + effectiveID := v.effectiveDeferID() + + // --- Level 1: look for __internal_{fieldName} --- + simpleAlias := append([]byte("__internal_"), fieldName...) + exists, existingRef := v.config.operation.SelectionSetHasFieldSelectionWithNameOrAliasBytes(selectionSetRef, simpleAlias) + if !exists { + // no alias yet — create the simple one + return deferAliasResult{addAlias: true, reuseFieldRef: ast.InvalidRef} + } + if v.fieldDeferID(existingRef) == effectiveID { + // simple alias already belongs to this defer scope — reuse it + return deferAliasResult{reuseFieldRef: existingRef} + } + + // --- Level 2: simple alias belongs to a different scope --- + // look for an existing conflict alias __internal_{effectiveID}_{fieldName} + conflictAlias := fmt.Appendf(nil, "__internal_%d_%s", effectiveID, fieldName) + conflictExists, conflictRef := v.config.operation.SelectionSetHasFieldSelectionWithNameOrAliasBytes(selectionSetRef, conflictAlias) + if conflictExists { + // conflict alias already exists for this scope — reuse it + return deferAliasResult{reuseFieldRef: conflictRef} + } + + // no existing conflict alias — create one with the effective ID included + return deferAliasResult{addAlias: true, includeDeferID: true, reuseFieldRef: ast.InvalidRef} +} + func (v *requiredFieldsVisitor) selectionSetHasTypeNameSelection(operationSelectionSetRef int) bool { exists, _ := 
v.config.operation.SelectionSetHasFieldSelectionWithExactName(operationSelectionSetRef, typeNameFieldBytes) return exists @@ -196,10 +299,12 @@ func (v *requiredFieldsVisitor) selectionSetHasTypeNameSelection(operationSelect // addTypenameSelection adds __typename selection to the operation when the key/requires selection set has inline fragments func (v *requiredFieldsVisitor) addTypenameSelection(operationSelectionSetRef int) { field := v.config.operation.AddField(ast.Field{ - Name: v.config.operation.Input.AppendInputString("__typename"), + Name: v.config.operation.Input.AppendInputString(typeNameField), }) v.skipFieldRefs = append(v.skipFieldRefs, field.Ref) + v.applyDeferInternalDirective(field.Ref) + v.config.operation.AddSelection(operationSelectionSetRef, ast.Selection{ Ref: field.Ref, Kind: ast.SelectionKindField, @@ -215,38 +320,102 @@ func (v *requiredFieldsVisitor) LeaveSelectionSet(ref int) { } func (v *requiredFieldsVisitor) EnterField(ref int) { + fieldName := v.key.FieldNameBytes(ref) + + fi := requiredFieldInfo{ + ref: ref, + fieldName: fieldName, + isTypeName: bytes.Equal(fieldName, typeNameFieldBytes), + isLeaf: !v.key.FieldHasSelections(ref), + selectionSetRef: v.OperationNodes[len(v.OperationNodes)-1].Ref, + } + if v.config.isKey { - v.handleKeyField(ref) + v.handleKeyField(fi) return } - v.handleRequiredField(ref) + v.handleRequiredField(fi) } -func (v *requiredFieldsVisitor) handleRequiredField(ref int) { - fieldName := v.key.FieldNameBytes(ref) - isTypeName := bytes.Equal(fieldName, typeNameFieldBytes) +func (v *requiredFieldsVisitor) isRootLevel() bool { + return len(v.OperationNodes) == 1 +} + +// handleRequiredField is the EnterField entry point for @requires fields. +// It builds requiredFieldInfo and dispatches to the deferred or non-deferred path. +func (v *requiredFieldsVisitor) handleRequiredField(fi requiredFieldInfo) { + // Unlike handleKeyField, __typename IS included in the deferred path here. 
+ // For interface objects (entity interfaces) the planner adds __typename as a + // @requires field (not a key field) so the owning subgraph can return the real + // concrete type. That __typename must travel through the same deferred path as + // the rest of the requires fields, so it must not be excluded from aliasing. + if v.config.deferInfo != nil && v.isRootLevel() { + v.handleRequiredRootFieldInDeferScope(fi) + return + } + v.handleRequiredFieldNonDeferred(fi) +} + +// handleRequiredRootFieldInDeferScope handles @requires fields in a deferred context. +// Uses resolveDeferredAlias to reuse or create __internal_{fieldName} aliases. +func (v *requiredFieldsVisitor) handleRequiredRootFieldInDeferScope(fi requiredFieldInfo) { + aliasResult := v.resolveDeferredAlias(fi.fieldName, fi.selectionSetRef) + + if aliasResult.reuseFieldRef != ast.InvalidRef { + // reuse the existing aliased field from the same defer scope + v.recordRemappedPathIfAliased(aliasResult.reuseFieldRef, fi.fieldName) + if !fi.isTypeName || v.config.isTypeNameForEntityInterface { + v.storeRequiredFieldRef(aliasResult.reuseFieldRef) + } + if !fi.isLeaf { + // push to OperationNodes so nested key fields are traversed, + // but do NOT add to modifiedFieldRefs — the selection set was already + // set up by the prior addRequiredFields call that created this alias + v.OperationNodes = append(v.OperationNodes, ast.Node{Kind: ast.NodeKindField, Ref: aliasResult.reuseFieldRef}) + } + return + } - // we need to add alias if operation has such field and: - // - the field is not a leaf - // - the field has arguments - isLeafField := !v.key.FieldHasSelections(ref) - needAlias := v.key.FieldHasArguments(ref) + fieldNode := v.addRequiredField(fi.ref, fi.fieldName, fi.selectionSetRef, aliasResult.addAlias, aliasResult.includeDeferID) + if !fi.isLeaf { + v.OperationNodes = append(v.OperationNodes, fieldNode) + } +} - selectionSetRef := v.OperationNodes[len(v.OperationNodes)-1].Ref - operationHasField, 
operationFieldRef := v.config.operation.SelectionSetHasFieldSelectionWithExactName(selectionSetRef, fieldName) +// handleRequiredFieldNonDeferred handles @requires fields outside a deferred context. +func (v *requiredFieldsVisitor) handleRequiredFieldNonDeferred(field requiredFieldInfo) { + operationHasField, operationFieldRef := v.config.operation.SelectionSetHasFieldSelectionWithExactName(field.selectionSetRef, field.fieldName) + + // @requires fields can carry arguments (e.g. price(currency: USD)). + // If the same field already appears in the query with different arguments, + // the two selections cannot share the same field node, so we must alias the + // required copy to avoid clobbering the user's selection. + // Key fields never have arguments, so this check is absent in handleKeyFieldNonDeferred. + needAlias := v.key.FieldHasArguments(field.ref) + + // if the existing field is deferred but we are adding requirements for a non-deferred scope, + // we must not reuse it — add an alias instead. + // When deferInfo is set (deferred context) and we're nested inside a reused deferred field, + // the nested field is already in the correct defer scope — reuse it directly. + if operationHasField && v.config.deferInfo == nil && v.fieldHasDeferInternal(operationFieldRef) { + needAlias = true + } if operationHasField && !needAlias { - // we are skipping adding __typename field to the required fields, - // because we want to depend only on the regular key fields, not the __typename field - // for entity interface we need real typename, so we use this dependency - if !isTypeName || v.config.isTypeNameForEntityInterface { + // Skip storing __typename as a required field — we only want to depend on + // the actual key fields, not __typename. + // Exception: for interface objects the planner adds __typename via @requires + // so we do need it as a real dependency in that case. 
+ // (handleKeyFieldNonDeferred always skips __typename because it handles __typename + // through the representation variables builder instead.) + if !field.isTypeName || v.config.isTypeNameForEntityInterface { v.storeRequiredFieldRef(operationFieldRef) } // do not add required field if the field is already present in the operation with the same name // but add an operation node from operation if the field has selections - if !v.config.operation.FieldHasSelections(operationFieldRef) { + if field.isLeaf { return } @@ -255,30 +424,90 @@ func (v *requiredFieldsVisitor) handleRequiredField(ref int) { return } - fieldNode := v.addRequiredField(ref, fieldName, selectionSetRef, operationHasField && needAlias) - if !isLeafField { + fieldNode := v.addRequiredField(field.ref, field.fieldName, field.selectionSetRef, operationHasField && needAlias, false) + if !field.isLeaf { v.OperationNodes = append(v.OperationNodes, fieldNode) } } -func (v *requiredFieldsVisitor) handleKeyField(ref int) { - fieldName := v.key.FieldNameBytes(ref) - isTypeName := bytes.Equal(fieldName, typeNameFieldBytes) - isLeafField := !v.key.FieldHasSelections(ref) - - selectionSetRef := v.OperationNodes[len(v.OperationNodes)-1].Ref - operationHasField, operationFieldRef := v.config.operation.SelectionSetHasFieldSelectionWithExactName(selectionSetRef, fieldName) - if operationHasField { - // we are skipping adding __typename field to the required fields, - // because we want to depend only on the regular key fields, not the __typename field - // for entity interface we need real typename, so we use this dependency - if !isTypeName { +// handleKeyField is the EnterField entry point for key fields. +// It builds requiredFieldInfo and dispatches to the deferred or non-deferred path. +func (v *requiredFieldsVisitor) handleKeyField(fi requiredFieldInfo) { + // Key fields must never alias __typename, even in a deferred context. 
+ // __typename is not part of the user-visible key field set; instead it is + // always injected by the representation variables builder with the static + // name "__typename". Aliasing it would break that builder. + // (handleRequiredField does NOT exclude __typename here because for + // interface objects __typename is fetched via @requires, not keys.) + if v.config.deferInfo != nil && v.isRootLevel() && !fi.isTypeName { + v.handleKeyRootFieldInDeferScope(fi) + return + } + v.handleKeyFieldNonDeferred(fi) +} + +// handleKeyRootFieldInDeferScope handles key fields in a deferred context. +// Key fields are added to the initial (non-deferred) selection set so they can be +// used as entity representation inputs. The first occurrence of a key field is +// always added unaliased; subsequent callers from different defer scopes reuse it. +// An alias is only needed when an unaliased field already exists but is scoped +// (has @__defer_internal) and therefore cannot be shared. +func (v *requiredFieldsVisitor) handleKeyRootFieldInDeferScope(field requiredFieldInfo) { + // check if an unaliased field exists — it may or may not be scoped + exists, existingRef := v.config.operation.SelectionSetHasFieldSelectionWithExactName(field.selectionSetRef, field.fieldName) + // 1.
field exists and unscoped + if exists && !v.fieldHasDeferInternal(existingRef) { + // field is unaliased AND unscoped — reuse directly, all defer scopes can share it + v.storeRequiredFieldRef(existingRef) + if !field.isLeaf { + v.modifiedFieldRefs = append(v.modifiedFieldRefs, existingRef) + v.OperationNodes = append(v.OperationNodes, ast.Node{Kind: ast.NodeKindField, Ref: existingRef}) + } + return + } + + aliasResult := v.resolveDeferredAlias(field.fieldName, field.selectionSetRef) + + if aliasResult.reuseFieldRef != ast.InvalidRef { + // reuse the existing aliased field from the same defer scope + v.recordRemappedPathIfAliased(aliasResult.reuseFieldRef, field.fieldName) + v.storeRequiredFieldRef(aliasResult.reuseFieldRef) + if !field.isLeaf { + v.OperationNodes = append(v.OperationNodes, ast.Node{Kind: ast.NodeKindField, Ref: aliasResult.reuseFieldRef}) + } + return + } + + // 2. If the field exists but is scoped to a different deferID, an alias is required so the new field doesn't collide with the scope of the existing one. + // 3. If no field exists yet — it could be added unaliased and unscoped, so any scope can reuse it later. + fieldNode := v.addRequiredField(field.ref, field.fieldName, field.selectionSetRef, exists, aliasResult.includeDeferID) + if !field.isLeaf { + v.OperationNodes = append(v.OperationNodes, fieldNode) + } +} + +// handleKeyFieldNonDeferred handles key fields outside a deferred context. +func (v *requiredFieldsVisitor) handleKeyFieldNonDeferred(field requiredFieldInfo) { + operationHasField, operationFieldRef := v.config.operation.SelectionSetHasFieldSelectionWithExactName(field.selectionSetRef, field.fieldName) + + // If the existing field has @__defer_internal it belongs to a specific defer scope; + // the non-deferred planner must not reuse it — add an alias instead.
+ existingFieldIsDeferred := operationHasField && v.config.deferInfo == nil && v.fieldHasDeferInternal(operationFieldRef) + + if operationHasField && !existingFieldIsDeferred { + // Skip storing __typename as a required field. + // Unlike handleRequiredFieldNonDeferred there is no isTypeNameForEntityInterface + // exception here: for interface objects the real __typename is fetched + // via @requires (handled by handleRequiredField), never as a key field. + // Key fields cannot have arguments, so there is no needAlias check here + // (unlike handleRequiredFieldNonDeferred). + if !field.isTypeName { v.storeRequiredFieldRef(operationFieldRef) } - // do not add required field if the field is already present in the operation with the same name + // do not add the required field if the field is already present in the operation with the same name // but add an operation node from operation if the field has selections - if isLeafField { + if field.isLeaf { return } @@ -287,8 +516,8 @@ func (v *requiredFieldsVisitor) handleKeyField(ref int) { return } - fieldNode := v.addRequiredField(ref, fieldName, selectionSetRef, false) - if !isLeafField { + fieldNode := v.addRequiredField(field.ref, field.fieldName, field.selectionSetRef, existingFieldIsDeferred, false) + if !field.isLeaf { v.OperationNodes = append(v.OperationNodes, fieldNode) } } @@ -303,16 +532,30 @@ func (v *requiredFieldsVisitor) storeRequiredFieldRef(fieldRef int) { v.requiredFieldRefs = append(v.requiredFieldRefs, fieldRef) } -func (v *requiredFieldsVisitor) addRequiredField(keyRef int, fieldName ast.ByteSlice, selectionSet int, addAlias bool) ast.Node { +// recordRemappedPathIfAliased records the path → alias mapping when reusing an +// existing aliased field. Each AddRequiredFields call gets a fresh v.mapping, +// so every planner that reuses an alias must record the mapping itself. 
+func (v *requiredFieldsVisitor) recordRemappedPathIfAliased(fieldRef int, fieldName ast.ByteSlice) { + if !v.config.operation.FieldAliasIsDefined(fieldRef) { + return + } + currentPath := v.Walker.Path.DotDelimitedString() + "." + string(fieldName) + v.mapping[currentPath] = string(v.config.operation.FieldAliasBytes(fieldRef)) +} + +func (v *requiredFieldsVisitor) addRequiredField(keyFieldRef int, fieldName ast.ByteSlice, selectionSet int, addAlias bool, includeDeferIDInAlias bool) ast.Node { field := ast.Field{ Name: v.config.operation.Input.AppendInputBytes(fieldName), SelectionSet: ast.InvalidRef, } if addAlias { - aliasName := bytes.NewBuffer([]byte("__internal_")) - aliasName.Write(fieldName) - fullAliasName := aliasName.Bytes() + var fullAliasName []byte + if includeDeferIDInAlias && v.config.deferInfo != nil { + fullAliasName = fmt.Appendf(nil, "__internal_%d_%s", v.effectiveDeferID(), fieldName) + } else { + fullAliasName = append([]byte("__internal_"), fieldName...) + } field.Alias = ast.Alias{ IsDefined: true, @@ -323,29 +566,57 @@ func (v *requiredFieldsVisitor) addRequiredField(keyRef int, fieldName ast.ByteS v.mapping[currentPath] = string(fullAliasName) } - addedField := v.config.operation.AddField(field) + addedFieldNode := v.config.operation.AddField(field) - if v.key.FieldHasArguments(keyRef) { - importedArgs := v.importer.ImportArguments(v.key.Fields[keyRef].Arguments.Refs, v.key, v.config.operation) + if v.key.FieldHasArguments(keyFieldRef) { + importedArgs := v.importer.ImportArguments(v.key.Fields[keyFieldRef].Arguments.Refs, v.key, v.config.operation) for _, arg := range importedArgs { - v.config.operation.AddArgumentToField(addedField.Ref, arg) + v.config.operation.AddArgumentToField(addedFieldNode.Ref, arg) } } selection := ast.Selection{ Kind: ast.SelectionKindField, - Ref: addedField.Ref, + Ref: addedFieldNode.Ref, } v.config.operation.AddSelection(selectionSet, selection) - v.skipFieldRefs = append(v.skipFieldRefs, addedField.Ref) + 
v.skipFieldRefs = append(v.skipFieldRefs, addedFieldNode.Ref)

 	// we are skipping adding __typename field to the required fields,
 	// because we want to depend only on the regular key fields, not the __typename field
 	if !bytes.Equal(fieldName, typeNameFieldBytes) ||
 		(bytes.Equal(fieldName, typeNameFieldBytes) && v.config.isTypeNameForEntityInterface) {
-		v.storeRequiredFieldRef(addedField.Ref)
+		v.storeRequiredFieldRef(addedFieldNode.Ref)
+	}
+
+	v.applyDeferInternalDirective(addedFieldNode.Ref)
+
+	return addedFieldNode
+}
+
+func (v *requiredFieldsVisitor) applyDeferInternalDirective(fieldRef int) {
+	if v.config.deferInfo == nil {
+		return
+	}
+
+	// when we are adding required fields from the @requires directive
+	if !v.config.isKey {
+		// required fields should land in the same scope as the current field,
+		// so that they are fetched in the same defer group rather than in the parent scope
+		v.config.operation.AddDeferInternalDirectiveToField(fieldRef, v.config.deferInfo.ID, v.config.deferInfo.Label, v.config.deferInfo.ParentID)
+		return
+	}
+
+	// when we are adding key fields and the parent field has a defer id
+	if v.config.parentFieldDeferID != 0 {
+		// for key fields, use parentFieldDeferID as the id:
+		// the key must be in the scope of the parent defer id, not deferred inside the same fragment,
+		// otherwise it cannot be planned properly
+		v.config.operation.AddDeferInternalDirectiveToField(fieldRef, v.config.parentFieldDeferID, "", 0)
 	}
-	return addedField
+	// if the parent field does not have a defer id,
+	// the key fields stay unscoped, like the parent field itself
 }
diff --git a/v2/pkg/engine/plan/required_fields_visitor_test.go b/v2/pkg/engine/plan/required_fields_visitor_test.go
index 0d305f4c6e..dd60f3e2dd 100644
--- a/v2/pkg/engine/plan/required_fields_visitor_test.go
+++ b/v2/pkg/engine/plan/required_fields_visitor_test.go
@@ -23,6 +23,8 @@ func TestAddRequiredFields(t *testing.T) {
 		isTypeNameForEntityInterface bool
 		selectionSetRef              int
enforceTypenameForRequired bool + deferInfo *DeferInfo + parentFieldDeferID int // output expectedOperation string @@ -484,6 +486,644 @@ func TestAddRequiredFields(t *testing.T) { expectedSkipFieldsCount: 8, // id, account, __typename, id, type, settings, __typename, theme expectedRequiredFieldsCount: 6, }, + { + name: "key with defer id - new field added as plain (no alias needed)", + definition: ` + type Query { user: User } + type User { id: ID! name: String! }`, + operation: `query { user { name } }`, + typeName: "User", + fieldSet: "id", + isKey: true, + deferInfo: &DeferInfo{ID: 1}, + expectedOperation: ` + query { + user { + name + id + } + }`, + expectedSkipFieldsCount: 1, + expectedRequiredFieldsCount: 1, + expectedRemappedPaths: map[string]string{}, + }, + { + name: "key with defer id - existing plain field is reused (no alias)", + definition: ` + type Query { user: User } + type User { id: ID! name: String! }`, + operation: `query { user { id name } }`, + typeName: "User", + fieldSet: "id", + isKey: true, + deferInfo: &DeferInfo{ID: 1}, + expectedOperation: ` + query { + user { + id + name + } + }`, + expectedSkipFieldsCount: 0, + expectedRequiredFieldsCount: 1, + expectedRemappedPaths: map[string]string{}, + }, + { + name: "requires with defer id - new field gets aliased", + definition: ` + type Query { user: User } + type User { id: ID! firstName: String! lastName: String! fullName: String! 
}`, + operation: `query { user { fullName } }`, + typeName: "User", + fieldSet: "firstName lastName", + isKey: false, + deferInfo: &DeferInfo{ID: 1}, + expectedOperation: ` + query { + user { + fullName + __internal_firstName: firstName @__defer_internal(id: 1) + __internal_lastName: lastName @__defer_internal(id: 1) + } + }`, + expectedSkipFieldsCount: 2, + expectedRequiredFieldsCount: 2, + expectedRemappedPaths: map[string]string{ + "User.firstName": "__internal_firstName", + "User.lastName": "__internal_lastName", + }, + }, + { + name: "requires with defer id - existing field still gets aliased", + definition: ` + type Query { user: User } + type User { id: ID! firstName: String! fullName: String! }`, + operation: `query { user { firstName fullName } }`, + typeName: "User", + fieldSet: "firstName", + isKey: false, + deferInfo: &DeferInfo{ID: 1}, + expectedOperation: ` + query { + user { + firstName + fullName + __internal_firstName: firstName @__defer_internal(id: 1) + } + }`, + expectedSkipFieldsCount: 1, + expectedRequiredFieldsCount: 1, + expectedRemappedPaths: map[string]string{"User.firstName": "__internal_firstName"}, + }, + { + name: "key with defer id - existing plain nested field is reused, leaf added inside", + definition: ` + type Query { user: User } + type User { id: ID! address: Address! } + type Address { street: String! city: String! 
}`, + operation: `query { user { address { city } } }`, + typeName: "User", + fieldSet: "address { street }", + isKey: true, + deferInfo: &DeferInfo{ID: 1}, + selectionSetRef: 1, + // existing plain address is reused; street is added into it + expectedOperation: ` + query { + user { + address { + city + street + } + } + }`, + expectedSkipFieldsCount: 1, // street + expectedRequiredFieldsCount: 2, // address (reused) + street + expectedModifiedFieldsCount: 1, // address selection set was modified + expectedRemappedPaths: map[string]string{}, + }, + { + name: "key with defer id and parentId - plain field added with directive", + definition: ` + type Query { user: User } + type User { id: ID! name: String! }`, + operation: `query { user { name } }`, + typeName: "User", + fieldSet: "id", + isKey: true, + deferInfo: &DeferInfo{ID: 2, ParentID: 2}, + parentFieldDeferID: 1, + expectedOperation: ` + query { + user { + name + id @__defer_internal(id: 1) + } + }`, + expectedSkipFieldsCount: 1, + expectedRequiredFieldsCount: 1, + expectedRemappedPaths: map[string]string{}, + }, + { + name: "requires with defer id and parentId - directive added with all fields", + definition: ` + type Query { user: User } + type User { id: ID! firstName: String! fullName: String! }`, + operation: `query { user { fullName } }`, + typeName: "User", + fieldSet: "firstName", + isKey: false, + deferInfo: &DeferInfo{ID: 2, Label: "myLabel", ParentID: 1}, + expectedOperation: ` + query { + user { + fullName + __internal_firstName: firstName @__defer_internal(id: 2, label: "myLabel", parentDeferId: 1) + } + }`, + expectedSkipFieldsCount: 1, + expectedRequiredFieldsCount: 1, + expectedRemappedPaths: map[string]string{"User.firstName": "__internal_firstName"}, + }, + { + name: "key with defer id and parentId - existing plain nested reused, leaf gets directive", + definition: ` + type Query { user: User } + type User { id: ID! address: Address! } + type Address { street: String! city: String! 
}`, + operation: `query { user { address { city } } }`, + typeName: "User", + fieldSet: "address { street }", + isKey: true, + deferInfo: &DeferInfo{ID: 2, ParentID: 1}, + parentFieldDeferID: 1, + selectionSetRef: 1, + // existing plain address reused; street added with @deferInternal + expectedOperation: ` + query { + user { + address { + city + street @__defer_internal(id: 1) + } + } + }`, + expectedSkipFieldsCount: 1, // street + expectedRequiredFieldsCount: 2, // address (reused) + street + expectedModifiedFieldsCount: 1, // address modified + expectedRemappedPaths: map[string]string{}, + }, + { + name: "requires with defer id and parentId - directive added to nested fields too", + definition: ` + type Query { user: User } + type User { id: ID! address: Address! fullAddress: String! } + type Address { street: String! city: String! }`, + operation: `query { user { fullAddress } }`, + typeName: "User", + fieldSet: "address { street }", + isKey: false, + deferInfo: &DeferInfo{ID: 2, ParentID: 1}, + expectedOperation: ` + query { + user { + fullAddress + __internal_address: address @__defer_internal(id: 2, parentDeferId: 1) { + street @__defer_internal(id: 2, parentDeferId: 1) + } + } + }`, + expectedSkipFieldsCount: 2, + expectedRequiredFieldsCount: 2, + expectedModifiedFieldsCount: 0, + expectedRemappedPaths: map[string]string{"User.address": "__internal_address"}, + }, + { + name: "key - existing field has defer_internal, non-deferred requirement gets aliased", + definition: ` + type Query { user: User } + type User { id: ID! name: String! 
}`, + operation: `query { user { id @__defer_internal(id: 1) name } }`, + typeName: "User", + fieldSet: "id", + isKey: true, + deferInfo: nil, + expectedOperation: ` + query { + user { + id @__defer_internal(id: 1) + name + __internal_id: id + } + }`, + expectedSkipFieldsCount: 1, + expectedRequiredFieldsCount: 1, + expectedRemappedPaths: map[string]string{"User.id": "__internal_id"}, + }, + { + name: "requires - existing field has defer_internal, non-deferred requirement gets aliased", + definition: ` + type Query { user: User } + type User { id: ID! firstName: String! fullName: String! }`, + operation: `query { user { firstName @__defer_internal(id: 1) fullName } }`, + typeName: "User", + fieldSet: "firstName", + isKey: false, + deferInfo: nil, + expectedOperation: ` + query { + user { + firstName @__defer_internal(id: 1) + fullName + __internal_firstName: firstName + } + }`, + expectedSkipFieldsCount: 1, + expectedRequiredFieldsCount: 1, + expectedRemappedPaths: map[string]string{"User.firstName": "__internal_firstName"}, + }, + { + name: "key - nested field has defer_internal, non-deferred requirement gets aliased", + definition: ` + type Query { user: User } + type User { id: ID! address: Address! } + type Address { street: String! city: String! }`, + operation: `query { user { address { street @__defer_internal(id: 1) city } } }`, + typeName: "User", + fieldSet: "address { street }", + isKey: true, + deferInfo: nil, + selectionSetRef: 1, + expectedOperation: ` + query { + user { + address { + street @__defer_internal(id: 1) + city + __internal_street: street + } + } + }`, + expectedSkipFieldsCount: 1, + expectedRequiredFieldsCount: 2, // address (reused) + __internal_street + expectedModifiedFieldsCount: 1, + expectedRemappedPaths: map[string]string{"User.address.street": "__internal_street"}, + }, + { + name: "requires - nested field has defer_internal, non-deferred requirement gets aliased", + definition: ` + type Query { user: User } + type User { id: ID! 
address: Address! fullAddress: String! } + type Address { street: String! city: String! }`, + operation: `query { user { address { street @__defer_internal(id: 1) city } fullAddress } }`, + typeName: "User", + fieldSet: "address { street }", + isKey: false, + deferInfo: nil, + selectionSetRef: 1, + expectedOperation: ` + query { + user { + address { + street @__defer_internal(id: 1) + city + __internal_street: street + } + fullAddress + } + }`, + expectedSkipFieldsCount: 1, + expectedRequiredFieldsCount: 2, // address (reused) + __internal_street + expectedModifiedFieldsCount: 1, + expectedRemappedPaths: map[string]string{"User.address.street": "__internal_street"}, + }, + { + name: "requires with defer id - second call with same defer id reuses existing alias", + definition: ` + type Query { user: User } + type User { id: ID! settings: Settings! fullName: String! account: Account! } + type Settings { region: String! } + type Account { type: String! }`, + // operation already has __internal_settings from a prior addRequiredFields call; + // nested region also carries the defer directive + operation: `query { user { fullName __internal_settings: settings @__defer_internal(id: 1) { region @__defer_internal(id: 1) } account } }`, + typeName: "User", + fieldSet: "settings { region }", + isKey: false, + selectionSetRef: 1, + deferInfo: &DeferInfo{ID: 1}, + // __internal_settings already exists with same defer scope — reuse it; no new field added + expectedOperation: ` + query { + user { + fullName + __internal_settings: settings @__defer_internal(id: 1) { region @__defer_internal(id: 1) } + account + } + }`, + expectedSkipFieldsCount: 0, + expectedRequiredFieldsCount: 2, // reused settings ref + reused region ref (nested non-deferred path) + expectedRemappedPaths: map[string]string{"User.settings": "__internal_settings"}, + }, + { + name: "requires with defer id - existing alias from different defer scope gets defer-id alias", + definition: ` + type Query { user: User } 
+ type User { id: ID! settings: Settings! fullName: String! account: Account! } + type Settings { region: String! } + type Account { type: String! }`, + // operation has __internal_settings belonging to defer scope "1" with directive on nested field too + operation: `query { user { fullName __internal_settings: settings @__defer_internal(id: 1) { region @__defer_internal(id: 1) } account } }`, + typeName: "User", + fieldSet: "settings { region }", + isKey: false, + selectionSetRef: 1, // user's inner selection set; ref 0 is the pre-seeded settings' inner selection set + deferInfo: &DeferInfo{ID: 2}, + // __internal_settings exists but belongs to defer "1"; no __internal_2_settings yet — create it + expectedOperation: ` + query { + user { + fullName + __internal_settings: settings @__defer_internal(id: 1) { region @__defer_internal(id: 1) } + account + __internal_2_settings: settings @__defer_internal(id: 2) { region @__defer_internal(id: 2) } + } + }`, + expectedSkipFieldsCount: 2, // __internal_2_settings + nested region + expectedRequiredFieldsCount: 2, + expectedRemappedPaths: map[string]string{"User.settings": "__internal_2_settings"}, + }, + { + name: "requires with inline fragments in deferred context - enforce typename and assign defer directive", + definition: ` + type Query { + account: Account + } + type Account { + id: ID! + node: Node! + } + interface Node { + id: ID! + } + type User implements Node { + id: ID! + name: String! + } + type Admin implements Node { + id: ID! + role: String! + }`, + operation: ` + query { + account { + id + } + }`, + typeName: "Account", + fieldSet: "node { ... on User { name } ... 
on Admin { role } }", + isKey: false, + deferInfo: &DeferInfo{ID: 1}, + // addTypenameSelection now calls applyDeferInternalDirective so the + // auto-added __typename (triggered by inline fragments) carries @__defer_internal + expectedOperation: ` + query { + account { + id + __internal_node: node @__defer_internal(id: 1) { + __typename @__defer_internal(id: 1) + ... on User { + name @__defer_internal(id: 1) + } + ... on Admin { + role @__defer_internal(id: 1) + } + } + } + }`, + expectedSkipFieldsCount: 4, // __internal_node alias, __typename, name, role + expectedRequiredFieldsCount: 3, // node, name, role (__typename not stored) + expectedRemappedPaths: map[string]string{"Account.node": "__internal_node"}, + }, + { + name: "requires with addTypenameInNestedSelections no fragments, but typenames are enforced", + definition: ` + type Query { + user: User + } + type User { + id: ID! + account: Account! + fullAccount: String! + } + type Account { + id: ID! + type: String! + }`, + operation: ` + query { + user { + fullAccount + } + }`, + typeName: "User", + fieldSet: "account { id }", + isKey: false, + deferInfo: &DeferInfo{ID: 1}, + enforceTypenameForRequired: true, + expectedOperation: ` + query { + user { + fullAccount + __internal_account: account @__defer_internal(id: 1) { + __typename @__defer_internal(id: 1) + id @__defer_internal(id: 1) + } + } + }`, + expectedSkipFieldsCount: 3, // __internal_account alias, __typename, id + expectedRequiredFieldsCount: 2, // account, id (__typename not stored) + expectedRemappedPaths: map[string]string{"User.account": "__internal_account"}, + }, + { + name: "requires with addTypenameInNestedSelections no fragments, typenames are not enforced", + definition: ` + type Query { + user: User + } + type User { + id: ID! + account: Account! + fullAccount: String! + } + type Account { + id: ID! + type: String! 
+ }`, + operation: ` + query { + user { + fullAccount + } + }`, + typeName: "User", + fieldSet: "account { id }", + isKey: false, + deferInfo: &DeferInfo{ID: 1}, + expectedOperation: ` + query { + user { + fullAccount + __internal_account: account @__defer_internal(id: 1) { + id @__defer_internal(id: 1) + } + } + }`, + expectedSkipFieldsCount: 2, // __internal_account alias, id + expectedRequiredFieldsCount: 2, // account, id (__typename not stored) + expectedRemappedPaths: map[string]string{"User.account": "__internal_account"}, + }, + { + name: "requires with inline fragments and explicit typename in fieldSet in deferred context - do not add duplicated typename", + definition: ` + type Query { + account: Account + } + type Account { + id: ID! + node: Node! + } + interface Node { + id: ID! + } + type User implements Node { + id: ID! + name: String! + } + type Admin implements Node { + id: ID! + role: String! + }`, + operation: ` + query { + account { + id + } + }`, + typeName: "Account", + fieldSet: "node { __typename ... on User { name } ... on Admin { role } }", + isKey: false, + deferInfo: &DeferInfo{ID: 1}, + expectedOperation: ` + query { + account { + id + __internal_node: node @__defer_internal(id: 1) { + __typename @__defer_internal(id: 1) + ... on User { + name @__defer_internal(id: 1) + } + ... on Admin { + role @__defer_internal(id: 1) + } + } + } + }`, + expectedSkipFieldsCount: 4, // __internal_node alias, __typename, name, role + expectedRequiredFieldsCount: 3, // node, name, role (__typename not stored) + expectedRemappedPaths: map[string]string{"Account.node": "__internal_node"}, + }, + { + name: "key with defer id - second planner with different defer id but same parent defer id reuses existing alias", + definition: ` + type Query { user: User } + type User { id: ID! name: String! 
}`, + // operation pre-seeded: plain id is deferred (from prior entity planner), + // plus __internal_id already created by a prior key planner + // (deferInfo.ID="1", parentFieldDeferID="1") + operation: `query { user { id @__defer_internal(id: 1) name __internal_id: id @__defer_internal(id: 1) } }`, + typeName: "User", + fieldSet: "id", + isKey: true, + deferInfo: &DeferInfo{ID: 3}, + parentFieldDeferID: 1, + // effectiveDeferID = parentFieldDeferID = "1" matches __internal_id's directive → reuse it + expectedOperation: `query { user { id @__defer_internal(id: 1) name __internal_id: id @__defer_internal(id: 1) } }`, + expectedSkipFieldsCount: 0, + expectedRequiredFieldsCount: 1, + expectedRemappedPaths: map[string]string{"User.id": "__internal_id"}, + }, + { + name: "key with defer id - third planner with yet another defer id but same parent defer id reuses existing alias", + definition: ` + type Query { user: User } + type User { id: ID! name: String! }`, + operation: `query { user { id @__defer_internal(id: 1) name __internal_id: id @__defer_internal(id: 1) } }`, + typeName: "User", + fieldSet: "id", + isKey: true, + deferInfo: &DeferInfo{ID: 5}, + parentFieldDeferID: 1, + expectedOperation: `query { user { id @__defer_internal(id: 1) name __internal_id: id @__defer_internal(id: 1) } }`, + expectedSkipFieldsCount: 0, + expectedRequiredFieldsCount: 1, + expectedRemappedPaths: map[string]string{"User.id": "__internal_id"}, + }, + { + name: "key with defer id - different parent defer id still creates separate alias", + definition: ` + type Query { user: User } + type User { id: ID! name: String! 
}`, + // __internal_id belongs to parent scope "1"; new planner has parentFieldDeferID="2" + operation: `query { user { id @__defer_internal(id: 1) name __internal_id: id @__defer_internal(id: 1) } }`, + typeName: "User", + fieldSet: "id", + isKey: true, + deferInfo: &DeferInfo{ID: 3}, + parentFieldDeferID: 2, + // effectiveDeferID = "2" != "1" → Level 2 → creates __internal_2_id + expectedOperation: ` + query { + user { + id @__defer_internal(id: 1) + name + __internal_id: id @__defer_internal(id: 1) + __internal_2_id: id @__defer_internal(id: 2) + } + }`, + expectedSkipFieldsCount: 1, + expectedRequiredFieldsCount: 1, + expectedRemappedPaths: map[string]string{"User.id": "__internal_2_id"}, + }, + { + name: "requires with defer id - third call with same conflict defer id reuses conflict alias", + definition: ` + type Query { user: User } + type User { id: ID! settings: Settings! fullName: String! account: Account! } + type Settings { region: String! } + type Account { type: String! }`, + operation: `query { user { + fullName + __internal_settings: settings @__defer_internal(id: 1) { region @__defer_internal(id: 1) } + __internal_2_settings: settings @__defer_internal(id: 2) { region @__defer_internal(id: 2) } + account + } }`, + typeName: "User", + fieldSet: "settings { region }", + isKey: false, + selectionSetRef: 2, // user's inner selection set; refs 0 and 1 are the two pre-seeded settings' inner selection sets + deferInfo: &DeferInfo{ID: 2}, + // __internal_settings exists but defer "1" != "2"; __internal_2_settings exists with defer "2" — reuse it + expectedOperation: `query { user { + fullName + __internal_settings: settings @__defer_internal(id: 1) { region @__defer_internal(id: 1) } + __internal_2_settings: settings @__defer_internal(id: 2) { region @__defer_internal(id: 2) } + account + } }`, + expectedSkipFieldsCount: 0, + expectedRequiredFieldsCount: 2, // reused __internal_2_settings ref + reused nested region ref + expectedRemappedPaths: 
map[string]string{"User.settings": "__internal_2_settings"}, + }, } for _, tt := range tests { @@ -500,6 +1140,8 @@ func TestAddRequiredFields(t *testing.T) { allowTypename: tt.allowTypename, typeName: tt.typeName, fieldSet: tt.fieldSet, + deferInfo: tt.deferInfo, + parentFieldDeferID: tt.parentFieldDeferID, addTypenameInNestedSelections: tt.enforceTypenameForRequired, } diff --git a/v2/pkg/engine/plan/visitor.go b/v2/pkg/engine/plan/visitor.go index 69faf9ecd4..87e11a67fa 100644 --- a/v2/pkg/engine/plan/visitor.go +++ b/v2/pkg/engine/plan/visitor.go @@ -41,8 +41,9 @@ type Visitor struct { OperationName string operationDefinitionRef int objects []*resolve.Object - currentFields []objectFields + currentObjectFields []objectFields currentField *resolve.Field + currentFields []*resolve.Field planners []PlannerConfiguration skipFieldsRefs []int fieldRefDependsOnFieldRefs map[int][]int @@ -50,7 +51,6 @@ type Visitor struct { fieldRefDependants map[int][]int // inverse of fieldRefDependsOnFieldRefs fieldConfigs map[int]*FieldConfiguration exportedVariables map[string]struct{} - skipIncludeOnFragments map[int]skipIncludeInfo disableResolveFieldPositions bool includeQueryPlans bool indirectInterfaceFields map[int]indirectInterfaceField @@ -73,7 +73,6 @@ func NewVisitor(w *astvisitor.Walker) *Visitor { Walker: w, fieldConfigs: map[int]*FieldConfiguration{}, exportedVariables: map[string]struct{}{}, - skipIncludeOnFragments: map[int]skipIncludeInfo{}, indirectInterfaceFields: map[int]indirectInterfaceField{}, pathCache: map[astvisitor.VisitorKind]map[int]string{}, plannerFields: map[int][]int{}, @@ -133,13 +132,6 @@ func (v *Visitor) debugPrint(args ...interface{}) { fmt.Println(allArgs...) 
} -type skipIncludeInfo struct { - skip bool - skipVariableName string - include bool - includeVariableName string -} - type objectFields struct { popOnField int fields *[]*resolve.Field @@ -225,7 +217,7 @@ func (v *Visitor) AllowVisitor(kind astvisitor.VisitorKind, ref int, visitor any } } - if !v.Config.DisableIncludeFieldDependencies && kind == astvisitor.LeaveField { + if !v.Config.DisableCalculateFieldDependencies && kind == astvisitor.LeaveField { // we don't need to do this twice, so we only do it on leave // store which fields are planned on which planners @@ -286,34 +278,6 @@ func (v *Visitor) currentFullPath(skipFragments bool) string { } func (v *Visitor) EnterDirective(ref int) { - directiveName := v.Operation.DirectiveNameString(ref) - ancestor := v.Walker.Ancestors[len(v.Walker.Ancestors)-1] - switch ancestor.Kind { - case ast.NodeKindOperationDefinition: - switch directiveName { - case "flushInterval": - if value, ok := v.Operation.DirectiveArgumentValueByName(ref, literal.MILLISECONDS); ok { - if value.Kind == ast.ValueKindInteger { - v.plan.SetFlushInterval(v.Operation.IntValueAsInt(value.Ref)) - } - } - } - case ast.NodeKindField: - switch directiveName { - case "stream": - initialBatchSize := 0 - if value, ok := v.Operation.DirectiveArgumentValueByName(ref, literal.INITIAL_BATCH_SIZE); ok { - if value.Kind == ast.ValueKindInteger { - initialBatchSize = int(v.Operation.IntValueAsInt32(value.Ref)) - } - } - v.currentField.Stream = &resolve.StreamField{ - InitialBatchSize: initialBatchSize, - } - case "defer": - v.currentField.Defer = &resolve.DeferField{} - } - } } func (v *Visitor) EnterInlineFragment(ref int) { @@ -326,23 +290,6 @@ func (v *Visitor) EnterInlineFragment(ref int) { } v.indirectInterfaceFields[v.Operation.InlineFragments[ref].SelectionSet] = field } - - directives := v.Operation.InlineFragments[ref].Directives.Refs - skipVariableName, skip := v.Operation.ResolveSkipDirectiveVariable(directives) - includeVariableName, include := 
v.Operation.ResolveIncludeDirectiveVariable(directives) - setRef := v.Operation.InlineFragments[ref].SelectionSet - if setRef == ast.InvalidRef { - return - } - - if skip || include { - v.skipIncludeOnFragments[ref] = skipIncludeInfo{ - skip: skip, - skipVariableName: skipVariableName, - include: include, - includeVariableName: includeVariableName, - } - } } func (v *Visitor) LeaveInlineFragment(ref int) { @@ -372,11 +319,6 @@ func (v *Visitor) EnterField(ref int) { fieldName := v.Operation.FieldNameBytes(ref) fieldAliasOrName := v.Operation.FieldAliasOrNameBytes(ref) - if bytes.Equal(fieldAliasOrName, []byte("__internal__typename_placeholder")) { - // we should skip such typename as it was added as a placeholder to keep query valid - return - } - fieldDefinition, ok := v.Walker.FieldDefinition(ref) if !ok { return @@ -416,7 +358,10 @@ func (v *Visitor) EnterField(ref int) { } // append the field to the current object - *v.currentFields[len(v.currentFields)-1].fields = append(*v.currentFields[len(v.currentFields)-1].fields, v.currentField) + *v.currentObjectFields[len(v.currentObjectFields)-1].fields = append(*v.currentObjectFields[len(v.currentObjectFields)-1].fields, v.currentField) + + // append the current field to the list of current fields + v.currentFields = append(v.currentFields, v.currentField) v.mapFieldConfig(ref) } @@ -476,6 +421,12 @@ func (v *Visitor) resolveFieldInfo(ref, typeRef int, onTypeNames [][]byte) *reso sourceNames = append(sourceNames, v.planners[i].DataSourceConfiguration().Name()) } } + // deduplicate + slices.Sort(sourceIDs) + sourceIDs = slices.Compact(sourceIDs) + slices.Sort(sourceNames) + sourceNames = slices.Compact(sourceNames) + fieldInfo := &resolve.FieldInfo{ Name: fieldName, NamedType: typeName, @@ -513,24 +464,6 @@ func (v *Visitor) resolveFieldPosition(ref int) resolve.Position { } } -func (v *Visitor) resolveSkipIncludeOnParent() (info skipIncludeInfo, ok bool) { - if len(v.skipIncludeOnFragments) == 0 { - return 
skipIncludeInfo{}, false
-	}
-
-	for i := len(v.Walker.Ancestors) - 1; i >= 0; i-- {
-		ancestor := v.Walker.Ancestors[i]
-		if ancestor.Kind != ast.NodeKindInlineFragment {
-			continue
-		}
-		if info, ok := v.skipIncludeOnFragments[ancestor.Ref]; ok {
-			return info, true
-		}
-	}
-
-	return skipIncludeInfo{}, false
-}
-
 func (v *Visitor) resolveOnTypeNames(fieldRef int, fieldName ast.ByteSlice) (onTypeNames [][]byte) {
 	if len(v.Walker.Ancestors) < 2 {
 		return nil
@@ -640,8 +573,14 @@ func (v *Visitor) LeaveField(fieldRef int) {
 		return
 	}

-	if v.currentFields[len(v.currentFields)-1].popOnField == fieldRef {
-		v.currentFields = v.currentFields[:len(v.currentFields)-1]
+	v.assignDefer(fieldRef)
+
+	// remove the current field from the current fields stack
+	v.currentFields = v.currentFields[:len(v.currentFields)-1]
+
+	// remove the current field from the list of current object fields if they belong to this field
+	if v.currentObjectFields[len(v.currentObjectFields)-1].popOnField == fieldRef {
+		v.currentObjectFields = v.currentObjectFields[:len(v.currentObjectFields)-1]
 	}
 	fieldDefinitionRef, ok := v.Walker.FieldDefinition(fieldRef)
 	if !ok {
 		return
@@ -654,6 +593,31 @@
 	}
 }

+func (v *Visitor) assignDefer(fieldRef int) {
+	currentField := v.currentFields[len(v.currentFields)-1]
+
+	// ignore existence check - we should always have planners for the field
+	plannerIds := v.fieldPlanners[fieldRef]
+
+	for _, plannerId := range plannerIds {
+		planner := v.planners[plannerId]
+
+		fieldPathConfiguration, ok := planner.PathWithFieldRef(fieldRef)
+		if !ok {
+			continue
+		}
+
+		if fieldPathConfiguration.deferredField {
+			currentField.Defer = &resolve.DeferField{
+				DeferID: fieldPathConfiguration.deferID,
+			}
+
+			// after the normalization we should have only one planner per deferred field
+			break
+		}
+	}
+}
+
+// skipField returns true if the field was added by the query planner as a dependency
+// for another field; such fields should not be included in the response.
+// If it returns false, the field was requested by the user.
@@ -881,8 +845,13 @@ func (v *Visitor) resolveFieldValue(fieldRef, typeRef int, nullable bool, path [
 	}
 	v.objects = append(v.objects, object)
+
+	// When the current field has an object type, we need to push its fields slice onto the stack.
+	// However, we can do that only after the field we are currently creating has been added to the parent object's fields.
+	// So we defer this action to run right after the current field is appended to the parent object's fields slice.
+	// This is simpler than analyzing resolve.Node, because this object could be nested in a list.
 	v.Walker.DefferOnEnterField(func() {
-		v.currentFields = append(v.currentFields, objectFields{
+		v.currentObjectFields = append(v.currentObjectFields, objectFields{
 			popOnField: fieldRef,
 			fields:     &object.Fields,
 		})
@@ -1003,28 +972,40 @@ func (v *Visitor) EnterOperationDefinition(opRef int) {
 	}
 	v.objects = append(v.objects, rootObject)
-	v.currentFields = append(v.currentFields, objectFields{
+	v.currentObjectFields = append(v.currentObjectFields, objectFields{
 		fields:     &rootObject.Fields,
 		popOnField: -1,
 	})
-	operationKind, _, err := AnalyzePlanKind(v.Operation, v.Definition, v.OperationName)
-	if err != nil {
-		v.Walker.StopWithInternalErr(err)
-		return
+	isSubscription := false
+	isDefer := false
+
+	for i := range v.planners {
+		if v.planners[i].ObjectFetchConfiguration().isSubscription {
+			isSubscription = true
+			break
+		}
+
+		if v.planners[i].DeferID() != 0 {
+			isDefer = true
+			break
+		}
 	}

 	v.response = &resolve.GraphQLResponse{
 		Data:       rootObject,
 		RawFetches: make([]*resolve.FetchItem, 0, len(v.planners)),
 	}
+	if !v.Config.DisableIncludeInfo {
+		operationType := v.Operation.OperationDefinitions[0].OperationType
 		v.response.Info = &resolve.GraphQLResponseInfo{
-			OperationType: operationKind,
+			OperationType: operationType,
 		}
 	}

-	if operationKind ==
ast.OperationTypeSubscription { + switch { + case isSubscription: v.subscription = &resolve.GraphQLSubscription{ Response: v.response, } @@ -1032,52 +1013,30 @@ func (v *Visitor) EnterOperationDefinition(opRef int) { FlushInterval: v.Config.DefaultFlushIntervalMillis, Response: v.subscription, } - return - } - - v.plan = &SynchronousResponsePlan{ - Response: v.response, - } -} - -// TODO: cleanup - field alias override logic is disabled -func (v *Visitor) resolveFieldPath(ref int) []string { - typeName := v.Walker.EnclosingTypeDefinition.NameString(v.Definition) - fieldName := v.Operation.FieldNameUnsafeString(ref) - plannerConfig := v.currentOrParentPlannerConfiguration(ref) - - aliasOverride := false - if plannerConfig != nil && plannerConfig.Planner() != nil { - behavior := plannerConfig.DataSourceConfiguration().PlanningBehavior() - aliasOverride = behavior.OverrideFieldPathFromAlias - } - - for i := range v.Config.Fields { - if v.Config.Fields[i].TypeName == typeName && v.Config.Fields[i].FieldName == fieldName { - if aliasOverride { - override, exists := plannerConfig.DownstreamResponseFieldAlias(ref) - if exists { - return []string{override} - } + case isDefer: + if !v.Config.DisableIncludeInfo { + v.response.Info = &resolve.GraphQLResponseInfo{ + OperationType: ast.OperationTypeQuery, } - if aliasOverride && v.Operation.FieldAliasIsDefined(ref) { - return []string{v.Operation.FieldAliasString(ref)} - } - if v.Config.Fields[i].DisableDefaultMapping { - return nil - } - if len(v.Config.Fields[i].Path) != 0 { - return v.Config.Fields[i].Path + } + + v.plan = &DeferResponsePlan{ + Response: &resolve.GraphQLDeferResponse{ + Response: v.response, + }, + } + default: + if !v.Config.DisableIncludeInfo { + v.response.Info = &resolve.GraphQLResponseInfo{ + OperationType: ast.OperationTypeQuery, } - return []string{fieldName} } - } - if aliasOverride { - return []string{v.Operation.FieldAliasOrNameString(ref)} - } + v.plan = &SynchronousResponsePlan{ + Response: 
v.response, + } - return []string{fieldName} + } } func (v *Visitor) EnterDocument(operation, definition *ast.Document) { @@ -1099,43 +1058,6 @@ var ( selectorRegex = regexp.MustCompile(`{{\s*\.(.*?)\s*}}`) ) -func (v *Visitor) currentOrParentPlannerConfiguration(fieldRef int) PlannerConfiguration { - // TODO: this method should be dropped it is unnecessary expensive - - const none = -1 - currentPath := v.currentFullPath(false) - plannerIndex := none - plannerPathDeepness := none - - for i := range v.planners { - v.planners[i].ForEachPath(func(plannerPath *pathConfiguration) bool { - if v.isCurrentOrParentPath(currentPath, plannerPath.path) { - currentPlannerPathDeepness := v.pathDeepness(plannerPath.path) - if currentPlannerPathDeepness > plannerPathDeepness { - plannerPathDeepness = currentPlannerPathDeepness - plannerIndex = i - return true - } - } - return false - }) - } - - if plannerIndex != none { - return v.planners[plannerIndex] - } - - return nil -} - -func (v *Visitor) isCurrentOrParentPath(currentPath string, parentPath string) bool { - return strings.HasPrefix(currentPath, parentPath) -} - -func (v *Visitor) pathDeepness(path string) int { - return strings.Count(path, ".") -} - func (v *Visitor) resolveInputTemplates(config *objectFetchConfiguration, input *string, variables *resolve.Variables) { *input = templateRegex.ReplaceAllStringFunc(*input, func(s string) string { selectors := selectorRegex.FindStringSubmatch(s) @@ -1337,6 +1259,7 @@ func (v *Visitor) configureFetch(internal *objectFetchConfiguration, external re FetchDependencies: resolve.FetchDependencies{ FetchID: internal.fetchID, DependsOnFetchIDs: internal.dependsOnFetchIDs, + DeferID: internal.deferID, }, DataSourceIdentifier: []byte(dataSourceType), } @@ -1439,7 +1362,7 @@ func (v *Visitor) buildFetchReasons(fetchID int) []resolve.FetchReason { for _, fieldRef := range fields { fieldName := v.Operation.FieldNameString(fieldRef) - if fieldName == "__typename" { + if fieldName == 
typeNameField { continue } typeName := v.fieldEnclosingTypeNames[fieldRef] diff --git a/v2/pkg/engine/postprocess/extract_defer_fetches.go b/v2/pkg/engine/postprocess/extract_defer_fetches.go new file mode 100644 index 0000000000..1e91454cb5 --- /dev/null +++ b/v2/pkg/engine/postprocess/extract_defer_fetches.go @@ -0,0 +1,58 @@ +package postprocess + +import ( + "maps" + "slices" + + "github.com/wundergraph/graphql-go-tools/v2/pkg/engine/plan" + "github.com/wundergraph/graphql-go-tools/v2/pkg/engine/resolve" +) + +type extractDeferFetches struct { + disable bool +} + +func (d *extractDeferFetches) Process(deferPlan *plan.DeferResponsePlan) { + if d.disable { + return + } + + root, fetchGroups := d.fetchGroups(deferPlan) + + deferPlan.Response.Response.Fetches = &resolve.FetchTreeNode{ + Kind: resolve.FetchTreeNodeKindSequence, + ChildNodes: root, + } + + // sort defer ids in ascending order + deferIds := slices.Sorted(maps.Keys(fetchGroups)) + + for _, deferID := range deferIds { + fetches := fetchGroups[deferID] + deferResponse := &resolve.DeferFetchGroup{ + DeferID: deferID, + + Fetches: &resolve.FetchTreeNode{ + Kind: resolve.FetchTreeNodeKindSequence, + ChildNodes: fetches, + }, + } + deferPlan.Response.Defers = append(deferPlan.Response.Defers, deferResponse) + } +} + +func (d *extractDeferFetches) fetchGroups(deferPlan *plan.DeferResponsePlan) (root []*resolve.FetchTreeNode, deferred map[int][]*resolve.FetchTreeNode) { + fetchGroups := make(map[int][]*resolve.FetchTreeNode) + + for _, fetch := range deferPlan.Response.Response.Fetches.ChildNodes { + deferID := fetch.Item.Fetch.Dependencies().DeferID + if deferID == 0 { + root = append(root, fetch) + continue + } + + fetchGroups[deferID] = append(fetchGroups[deferID], fetch) + } + + return root, fetchGroups +} diff --git a/v2/pkg/engine/postprocess/postprocess.go b/v2/pkg/engine/postprocess/postprocess.go index a98f9f16a5..52830cb9e3 100644 --- a/v2/pkg/engine/postprocess/postprocess.go +++
b/v2/pkg/engine/postprocess/postprocess.go @@ -19,13 +19,42 @@ type FetchTreeProcessor interface { // Processor transforms and optimizes the query plan after // it's been created by the planner but before execution. type Processor struct { - disableExtractFetches bool - collectDataSourceInfo bool - resolveInputTemplates *resolveInputTemplates - appendFetchID *fetchIDAppender - dedupe *deduplicateSingleFetches - processResponseTree []ResponseTreeProcessor - processFetchTree []FetchTreeProcessor + disableExtractFetches bool + collectDataSourceInfo bool + fetchTreeProcessors *FetchTreeProcessors + responseTreeProcessors *ResponseTreeProcessors + extractDeferFetches *extractDeferFetches +} + +type FetchTreeProcessors struct { + resolveInputTemplates *resolveInputTemplates + appendFetchID *fetchIDAppender + dedupe *deduplicateSingleFetches + addMissingNestedDependencies *addMissingNestedDependencies + createConcreteSingleFetchTypes *createConcreteSingleFetchTypes + orderSequenceByDependencies *orderSequenceByDependencies + createParallelNodes *createParallelNodes +} + +// processFlatFetchTree - process a flat fetch tree - single serial fetch with flat list of child fetches +func (p *FetchTreeProcessors) processFlatFetchTree(fetches *resolve.FetchTreeNode) { + p.dedupe.ProcessFetchTree(fetches) + // Appending fetchIDs makes query content unique, thus it should happen after "dedupe". + p.appendFetchID.ProcessFetchTree(fetches) + p.resolveInputTemplates.ProcessFetchTree(fetches) + p.addMissingNestedDependencies.ProcessFetchTree(fetches) + p.createConcreteSingleFetchTypes.ProcessFetchTree(fetches) +} + +// organizeFetchTree organizes the fetch tree by ordering sequence nodes by dependencies and creating parallel nodes. +// after this step fetches have tree structure of serial and parallel nodes. 
+func (p *FetchTreeProcessors) organizeFetchTree(fetches *resolve.FetchTreeNode) { + p.orderSequenceByDependencies.ProcessFetchTree(fetches) + p.createParallelNodes.ProcessFetchTree(fetches) +} + +type ResponseTreeProcessors struct { + mergeFields *mergeFields } type processorOptions struct { @@ -39,6 +68,7 @@ type processorOptions struct { disableCreateParallelNodes bool disableAddMissingNestedDependencies bool collectDataSourceInfo bool + disableExtractDeferFetches bool } type ProcessorOption func(*processorOptions) @@ -92,6 +122,12 @@ func DisableAddMissingNestedDependencies() ProcessorOption { } } +func DisableExtractDeferFetches() ProcessorOption { + return func(o *processorOptions) { + o.disableExtractDeferFetches = true + } +} + func NewProcessor(options ...ProcessorOption) *Processor { opts := &processorOptions{} for _, o := range options { @@ -100,36 +136,39 @@ func NewProcessor(options ...ProcessorOption) *Processor { return &Processor{ collectDataSourceInfo: opts.collectDataSourceInfo, disableExtractFetches: opts.disableExtractFetches, - resolveInputTemplates: &resolveInputTemplates{ - disable: opts.disableResolveInputTemplates, - }, - appendFetchID: &fetchIDAppender{ - disable: opts.disableRewriteOpNames, - }, - dedupe: &deduplicateSingleFetches{ - disable: opts.disableDeduplicateSingleFetches, - }, - processFetchTree: []FetchTreeProcessor{ + fetchTreeProcessors: &FetchTreeProcessors{ + resolveInputTemplates: &resolveInputTemplates{ + disable: opts.disableResolveInputTemplates, + }, + appendFetchID: &fetchIDAppender{ + disable: opts.disableRewriteOpNames, + }, + dedupe: &deduplicateSingleFetches{ + disable: opts.disableDeduplicateSingleFetches, + }, // this must go first, as we need to deduplicate fetches so that subsequent processors can work correctly - &addMissingNestedDependencies{ + addMissingNestedDependencies: &addMissingNestedDependencies{ disable: opts.disableAddMissingNestedDependencies, }, // this must go after deduplication because it relies 
on the existence of a "sequence" fetch node in the root - &createConcreteSingleFetchTypes{ + createConcreteSingleFetchTypes: &createConcreteSingleFetchTypes{ disable: opts.disableCreateConcreteSingleFetchTypes, }, - &orderSequenceByDependencies{ + orderSequenceByDependencies: &orderSequenceByDependencies{ disable: opts.disableOrderSequenceByDependencies, }, - &createParallelNodes{ + createParallelNodes: &createParallelNodes{ disable: opts.disableCreateParallelNodes, }, }, - processResponseTree: []ResponseTreeProcessor{ - &mergeFields{ + responseTreeProcessors: &ResponseTreeProcessors{ + mergeFields: &mergeFields{ disable: opts.disableMergeFields, }, }, + extractDeferFetches: &extractDeferFetches{ + disable: opts.disableExtractDeferFetches, + }, } } @@ -140,33 +179,39 @@ func NewProcessor(options ...ProcessorOption) *Processor { func (p *Processor) Process(pre plan.Plan) { switch t := pre.(type) { case *plan.SynchronousResponsePlan: - for i := range p.processResponseTree { - p.processResponseTree[i].Process(t.Response.Data) - } + p.responseTreeProcessors.mergeFields.Process(t.Response.Data) // initialize the fetch tree p.createFetchTree(t.Response) - // NOTE: deduplication relies on the fact that the fetch tree - // have flat structure of child fetches - p.dedupe.ProcessFetchTree(t.Response.Fetches) - // Appending fetchIDs makes query content unique, thus it should happen after "dedupe". 
- p.appendFetchID.ProcessFetchTree(t.Response.Fetches) - p.resolveInputTemplates.ProcessFetchTree(t.Response.Fetches) - for i := range p.processFetchTree { - p.processFetchTree[i].ProcessFetchTree(t.Response.Fetches) + p.fetchTreeProcessors.processFlatFetchTree(t.Response.Fetches) + p.fetchTreeProcessors.organizeFetchTree(t.Response.Fetches) + + case *plan.DeferResponsePlan: + p.responseTreeProcessors.mergeFields.Process(t.Response.Response.Data) + p.createFetchTree(t.Response.Response) + p.fetchTreeProcessors.processFlatFetchTree(t.Response.Response.Fetches) + + // extract deferred fetches into their own fetch trees + p.extractDeferFetches.Process(t) + + // process the initial response fetch tree + p.fetchTreeProcessors.organizeFetchTree(t.Response.Response.Fetches) + + // process each deferred response fetch tree + for _, deferResp := range t.Response.Defers { + p.fetchTreeProcessors.organizeFetchTree(deferResp.Fetches) } + case *plan.SubscriptionResponsePlan: - for i := range p.processResponseTree { - p.processResponseTree[i].ProcessSubscription(t.Response.Response.Data) - } + p.responseTreeProcessors.mergeFields.Process(t.Response.Response.Data) p.createFetchTree(t.Response.Response) p.appendTriggerToFetchTree(t.Response) - p.dedupe.ProcessFetchTree(t.Response.Response.Fetches) - p.appendFetchID.ProcessFetchTree(t.Response.Response.Fetches) - p.resolveInputTemplates.ProcessFetchTree(t.Response.Response.Fetches) - p.resolveInputTemplates.ProcessTrigger(&t.Response.Trigger) - for i := range p.processFetchTree { - p.processFetchTree[i].ProcessFetchTree(t.Response.Response.Fetches) - } + + p.fetchTreeProcessors.processFlatFetchTree(t.Response.Response.Fetches) + + // resolve input template for the root query in the subscription trigger + p.fetchTreeProcessors.resolveInputTemplates.ProcessTrigger(&t.Response.Trigger) + + p.fetchTreeProcessors.organizeFetchTree(t.Response.Response.Fetches) } } diff --git a/v2/pkg/engine/resolve/const.go 
b/v2/pkg/engine/resolve/const.go index 8702e93a06..8fe77c1aa1 100644 --- a/v2/pkg/engine/resolve/const.go +++ b/v2/pkg/engine/resolve/const.go @@ -31,6 +31,8 @@ var ( literalValueCompletion = []byte("valueCompletion") literalRateLimit = []byte("rateLimit") literalAuthorization = []byte("authorization") + literalIncremental = []byte("incremental") + literalHasNext = []byte("hasNext") emptyArray = []byte("[]") emptyObject = []byte("{}") diff --git a/v2/pkg/engine/resolve/fetch.go b/v2/pkg/engine/resolve/fetch.go index 622e731c4b..512de1ce28 100644 --- a/v2/pkg/engine/resolve/fetch.go +++ b/v2/pkg/engine/resolve/fetch.go @@ -110,6 +110,7 @@ func (s *SingleFetch) FetchInfo() *FetchInfo { type FetchDependencies struct { FetchID int DependsOnFetchIDs []int + DeferID int } type PostProcessingConfiguration struct { diff --git a/v2/pkg/engine/resolve/loader.go b/v2/pkg/engine/resolve/loader.go index 8c6fbed84f..2196ae32c9 100644 --- a/v2/pkg/engine/resolve/loader.go +++ b/v2/pkg/engine/resolve/loader.go @@ -200,14 +200,19 @@ func (l *Loader) Free() { } func (l *Loader) LoadGraphQLResponseData(ctx *Context, response *GraphQLResponse, resolvable *Resolvable) (err error) { + l.Init(ctx, response.Info, resolvable) + + return l.ResolveFetchNode(response.Fetches) +} + +func (l *Loader) Init(ctx *Context, responseInfo *GraphQLResponseInfo, resolvable *Resolvable) { l.resolvable = resolvable l.ctx = ctx - l.info = response.Info + l.info = responseInfo l.taintedObjs = make(taintedObjects) - return l.resolveFetchNode(response.Fetches) } -func (l *Loader) resolveFetchNode(node *FetchTreeNode) error { +func (l *Loader) ResolveFetchNode(node *FetchTreeNode) error { if node == nil { return nil } @@ -274,7 +279,7 @@ func (l *Loader) resolveParallel(nodes []*FetchTreeNode) error { func (l *Loader) resolveSerial(nodes []*FetchTreeNode) error { for i := range nodes { - err := l.resolveFetchNode(nodes[i]) + err := l.ResolveFetchNode(nodes[i]) if err != nil { return errors.WithStack(err) } @@ 
-567,6 +572,7 @@ func (l *Loader) mergeResult(fetchItem *FetchItem, res *result, items []*astjson if responseData.Type() != astjson.TypeObject { return l.renderErrorsFailedToFetch(fetchItem, res, invalidGraphQLResponseShape) } + // TODO: unclear why we are doing this l.resolvable.data = responseData return nil } diff --git a/v2/pkg/engine/resolve/node_object.go b/v2/pkg/engine/resolve/node_object.go index 7f5e94a4c6..e4afee2c90 100644 --- a/v2/pkg/engine/resolve/node_object.go +++ b/v2/pkg/engine/resolve/node_object.go @@ -179,4 +179,6 @@ type StreamField struct { InitialBatchSize int } -type DeferField struct{} +type DeferField struct { + DeferID int +} diff --git a/v2/pkg/engine/resolve/resolvable.go b/v2/pkg/engine/resolve/resolvable.go index 6eb3395327..f30728722f 100644 --- a/v2/pkg/engine/resolve/resolvable.go +++ b/v2/pkg/engine/resolve/resolvable.go @@ -37,7 +37,8 @@ type Resolvable struct { astjsonArena arena.Arena parsers []*astjson.Parser - print bool + enableRender bool + enableDeferRender bool out io.Writer printErr error path []fastjsonext.PathElement @@ -53,6 +54,8 @@ type Resolvable struct { wroteErrors bool wroteData bool skipValueCompletion bool + deferMode bool + deferID int typeNames [][]byte @@ -65,6 +68,9 @@ type Resolvable struct { // actualListSizes maps the JSON path to the list size in the final response. // Used to compute the actual cost of the operation.
actualListSizes map[string]int + + incrementalItemWritten bool + deferItemDataNull bool } type ResolvableOptions struct { @@ -96,7 +102,7 @@ func (r *Resolvable) Reset() { r.errors = nil r.valueCompletion = nil r.depth = 0 - r.print = false + r.enableRender = false r.out = nil r.printErr = nil r.path = r.path[:0] @@ -114,6 +120,11 @@ func (r *Resolvable) Reset() { for k := range r.actualListSizes { delete(r.actualListSizes, k) } + r.deferMode = false + r.deferID = 0 + r.enableDeferRender = false + r.incrementalItemWritten = false + r.deferItemDataNull = false } func (r *Resolvable) Init(ctx *Context, initialData []byte, operationType ast.OperationType) (err error) { @@ -176,7 +187,7 @@ func (r *Resolvable) InitSubscription(ctx *Context, initialData []byte, postProc func (r *Resolvable) ResolveNode(node Node, data *astjson.Value, out io.Writer) error { r.out = out - r.print = false + r.enableRender = false r.printErr = nil r.authorizationError = nil // don't init errors! It will heavily increase memory usage @@ -187,7 +198,7 @@ func (r *Resolvable) ResolveNode(node Node, data *astjson.Value, out io.Writer) return fmt.Errorf("error resolving node") } - r.print = true + r.enableRender = true hasErrors = r.walkNode(node, data) if hasErrors { return fmt.Errorf("error resolving node: %w", r.printErr) @@ -197,7 +208,7 @@ func (r *Resolvable) ResolveNode(node Node, data *astjson.Value, out io.Writer) func (r *Resolvable) Resolve(ctx context.Context, rootData *Object, fetchTree *FetchTreeNode, out io.Writer) error { r.out = out - r.print = false + r.enableRender = false r.printErr = nil r.authorizationError = nil @@ -242,10 +253,148 @@ func (r *Resolvable) Resolve(ctx context.Context, rootData *Object, fetchTree *F r.printBytes(comma) r.printErr = r.printExtensions(ctx, fetchTree) } + + if r.deferMode && !r.hasErrors() { + r.printHasNext(true) + } + r.printBytes(rBrace) + return r.printErr } +func (r *Resolvable) ResolveDefer(rootData *Object, out io.Writer, hasNext bool) 
error { + r.out = out + r.printErr = nil + r.authorizationError = nil + + // This method acts as a generator for the incremental response + // It will print the incremental response envelope and then use walkObject to find and render the deferred fields + + // First pass: validate and check for authorization errors + r.enableRender = false + r.deferMode = true + r.enableDeferRender = false + r.incrementalItemWritten = false + r.deferItemDataNull = false + + _ = r.walkObject(rootData, r.data) + if r.authorizationError != nil { + return r.authorizationError + } + + // Second pass: render the incremental response + r.enableRender = true + r.incrementalItemWritten = false + r.enableDeferRender = false // reset: first pass may have left it true on early return + + r.printBytes(lBrace) + r.printBytes(quote) + r.printBytes(literalIncremental) + r.printBytes(quote) + r.printBytes(colon) + r.printBytes(lBrack) + + _ = r.walkObject(rootData, r.data) + + r.printBytes(rBrack) + + r.printHasNext(hasNext && !r.hasErrors()) + + r.printBytes(rBrace) + + return r.printErr +} + +func (r *Resolvable) renderPath() { + r.printBytes(lBrack) + for i, p := range r.path { + if i > 0 { + r.printBytes(comma) + } + if p.Name != "" { + r.printBytes(quote) + r.printBytes(unsafebytes.StringToBytes(p.Name)) + r.printBytes(quote) + } else { + r.printBytes(unsafebytes.StringToBytes(strconv.Itoa(p.Idx))) + } + } + r.printBytes(rBrack) +} + +func (r *Resolvable) printHasNext(hasNext bool) { + if r.printErr != nil { + return + } + r.printBytes(comma) + r.printBytes(quote) + r.printBytes(literalHasNext) + r.printBytes(quote) + r.printBytes(colon) + if hasNext { + r.printBytes(literalTrue) + } else { + r.printBytes(literalFalse) + } +} + +func (r *Resolvable) printDeferEnvelopeOpen() { + if !r.render() { + return + } + + // Render Incremental Item Envelope: {"data":{...},"path":[...]} + r.printBytes(lBrace) + r.printBytes(quote) + r.printBytes(literalData) + r.printBytes(quote) + r.printBytes(colon) + 
r.printBytes(lBrace) +} + +func (r *Resolvable) printDeferPathAndErrors() { + r.printBytes(quote) + r.printBytes(literalPath) + r.printBytes(quote) + r.printBytes(colon) + r.renderPath() + if r.hasErrors() { + r.printBytes(comma) + r.printBytes(quote) + r.printBytes(literalErrors) + r.printBytes(quote) + r.printBytes(colon) + r.printNode(r.errors) + } +} + +func (r *Resolvable) printDeferEnvelopeClose() { + if !r.render() { + return + } + + r.printBytes(rBrace) + r.printBytes(comma) + r.printDeferPathAndErrors() + r.printBytes(rBrace) +} + +func (r *Resolvable) printDeferEnvelopeNullData() { + if !r.render() { + return + } + r.printBytes(lBrace) + r.printBytes(quote) + r.printBytes(literalData) + r.printBytes(quote) + r.printBytes(colon) + r.printBytes(null) + r.printBytes(comma) + r.printDeferPathAndErrors() + r.printBytes(rBrace) +} + // ensureErrorsInitialized is used to lazily init r.errors if needed func (r *Resolvable) ensureErrorsInitialized() { if r.errors == nil { @@ -264,6 +413,14 @@ func (r *Resolvable) err() bool { return true } +func (r *Resolvable) render() bool { + if !r.deferMode { + return r.enableRender + } + + return r.enableRender && r.enableDeferRender +} + func (r *Resolvable) printErrors() { r.printBytes(quote) r.printBytes(literalErrors) @@ -280,9 +437,9 @@ func (r *Resolvable) printData(root *Object) { r.printBytes(quote) r.printBytes(colon) r.printBytes(lBrace) - r.print = true + r.enableRender = true _ = r.walkObject(root, r.data) - r.print = false + r.enableRender = false r.printBytes(rBrace) r.wroteData = true } @@ -609,7 +766,7 @@ func (r *Resolvable) walkObject(obj *Object, parent *astjson.Value) bool { // when we have a typename field present in a json object, we need to check if the type is valid if _, ok := obj.PossibleTypes[string(typeName)]; !ok { - if !r.print { + if !r.render() { // during pre-walk we need to add an error when the typename do not match a possible type if r.options.ApolloCompatibilityValueCompletionInExtensions 
{ r.addValueCompletion(fmt.Sprintf("Invalid __typename found for object at %s.", r.pathLastElementDescription(obj.TypeName)), errorcodes.InvalidGraphql) @@ -634,27 +791,192 @@ func (r *Resolvable) walkObject(obj *Object, parent *astjson.Value) bool { } } - if r.print && !isRoot { + if r.render() && !isRoot { r.printBytes(lBrace) } - addComma := false r.typeNames = append(r.typeNames, typeName) defer func() { r.typeNames = r.typeNames[:len(r.typeNames)-1] }() + + if r.deferMode { + deferFields, seekFields := r.collectDeferFields(obj) + + if len(deferFields) > 0 { + startedRender := false + + if !r.enableDeferRender { + r.enableDeferRender = true + startedRender = true + + if r.enableRender && r.incrementalItemWritten { + r.printBytes(comma) + } + + if r.deferID != 0 { + if r.deferItemDataNull { + // Pre-walk detected null propagating through non-nullable chain; + // render {"data":null,"path":[...],"errors":[...]} without walking fields. + r.printDeferEnvelopeNullData() + r.incrementalItemWritten = true + r.enableDeferRender = false + return true + } + r.printDeferEnvelopeOpen() + } + } + + // render initial batch of fields + hasErrors := r.walkFields(obj, value, parent, walkFieldsFilter{deferFields: deferFields, seek: false, enabled: true}) + + if startedRender { + if r.deferID != 0 { + if !r.enableRender && hasErrors { + // Pre-walk: null propagated through non-nullable chain; signal render pass.
+ r.deferItemDataNull = true + } + r.printDeferEnvelopeClose() + r.incrementalItemWritten = true + } + r.enableDeferRender = false + } + + if hasErrors { + return true + } + } + + if r.deferID != 0 && len(seekFields) > 0 { + // seek for additional nested defer fields + if r.walkFields(obj, value, parent, walkFieldsFilter{seekFields: seekFields, seek: true, enabled: true}) { + return true + } + } + + } else { + if r.walkFields(obj, value, parent, walkFieldsFilter{}) { + return true + } + } + + if r.render() && !isRoot { + r.printBytes(rBrace) + } + return false +} + +func (r *Resolvable) collectDeferFields(obj *Object) (deferFields map[int]struct{}, seekFields map[int]struct{}) { + deferFields = make(map[int]struct{}) + seekFields = make(map[int]struct{}) + for i := range obj.Fields { - if obj.Fields[i].ParentOnTypeNames != nil { - if r.skipFieldOnParentTypeNames(obj.Fields[i]) { + if r.shoulSkipObjectFieldByTypenames(obj.Fields[i]) { + continue + } + + if r.deferID == 0 { + // we are rendering the initial response + + // skip all fields with defer + if obj.Fields[i].Defer != nil { + continue + } + + // collect object fields without defer + deferFields[i] = struct{}{} + continue + } + + // we are rendering defer response + + // collect fields without defer into seek fields + if obj.Fields[i].Defer == nil { + if !r.fieldNodeKindAllowsSeek(obj.Fields[i]) { + continue + } + + seekFields[i] = struct{}{} + continue + } + + // allow to seek fields with other defer ids + if obj.Fields[i].Defer.DeferID != r.deferID { + // but only if their id is smaller than the current one, + // which means this node was already fetched, + // as defers are ordered by id + + // TODO: this is a temporary solution, + // because defers could be resolved in parallel + if r.deferID < obj.Fields[i].Defer.DeferID { + continue + } + + if !r.fieldNodeKindAllowsSeek(obj.Fields[i]) { + continue + } + + seekFields[i] = struct{}{} + continue + } + + // store fields with matching defer id + deferFields[i] = struct{}{} + } + + return +} + +func (r
*Resolvable) fieldNodeKindAllowsSeek(field *Field) bool { + kind := field.Value.NodeKind() + if kind != NodeKindObject { + if kind != NodeKindArray { + // skip scalar fields + return false } - if obj.Fields[i].OnTypeNames != nil { - if r.skipFieldOnTypeNames(obj.Fields[i]) { + + // skip the array if its item does not have an object kind + if field.Value.(*Array).Item.NodeKind() != NodeKindObject { + // we could have a nested array, + // but we do not care for now + return false + } + } + + return true +} + +type walkFieldsFilter struct { + deferFields map[int]struct{} + seekFields map[int]struct{} + seek bool + enabled bool +} + +func (r *Resolvable) walkFields(obj *Object, value *astjson.Value, parent *astjson.Value, filter walkFieldsFilter) (hasErrors bool) { + addComma := false + + for i := range obj.Fields { + if filter.enabled { + // if mode is seek + if filter.seek { + // skip all fields that we should not descend into + if _, ok := filter.seekFields[i]; !ok { + continue + } + } else { + // if mode is render + // skip all fields that we should not render + if _, ok := filter.deferFields[i]; !ok { + continue + } + } + } else { + if r.shoulSkipObjectFieldByTypenames(obj.Fields[i]) { + continue + } } - if !r.print { + + if !r.render() { skip := r.authorizeField(value, obj.Fields[i]) if skip { if obj.Fields[i].Value.NodeNullable() { @@ -665,20 +987,21 @@ func (r *Resolvable) walkObject(obj *Object, parent *astjson.Value) bool { if field != nil { astjson.SetNull(r.astjsonArena, value, path...) } + + continue } else if obj.Nullable && len(obj.Path) > 0 { // if the field value is not nullable, but the object is nullable // we can just set the whole object to null astjson.SetNull(r.astjsonArena, parent, obj.Path...)
return false - } else { - // if the field value is not nullable and the object is not nullable - // we return true to indicate an error - return true } - continue + + // if the field value is not nullable and the object is not nullable + // we return true to indicate an error + return true } } - if r.print { + if r.render() { if addComma { r.printBytes(comma) } @@ -690,6 +1013,17 @@ func (r *Resolvable) walkObject(obj *Object, parent *astjson.Value) bool { r.currentFieldInfo = obj.Fields[i].Info err := r.walkNode(obj.Fields[i].Value, value) if err { + if r.render() { + // Field key already written; complete with null to produce valid JSON. + r.printBytes(null) + if obj.Nullable { + // Nullable parent: absorb the error, render null, continue to next field. + addComma = true + continue + } + // Non-nullable parent: propagate error; caller closes the envelope. + return err + } if obj.Nullable { if len(obj.Path) > 0 { astjson.SetNull(r.astjsonArena, parent, obj.Path...) @@ -700,9 +1034,19 @@ func (r *Resolvable) walkObject(obj *Object, parent *astjson.Value) bool { } addComma = true } - if r.print && !isRoot { - r.printBytes(rBrace) + + return false +} + +func (r *Resolvable) shoulSkipObjectFieldByTypenames(field *Field) bool { + if field.ParentOnTypeNames != nil && r.skipFieldOnParentTypeNames(field) { + return true + } + + if field.OnTypeNames != nil && r.skipFieldOnTypeNames(field) { + return true } + return false } @@ -846,12 +1190,12 @@ func (r *Resolvable) walkArray(arr *Array, value *astjson.Value) bool { r.addError("Array cannot represent non-array value.", arr.Path) return r.err() } - if r.print { + if r.render() { r.printBytes(lBrack) } values := value.GetArray() - if !r.print { + if !r.render() { pathKey := r.currentFieldPath() r.actualListSizes[pathKey] += len(values) } @@ -859,7 +1203,7 @@ func (r *Resolvable) walkArray(arr *Array, value *astjson.Value) bool { hasPrintedValue := false for i, arrayValue := range values { skip := false - if r.print && 
arr.SkipItem != nil { + if r.render() && arr.SkipItem != nil { skip = arr.SkipItem(r.ctx, arrayValue) } @@ -867,7 +1211,7 @@ func (r *Resolvable) walkArray(arr *Array, value *astjson.Value) bool { continue } - if r.print && i != 0 && hasPrintedValue { + if r.render() && i != 0 && hasPrintedValue { r.printBytes(comma) } @@ -888,7 +1232,7 @@ func (r *Resolvable) walkArray(arr *Array, value *astjson.Value) bool { return err } } - if r.print { + if r.render() { r.printBytes(rBrack) } return false @@ -906,14 +1250,14 @@ func (r *Resolvable) currentFieldPath() string { } func (r *Resolvable) walkNull() bool { - if r.print { + if r.render() { r.printBytes(null) } return false } func (r *Resolvable) walkStaticString(str *StaticString) bool { - if r.print { + if r.render() { r.printBytes(quote) r.printBytes([]byte(str.Value)) r.printBytes(quote) @@ -936,7 +1280,7 @@ func (r *Resolvable) walkString(s *String, value *astjson.Value) bool { r.addError(fmt.Sprintf("String cannot represent non-string value: \"%s\"", string(r.marshalBuf)), s.Path) return r.err() } - if r.print { + if r.render() { if s.IsTypeName { content := value.GetStringBytes() for i := range r.renameTypeNames { @@ -982,7 +1326,7 @@ func (r *Resolvable) walkBoolean(b *Boolean, value *astjson.Value) bool { r.addError(fmt.Sprintf("Bool cannot represent non-boolean value: \"%s\"", string(r.marshalBuf)), b.Path) return r.err() } - if r.print { + if r.render() { r.renderScalarFieldValue(value, b.Nullable) } return false @@ -1003,7 +1347,7 @@ func (r *Resolvable) walkInteger(i *Integer, value *astjson.Value) bool { r.addError(fmt.Sprintf("Int cannot represent non-integer value: \"%s\"", string(r.marshalBuf)), i.Path) return r.err() } - if r.print { + if r.render() { r.renderScalarFieldValue(value, i.Nullable) } return false @@ -1019,14 +1363,14 @@ func (r *Resolvable) walkFloat(f *Float, value *astjson.Value) bool { r.addNonNullableFieldError(f.Path, parent) return r.err() } - if !r.print { + if !r.render() { if 
value.Type() != astjson.TypeNumber { r.marshalBuf = value.MarshalTo(r.marshalBuf[:0]) r.addError(fmt.Sprintf("Float cannot represent non-float value: \"%s\"", string(r.marshalBuf)), f.Path) return r.err() } } - if r.print { + if r.render() { if r.options.ApolloCompatibilityTruncateFloatValues { floatValue := value.GetFloat64() if floatValue == float64(int64(floatValue)) { @@ -1049,7 +1393,7 @@ func (r *Resolvable) walkBigInt(b *BigInt, value *astjson.Value) bool { r.addNonNullableFieldError(b.Path, parent) return r.err() } - if r.print { + if r.render() { r.renderScalarFieldValue(value, b.Nullable) } return false @@ -1065,14 +1409,14 @@ func (r *Resolvable) walkScalar(s *Scalar, value *astjson.Value) bool { r.addNonNullableFieldError(s.Path, parent) return r.err() } - if r.print { + if r.render() { r.renderScalarFieldValue(value, s.Nullable) } return false } func (r *Resolvable) walkEmptyObject(_ *EmptyObject) bool { - if r.print { + if r.render() { r.printBytes(lBrace) r.printBytes(rBrace) } @@ -1080,7 +1424,7 @@ func (r *Resolvable) walkEmptyObject(_ *EmptyObject) bool { } func (r *Resolvable) walkEmptyArray(_ *EmptyArray) bool { - if r.print { + if r.render() { r.printBytes(lBrack) r.printBytes(rBrack) } @@ -1103,7 +1447,7 @@ func (r *Resolvable) walkCustom(c *CustomNode, value *astjson.Value) bool { r.addError(err.Error(), c.Path) return r.err() } - if r.print { + if r.render() { r.renderScalarFieldBytes(resolved, c.Nullable) } return false @@ -1188,7 +1532,7 @@ func (r *Resolvable) walkEnum(e *Enum, value *astjson.Value) bool { * To avoid appending an error twice, the appending only happens on the first walk * and not the second walk (which prints the data). 
*/ - if !r.print { + if !r.render() { if r.options.ApolloCompatibilityValueCompletionInExtensions { r.renderInaccessibleEnumValueError(e) } else { @@ -1206,7 +1550,7 @@ func (r *Resolvable) walkEnum(e *Enum, value *astjson.Value) bool { * To avoid appending an error/value completion twice, the appending only happens on the first walk * and not the second walk (which prints the data). */ - if !r.print { + if !r.render() { r.renderInaccessibleEnumValueError(e) } // Inaccessible enum values are always converted to null @@ -1215,7 +1559,7 @@ func (r *Resolvable) walkEnum(e *Enum, value *astjson.Value) bool { } return r.err() } - if r.print { + if r.render() { r.renderEnumValue(value, e.Nullable) } return false diff --git a/v2/pkg/engine/resolve/resolve.go b/v2/pkg/engine/resolve/resolve.go index f735752ef9..c8bee471df 100644 --- a/v2/pkg/engine/resolve/resolve.go +++ b/v2/pkg/engine/resolve/resolve.go @@ -436,6 +436,81 @@ func (r *Resolver) ArenaResolveGraphQLResponse(ctx *Context, response *GraphQLRe return resp, err } +func (r *Resolver) ResolveGraphQLDeferResponse(ctx *Context, response *GraphQLDeferResponse, writer DeferResponseWriter) (*GraphQLResolveInfo, error) { + resolveInfo := &GraphQLResolveInfo{} + + start := time.Now() + <-r.maxConcurrency + resolveInfo.ResolveAcquireWaitTime = time.Since(start) + defer func() { + r.maxConcurrency <- struct{}{} + }() + + t := newTools(r.options, r.allowedErrorExtensionFields, r.allowedErrorFields, r.subgraphRequestSingleFlight, nil) + + err := t.resolvable.Init(ctx, nil, response.Response.Info.OperationType) + if err != nil { + return nil, err + } + + if !ctx.ExecutionOptions.SkipLoader { + t.loader.Init(ctx, response.Response.Info, t.resolvable) + + // fetch initial response + if err := t.loader.ResolveFetchNode(response.Response.Fetches); err != nil { + return nil, err + } + + t.resolvable.deferMode = true + t.resolvable.deferID = 0 + + // render initial response + err = t.resolvable.Resolve(ctx.ctx, 
response.Response.Data, response.Response.Fetches, writer) + if err != nil { + return nil, err + } + + err = writer.Flush() + if err != nil { + return nil, err + } + + if t.resolvable.hasErrors() { + return resolveInfo, nil + } + + // fetch deferred responses + + for i, deferGroup := range response.Defers { + if err := t.loader.ResolveFetchNode(deferGroup.Fetches); err != nil { + return nil, err + } + + t.resolvable.deferID = deferGroup.DeferID + + err = t.resolvable.ResolveDefer(response.Response.Data, writer, i < len(response.Defers)-1) + if err != nil { + return nil, err + } + + // flush after each deferred response + + err = writer.Flush() + if err != nil { + return nil, err + } + + if t.resolvable.hasErrors() { + return resolveInfo, nil + } + } + + writer.Complete() + } + + return resolveInfo, err +} + type trigger struct { id uint64 cancel context.CancelFunc diff --git a/v2/pkg/engine/resolve/response.go b/v2/pkg/engine/resolve/response.go index d8af8d017b..7286d54782 100644 --- a/v2/pkg/engine/resolve/response.go +++ b/v2/pkg/engine/resolve/response.go @@ -1,7 +1,9 @@ package resolve import ( + "fmt" "io" + "strings" "github.com/gobwas/ws" @@ -56,6 +58,40 @@ func (g *GraphQLResponse) SingleFlightAllowed() bool { return false } +type GraphQLDeferResponse struct { + Response *GraphQLResponse + Defers []*DeferFetchGroup +} + +func (r *GraphQLDeferResponse) QueryPlanString() string { + indent := func(s string) string { + return strings.ReplaceAll(s, "\n", "\n ") + } + + primary := indent(r.Response.Fetches.QueryPlan().PrettyPrint()) + var secondary []string + + for _, g := range r.Defers { + secondary = append(secondary, strings.ReplaceAll(g.Fetches.QueryPlan().PrettyPrint(), "\n", "\n ")) + } + + return fmt.Sprintf(` +QueryPlan { + Primary { + %s + } + Deferred [ + %s + ] +} +`, primary, strings.Join(secondary, "\n")) +} + +type DeferFetchGroup struct { + DeferID int + Fetches *FetchTreeNode +} + type GraphQLResponseInfo struct { OperationType ast.OperationType 
} @@ -68,6 +104,12 @@ type ResponseWriter interface { io.Writer } +type DeferResponseWriter interface { + ResponseWriter + Flush() error + Complete() +} + type SubscriptionCloseKind struct { WSCode ws.StatusCode Reason string diff --git a/v2/pkg/federation/fixtures/federated_schema.golden b/v2/pkg/federation/fixtures/federated_schema.golden index 40ac93d20d..48d4c08354 100644 --- a/v2/pkg/federation/fixtures/federated_schema.golden +++ b/v2/pkg/federation/fixtures/federated_schema.golden @@ -56,19 +56,21 @@ type User { __typename: String! } -"The 'Int' scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." +"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." scalar Int -"The 'Float' scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." +"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." scalar Float -"The 'String' scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." +"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." scalar String -"The 'Boolean' scalar type represents 'true' or 'false' ." +"The `Boolean` scalar type represents `true` or `false`." scalar Boolean -"The 'ID' scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. 
When expected as an input type, any string (such as '4') or integer (such as 4) input value will be accepted as an ID." +""" +The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as "4") or integer (such as 4) input value will be accepted as an ID. +""" scalar ID "Directs the executor to include this field or fragment only when the argument is true." @@ -93,7 +95,9 @@ directive @deprecated( reason: String = "No longer supported" ) on FIELD_DEFINITION | ARGUMENT_DEFINITION | ENUM_VALUE | INPUT_FIELD_DEFINITION +"Exposes a URL that specifies the behavior of this scalar" directive @specifiedBy( + "The URL that specifies the behavior of this scalar." url: String! ) on SCALAR @@ -104,6 +108,14 @@ All fields defined within a @oneOf input must be nullable in the schema. """ directive @oneOf on INPUT_OBJECT +"Directs the executor to defer this fragment when the if argument is true or undefined." +directive @defer( + "A unique identifier for the results." + label: String + "Controls whether the fragment will be deferred, usually via a variable." + if: Boolean! = true +) on FRAGMENT_SPREAD | INLINE_FRAGMENT + """ A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. 
In some cases, you need to provide options to alter GraphQL's execution behavior diff --git a/v2/pkg/federation/schema.go b/v2/pkg/federation/schema.go index 010c19e311..a4c19d2ad5 100644 --- a/v2/pkg/federation/schema.go +++ b/v2/pkg/federation/schema.go @@ -48,7 +48,7 @@ func (s *schemaBuilder) extendQueryTypeWithFederationFields(schema string, hasEn return schema } - if err := asttransform.MergeDefinitionWithBaseSchema(doc); err != nil { + if err := asttransform.MergeDefinitionWithBaseSchemaWithInternal(doc, false); err != nil { return schema } diff --git a/v2/pkg/introspection/fixtures/starwars_introspected.golden b/v2/pkg/introspection/fixtures/starwars_introspected.golden index 5bb9e05621..5cc12293e5 100644 --- a/v2/pkg/introspection/fixtures/starwars_introspected.golden +++ b/v2/pkg/introspection/fixtures/starwars_introspected.golden @@ -1817,7 +1817,7 @@ { "kind": "SCALAR", "name": "Boolean", - "description": "The `Boolean` scalar type represents `true` or `false` .", + "description": "The `Boolean` scalar type represents `true` or `false`.", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -1826,7 +1826,7 @@ { "kind": "SCALAR", "name": "ID", - "description": "The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as `4`) or integer (such as 4) input value will be accepted as an ID.", + "description": "The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. 
When expected as an input type, any string (such as \"4\") or integer (such as 4) input value will be accepted as an ID.", "inputFields": [], "interfaces": [], "possibleTypes": [], @@ -1918,8 +1918,8 @@ "__typename": "__Type" }, "defaultValue": "\"No longer supported\"", - "isDeprecated": true, - "deprecationReason": "No longer supported", + "isDeprecated": false, + "deprecationReason": null, "__typename": "__InputValue" } ], @@ -1927,16 +1927,15 @@ "__typename": "__Directive" }, { - "name": "delegateField", - "description": "", + "name": "specifiedBy", + "description": "Exposes a URL that specifies the behavior of this scalar", "locations": [ - "OBJECT", - "INTERFACE" + "SCALAR" ], "args": [ { - "name": "name", - "description": "", + "name": "url", + "description": "The URL that specifies the behavior of this scalar.", "type": { "kind": "NON_NULL", "name": null, @@ -1954,7 +1953,62 @@ "__typename": "__InputValue" } ], - "isRepeatable": true, + "isRepeatable": false, + "__typename": "__Directive" + }, + { + "name": "oneOf", + "description": "The @oneOf built-in directive marks an input object as a OneOf Input Object.\nExactly one field must be provided and its value must be non-null at runtime.\nAll fields defined within a @oneOf input must be nullable in the schema.", + "locations": [ + "INPUT_OBJECT" + ], + "args": [], + "isRepeatable": false, + "__typename": "__Directive" + }, + { + "name": "defer", + "description": "Directs the executor to defer this fragment when the if argument is true or undefined.", + "locations": [ + "FRAGMENT_SPREAD", + "INLINE_FRAGMENT" + ], + "args": [ + { + "name": "label", + "description": "A unique identifier for the results.", + "type": { + "kind": "SCALAR", + "name": "String", + "ofType": null, + "__typename": "__Type" + }, + "defaultValue": null, + "isDeprecated": false, + "deprecationReason": null, + "__typename": "__InputValue" + }, + { + "name": "if", + "description": "Controls whether the fragment will be deferred, usually via 
a variable.", + "type": { + "kind": "NON_NULL", + "name": null, + "ofType": { + "kind": "SCALAR", + "name": "Boolean", + "ofType": null, + "__typename": "__Type" + }, + "__typename": "__Type" + }, + "defaultValue": "true", + "isDeprecated": false, + "deprecationReason": null, + "__typename": "__InputValue" + } + ], + "isRepeatable": false, "__typename": "__Directive" } ], diff --git a/v2/pkg/introspection/generator.go b/v2/pkg/introspection/generator.go index 820483f13f..9c9589e7aa 100644 --- a/v2/pkg/introspection/generator.go +++ b/v2/pkg/introspection/generator.go @@ -329,6 +329,9 @@ func (i *introspectionVisitor) EnterDirectiveDefinition(ref int) { } func (i *introspectionVisitor) LeaveDirectiveDefinition(ref int) { + if strings.HasPrefix(i.currentDirective.Name, "__") { + return + } i.data.Schema.Directives = append(i.data.Schema.Directives, i.currentDirective) } diff --git a/v2/pkg/introspection/generator_test.go b/v2/pkg/introspection/generator_test.go index 690e849a87..8818c63e90 100644 --- a/v2/pkg/introspection/generator_test.go +++ b/v2/pkg/introspection/generator_test.go @@ -6,22 +6,24 @@ import ( "testing" "github.com/jensneuse/diffview" + "github.com/stretchr/testify/require" "github.com/wundergraph/graphql-go-tools/v2/pkg/astparser" + "github.com/wundergraph/graphql-go-tools/v2/pkg/asttransform" "github.com/wundergraph/graphql-go-tools/v2/pkg/testing/goldie" ) func TestGenerator_Generate(t *testing.T) { starwarsSchemaBytes, err := os.ReadFile("./testdata/starwars.schema.graphql") - if err != nil { - panic(err) - } + require.NoError(t, err) definition, report := astparser.ParseGraphqlDocumentBytes(starwarsSchemaBytes) if report.HasErrors() { t.Fatal(report) } + require.NoError(t, asttransform.MergeDefinitionWithBaseSchema(&definition)) + gen := NewGenerator() var data Data gen.Generate(&definition, &report, &data) @@ -30,16 +32,12 @@ func TestGenerator_Generate(t *testing.T) { } outputPretty, err := json.MarshalIndent(data, "", " ") - if err != nil { 
- t.Fatal(err) - } + require.NoError(t, err) goldie.Assert(t, "starwars_introspected", outputPretty) if t.Failed() { fixture, err := os.ReadFile("./fixtures/starwars_introspected.golden") - if err != nil { - t.Fatal(err) - } + require.NoError(t, err) diffview.NewGoland().DiffViewBytes("startwars_introspected", fixture, outputPretty) } diff --git a/v2/pkg/introspection/testdata/starwars.schema.graphql b/v2/pkg/introspection/testdata/starwars.schema.graphql index e777756cad..d59f1a427b 100644 --- a/v2/pkg/introspection/testdata/starwars.schema.graphql +++ b/v2/pkg/introspection/testdata/starwars.schema.graphql @@ -164,191 +164,4 @@ type Starship { "The union represents combined return result which could be on of the types: Human, Droid, Starship" union SearchResult = Human | Droid | Starship -scalar DateTime @specifiedBy(url: "https://scalars.graphql.org/andimarek/date-time") - -"The `Int` scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1." -scalar Int -"The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](http://en.wikipedia.org/wiki/IEEE_floating_point)." -scalar Float -"The `String` scalar type represents textual data, represented as UTF-8 character sequences. The String type is most often used by GraphQL to represent free-form human-readable text." -scalar String -"The `Boolean` scalar type represents `true` or `false` ." -scalar Boolean -"The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as `4`) or integer (such as 4) input value will be accepted as an ID." -scalar ID -"Directs the executor to include this field or fragment only when the argument is true." -directive @include( - "Included when true." - if: Boolean! 
-) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT -"Directs the executor to skip this field or fragment when the argument is true." -directive @skip( - "Skipped when true." - if: Boolean! -) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT -"Marks an element of a GraphQL schema as no longer supported." -directive @deprecated( - """ - Explains why this element was deprecated, usually also including a suggestion - for how to access supported similar data. Formatted in - [Markdown](https://daringfireball.net/projects/markdown/). - """ - reason: String = "No longer supported" @deprecated -) on FIELD_DEFINITION | ARGUMENT_DEFINITION | INPUT_FIELD_DEFINITION | ENUM_VALUE -directive @delegateField( - name: String! -) repeatable on OBJECT | INTERFACE - -""" -A Directive provides a way to describe alternate runtime execution and type validation behavior in a GraphQL document. -In some cases, you need to provide options to alter GraphQL's execution behavior -in ways field arguments will not suffice, such as conditionally including or -skipping a field. Directives provide this by describing additional information -to the executor. -""" -type __Directive { - name: String! - description: String - locations: [__DirectiveLocation!]! - args: [__InputValue!]! - isRepeatable: Boolean! -} - -""" -A Directive can be adjacent to many parts of the GraphQL language, a -__DirectiveLocation describes one such possible adjacencies. -""" -enum __DirectiveLocation { - "Location adjacent to a query operation." - QUERY - "Location adjacent to a mutation operation." - MUTATION - "Location adjacent to a subscription operation." - SUBSCRIPTION - "Location adjacent to a field." - FIELD - "Location adjacent to a fragment definition." - FRAGMENT_DEFINITION - "Location adjacent to a fragment spread." - FRAGMENT_SPREAD - "Location adjacent to an inline fragment." - INLINE_FRAGMENT - "Location adjacent to a schema definition." - SCHEMA - "Location adjacent to a scalar definition." 
- SCALAR - "Location adjacent to an object type definition." - OBJECT - "Location adjacent to a field definition." - FIELD_DEFINITION - "Location adjacent to an argument definition." - ARGUMENT_DEFINITION - "Location adjacent to an interface definition." - INTERFACE - "Location adjacent to a union definition." - UNION - "Location adjacent to an enum definition." - ENUM - "Location adjacent to an enum value definition." - ENUM_VALUE - "Location adjacent to an input object type definition." - INPUT_OBJECT - "Location adjacent to an input object field definition." - INPUT_FIELD_DEFINITION -} -""" -One possible value for a given Enum. Enum values are unique values, not a -placeholder for a string or numeric value. However an Enum value is returned in -a JSON response as a string. -""" -type __EnumValue { - name: String! - description: String - isDeprecated: Boolean! - deprecationReason: String -} - -""" -Object and Interface types are described by a list of Fields, each of which has -a name, potentially a list of arguments, and a return type. -""" -type __Field { - name: String! - description: String - args: [__InputValue!]! - type: __Type! - isDeprecated: Boolean! - deprecationReason: String -} - -"""Arguments provided to Fields or Directives and the input fields of an -InputObject are represented as Input Values which describe their type and -optionally a default value. -""" -type __InputValue { - name: String! - description: String - type: __Type! - "A GraphQL-formatted string representing the default value for this input value." - defaultValue: String -} - -""" -A GraphQL Schema defines the capabilities of a GraphQL server. It exposes all -available types and directives on the server, as well as the entry points for -query, mutation, and subscription operations. -""" -type __Schema { - "A list of all types supported by this server." - types: [__Type!]! - "The type that query operations will be rooted at." - queryType: __Type! 
- "If this server supports mutation, the type that mutation operations will be rooted at." - mutationType: __Type - "If this server support subscription, the type that subscription operations will be rooted at." - subscriptionType: __Type - "A list of all directives supported by this server." - directives: [__Directive!]! -} - -""" -The fundamental unit of any GraphQL Schema is the type. There are many kinds of -types in GraphQL as represented by the `__TypeKind` enum. - -Depending on the kind of a type, certain fields describe information about that -type. Scalar types provide no information beyond a name and description, while -Enum types provide their values. Object and Interface types provide the fields -they describe. Abstract types, Union and Interface, provide the Object types -possible at runtime. List and NonNull types compose other types. -""" -type __Type { - kind: __TypeKind! - name: String - description: String - fields(includeDeprecated: Boolean = false): [__Field!] - interfaces: [__Type!] - possibleTypes: [__Type!] - enumValues(includeDeprecated: Boolean = false): [__EnumValue!] - inputFields: [__InputValue!] - ofType: __Type -} - -"An enum describing what kind of type a given `__Type` is." -enum __TypeKind { - "Indicates this type is a scalar." - SCALAR - "Indicates this type is an object. `fields` and `interfaces` are valid fields." - OBJECT - "Indicates this type is an interface. `fields` ` and ` `possibleTypes` are valid fields." - INTERFACE - "Indicates this type is a union. `possibleTypes` is a valid field." - UNION - "Indicates this type is an enum. `enumValues` is a valid field." - ENUM - "Indicates this type is an input object. `inputFields` is a valid field." - INPUT_OBJECT - "Indicates this type is a list. `ofType` is a valid field." - LIST - "Indicates this type is a non-null. `ofType` is a valid field." 
- NON_NULL -} +scalar DateTime @specifiedBy(url: "https://scalars.graphql.org/andimarek/date-time") \ No newline at end of file diff --git a/v2/pkg/lexer/literal/literal.go b/v2/pkg/lexer/literal/literal.go index 8c57db74c2..20a1da9420 100644 --- a/v2/pkg/lexer/literal/literal.go +++ b/v2/pkg/lexer/literal/literal.go @@ -66,6 +66,8 @@ var ( IF = []byte("if") SKIP = []byte("skip") DEFER = []byte("defer") + DEFER_INTERNAL = []byte("__defer_internal") + LABEL = []byte("label") STREAM = []byte("stream") SCHEMA = []byte("schema") EXTEND = []byte("extend")