Spacemesh API

Protobuf definitions of the Spacemesh API. This repository contains only the API design, not its implementation. For implementation work, see go-spacemesh. Note that the API implementation may lag behind the design.

Design

The API was designed with the following considerations in mind.

Mesh vs. global state

In Spacemesh, "the mesh" refers to data structures that are explicitly stored by all full nodes and are subject to consensus. This consists of transactions, collated into blocks, which in turn are collated into layers. Note that, in addition to transactions, blocks contain metadata such as layer number and signature. The mesh also includes ATXs (activation transactions).

By contrast, "global state" refers to data structures that are calculated implicitly based on mesh data. These data are not explicitly stored anywhere in the mesh. Global state includes account state (balance, counter/nonce value, and, for smart contract accounts, code), transaction receipts, and smart contract event logs. These data need not be stored indefinitely by all full nodes (although they should be stored indefinitely by archive nodes).

The API provides access to both types of data, but they are divided into different API services. For more information on this distinction, see SMIP-0003: Global state data, STF, APIs, as well as the MeshService and the GlobalStateService.
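
As an illustration of the split, here is a minimal sketch in Go of reading implicit global state (an account's balance and counter) through GlobalStateService.Account, using the bindings published in this repository. The endpoint and service names come from this API; the address/port, the AccountRequest message name, and the AccountId construction are assumptions to verify against the proto files and the generated code in release/go.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "github.com/spacemeshos/api/release/go/spacemesh/v1"
)

func main() {
	// Dial a local node; the address and port are illustrative and depend on
	// how the node's gRPC services are configured.
	conn, err := grpc.Dial("localhost:9092", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Global state (implicit data): read an account's current state, i.e.
	// its balance and counter, via GlobalStateService.Account. The request
	// message name is an assumption about the generated bindings.
	state := pb.NewGlobalStateServiceClient(conn)
	acct, err := state.Account(context.Background(), &pb.AccountRequest{
		AccountId: &pb.AccountId{ /* account address goes here */ },
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("account state: %v", acct)

	// Mesh data (explicit data), such as layers and blocks, is read from
	// MeshService instead; see the services described below.
}
```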

Transactions

Transactions span mesh and global state data. They are submitted to a node, which may or may not admit the transaction to its mempool. If the transaction is admitted to the mempool, it will probably end up being added to a newly-mined block, and that block will be submitted to the mesh in some layer. After that, the layer containing the block will eventually be approved, and then confirmed, by the consensus mechanism. After the layer is approved, the transaction will be run through the STF (state transition function), and if it succeeds, it may update global state.

Since transactions span multiple layers of abstraction, the API exposes transaction data in its own service, TransactionService.
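
A transaction enters this pipeline through TransactionService.SubmitTransaction. The sketch below reuses the pb import and connection setup from the sketch above; the endpoint name is part of this API, but the request message name and its field are assumptions about the generated Go bindings.

```go
// submitTransaction hands a serialized, signed transaction to the node, which
// may or may not admit it to its mempool (the rest of the lifecycle described
// above then plays out on the mesh and in global state).
func submitTransaction(ctx context.Context, conn *grpc.ClientConn, rawTx []byte) error {
	client := pb.NewTransactionServiceClient(conn)
	res, err := client.SubmitTransaction(ctx, &pb.SubmitTransactionRequest{
		Transaction: rawTx, // assumed field name for the raw transaction bytes
	})
	if err != nil {
		return err
	}
	log.Printf("submission result: %v", res)
	return nil
}
```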

Types of endpoints

Broadly speaking, there are four types of endpoints: simple, command, query, and stream. Each type is described below. Note that in some cases, the same data are exposed through multiple endpoints, e.g., both a query and a stream endpoint.

  • Simple endpoints are used to query a single data element. Some simple endpoints accept a request object (e.g., GlobalStateService.Account), while others, which return data that is global to the node, accept no request object (e.g., NodeService.Version).
  • Command endpoints are used to send a command to a node. Examples include TransactionService.SubmitTransaction and SmesherService.StartSmeshing.
  • Query endpoints are used to read paginated historical data. A *Query endpoint accepts a *Request message that typically contains the following:
    • filter: A filter (see Streams, below)
    • min_layer: The first layer to return results from
    • max_results: The maximum number of results to return
    • offset: Page offset
  • Stream endpoints are used to read realtime data; they do not return historical data. Each time the node creates or learns of a piece of data matching the filter and type, or sees an update to a matching piece of data, it sends it over the stream. A *Stream endpoint accepts a *Request message that functions as a filter and typically contains the following (see the sketch after this list):
    • *_id: The ID of the data type to filter on (e.g., "show me all data items that touch this account_id")
    • flags: A bit field that allows the client to select which, among multiple types multiplexed on this stream, to receive
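
The sketch below contrasts the query and stream patterns using GlobalStateService account data. The filter and pagination fields mirror the lists above; the concrete message, field, and flag names (AccountDataFilter, AccountDataQueryRequest, AccountDataStreamRequest, and so on) are assumptions about the generated Go bindings and should be checked against the *types.proto files.

```go
// queryThenStream first pages through historical account data, then switches
// to a realtime stream that only delivers new or updated items matching the
// same filter. It reuses the pb import and connection from the earlier sketches.
func queryThenStream(ctx context.Context, conn *grpc.ClientConn, id *pb.AccountId) error {
	client := pb.NewGlobalStateServiceClient(conn)

	// Filter: which account to watch, and which data types (flags) to include.
	filter := &pb.AccountDataFilter{
		AccountId:        id,
		AccountDataFlags: uint32(pb.AccountDataFlag_ACCOUNT_DATA_FLAG_ACCOUNT), // assumed flag name
	}

	// Query endpoint: a bounded page of historical results.
	page, err := client.AccountDataQuery(ctx, &pb.AccountDataQueryRequest{
		Filter:     filter,
		MaxResults: 50,
		Offset:     0,
	})
	if err != nil {
		return err
	}
	log.Printf("historical page: %v", page)

	// Stream endpoint: realtime updates only; no historical data is replayed.
	stream, err := client.AccountDataStream(ctx, &pb.AccountDataStreamRequest{Filter: filter})
	if err != nil {
		return err
	}
	for {
		update, err := stream.Recv()
		if err != nil {
			return err // stream closed or failed
		}
		log.Printf("realtime update: %v", update)
	}
}
```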

Services

The Spacemesh API consists of several logical services, each of which contains a set of one or more RPC endpoints. The node operator can enable or disable each service independently using the CLI. The current set of services is as follows:

  • DebugService is an experimental service designed for debugging and testing. The endpoints in this service are volatile and subject to change without notice. They should not be relied on in production.
  • GatewayService is a read-write interface that allows a poet server to broadcast proofs to the network via a gateway node.
  • GlobalStateService is a readonly interface that provides access to data elements that are not explicitly part of the mesh, such as accounts, rewards, and transaction state and receipts.
  • MeshService is a readonly interface that provides access to mesh data such as layer number, epoch number, and network ID. It provides streams for watching layers (which contain blocks, transactions, etc.). In the future this service will be expanded to include other mesh-related endpoints.
  • NodeService is a readonly interface for reading basic node-related data such as node status, software version and build number, and errors. It also allows a consumer to request that the node start the sync process, thus enabling the stream endpoints.
  • SmesherService is a read-write interface that allows the client to query and set parameters related to smeshing (mining), such as PoST commitment, coinbase, etc.
  • TransactionService is a read-write interface that allows the client to submit a new transaction, and to follow the state of one or more transactions on their journey from submission to mempool to block to mesh to STF.

Each of these services relies on one or more sets of message types, which live in *types.proto files in the same directory as the service definition files.
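
Each generated language binding exposes one client per service. In the Go bindings, for example, wiring them up from a single connection looks roughly like the sketch below; the constructors are produced by the standard gRPC Go code generator for the services listed above, and calls to a service the node operator has disabled will fail.

```go
// newClients creates one generated client per service of interest, all sharing
// a single gRPC connection to the node.
func newClients(conn *grpc.ClientConn) (
	pb.NodeServiceClient,
	pb.MeshServiceClient,
	pb.GlobalStateServiceClient,
	pb.TransactionServiceClient,
) {
	return pb.NewNodeServiceClient(conn),
		pb.NewMeshServiceClient(conn),
		pb.NewGlobalStateServiceClient(conn),
		pb.NewTransactionServiceClient(conn)
}
```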

Intended Usage Pattern

Mesh data processing flow

  1. Client starts a full node with one or more relevant gRPC endpoints enabled
  2. Client subscribes to the streaming gRPC API methods that are of interest (see the sketch after these steps)
  3. Client calls NodeService.SyncStart() to request that the node start syncing (note that sync is currently on by default, so this step is unnecessary; in the future it will be possible to start the node with sync turned off, so that the client can subscribe to streams before syncing begins and not miss any data)
  4. Client processes streaming data it receives from the node
  5. Client monitors the node using NodeService.SyncStatusStream() and NodeService.ErrorStream() and handles critical node errors. Return to step 1 as necessary.
  6. Client gracefully shuts down the node by calling NodeService.Shutdown() when it is done processing data.
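
A hedged sketch of this flow using the Go bindings follows. The method names are those used above; the request message names and the exact set of streams are assumptions to verify against node.proto and the other service definitions, and the sketch reuses the pb import and connection from the earlier examples.

```go
// processMeshData follows the steps above: open streams, start sync, process
// and monitor, then shut the node down.
func processMeshData(ctx context.Context, conn *grpc.ClientConn) error {
	node := pb.NewNodeServiceClient(conn)

	// Step 2: subscribe to streams of interest before sync starts so no data
	// is missed. The error stream is shown here; layer, transaction, and
	// account streams are opened the same way on their respective services.
	errs, err := node.ErrorStream(ctx, &pb.ErrorStreamRequest{})
	if err != nil {
		return err
	}

	// Step 3: ask the node to start syncing.
	if _, err := node.SyncStart(ctx, &pb.SyncStartRequest{}); err != nil {
		return err
	}

	// Steps 4-5: process streamed data and handle critical node errors
	// (sync progress can be watched on the node's status stream as well).
	for {
		e, err := errs.Recv()
		if err != nil {
			break // stream closed or failed; restart from step 1 if necessary
		}
		log.Printf("node error: %v", e)
	}

	// Step 6: gracefully shut down the node when done.
	_, err = node.Shutdown(ctx, &pb.ShutdownRequest{})
	return err
}
```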

Development

Versioning

We use standard semantic versioning. Please regularly cut releases against the master branch and increment the version accordingly. Releases are managed at Releases and the current version line is 1.x. Note that this is especially important for downstream code that relies on individual builds, such as the golang build.

Build targets

This repository currently contains builds for two targets: golang and grpc-gateway. Every time a protobuf definition file is changed, you must update the build and include the updated build files with your PR in order to keep everything in sync. You can check this at any time by running make check, and it's also enforced by CI (see below for more information).

  • golang builds live in release/go. You may use this repository directly as a Go module with an import statement such as import "github.com/spacemeshos/api/release/go/spacemesh/v1" (see the example after this list).
  • grpc-gateway builds live alongside the golang builds in release/go/spacemesh/v1 (they have a .gw.go extension).
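
As a quick sanity check that the Go target resolves, the snippet below imports the generated package and references one of its client interfaces; the NodeServiceClient name follows the standard gRPC Go naming for the NodeService described above.

```go
package main

import (
	"fmt"

	pb "github.com/spacemeshos/api/release/go/spacemesh/v1"
)

func main() {
	// Referencing a generated type is enough to confirm the module resolves
	// and the build files in release/go are in sync with the proto sources.
	var node pb.NodeServiceClient
	fmt.Printf("generated client type: %T\n", node)
}
```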

Makefile

The repository includes a Makefile that makes it easy to run most regular tasks:

  • make lint runs the linter (see below)
  • make local checks for breaking changes against local master (see below)
  • make breaking checks for breaking changes against the GitHub repository (see below)
  • make build builds the API for all targets
  • make check ensures that the build is up to date with respect to the proto source files

Under the hood, it uses a helpful tool called buf.

Buf

In addition to running make commands, you can also manually use the buf tool to compile the API to an image. First, install buf, then run:

> buf image build -o /dev/null

to test the build. To output the image in json format, run:

> buf image build --exclude-source-info -o -#format=json

Breaking changes detection

buf also supports detection of breaking changes. To do this, first create an image from the current state:

> buf image build -o image.bin

Make a breaking change, then run against this change:

> buf breaking --against-input image.bin

buf will report all breaking changes.

Linting

buf runs several linters. It's pretty strict about things such as naming conventions, to prevent downstream issues in the various languages and frameworks that rely upon the protobuf definition files. You can run the linter like this:

> buf lint

If there are no issues, this command should have exit code 0 and no output.

For more information on linting, see the style guide. For more information on the difference between the buf tool and the protoc compiler, see Use protoc input instead of the internal compiler.

Continuous integration

This repository has a continuous integration (CI) workflow built on GitHub Actions. In addition to linting and breaking changes detection, it also runs the protoc compiler, since that tends to surface a slightly different set of warnings and errors than buf.

You can use a nifty tool called act to run the CI workflow locally, although it doesn't always play nice with our workflow configuration.