Releases: tradingmachines/level4
level4 version 2.0.0
██╗ ███████╗██╗ ██╗███████╗██╗ ██╗ ██╗
██║ ██╔════╝██║ ██║██╔════╝██║ ██║ ██║
██║ █████╗ ██║ ██║█████╗ ██║ ███████║
██║ ██╔══╝ ╚██╗ ██╔╝██╔══╝ ██║ ╚════██║
███████╗███████╗ ╚████╔╝ ███████╗███████╗ ██║
╚══════╝╚══════╝ ╚═══╝ ╚══════╝╚══════╝ ╚═╝
---
version 2.0 (February 2023)
by William Santos
Main changes
- Replaced the HTTP server with gRPC: Level4 is now controlled via RPC calls to a server listening on port 50051.
- Removed the SQL backend: Level4 is now "stateless". Currencies, pairs, and markets are stored externally. Data feeds are started by sending (exchange, market(pair, type)) via the StartMarket RPC call.
- Added clustering: instances automatically discover each other via gossip on port 45892.
- Abandoned the RocksDB idea: orderbooks are still stored in memory using a general balanced tree.
- Refactored and tidied much of the old codebase.
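The notes above say orderbooks live in memory in a general balanced tree. As a rough illustration of that structure (a sketch only: the names and the sorted-list-plus-dict layout below stand in for the actual tree and are not the project's code):

```python
import bisect

class OrderbookSide:
    """One side of a price-aggregated orderbook.

    Illustrative stand-in for the in-memory balanced tree: a sorted
    price list plus a price -> size map. Names are hypothetical.
    """

    def __init__(self):
        self.prices = []   # price levels, sorted ascending
        self.sizes = {}    # price -> aggregated size at that level

    def apply(self, price, size):
        """Set the aggregated size at a price level; size 0 deletes it."""
        if size == 0:
            if price in self.sizes:
                self.prices.remove(price)
                del self.sizes[price]
        else:
            if price not in self.sizes:
                bisect.insort(self.prices, price)
            self.sizes[price] = size

    def best(self, side):
        """Best price: highest for bids, lowest for asks."""
        if not self.prices:
            return None
        return self.prices[-1] if side == "bids" else self.prices[0]
```

A balanced tree (or a sorted structure like this) keeps best-price lookups cheap while levels churn.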
Proto schema
The following shows the RPC service block for controlling a Level4 instance.
Note: all instances in a cluster run the same gRPC server and return exactly the same responses.
service Control {
rpc StartMarket (StartMarketRequest) returns (StartMarketReply);
rpc StopMarket (StopMarketRequest) returns (StopMarketReply);
rpc ListNodes (ListNodesRequest) returns (ListNodesReply);
rpc ListActiveMarkets (ListMarketsRequest) returns (ListMarketsReply);
rpc IsMarketOnline (MarketOnlineRequest) returns (MarketOnlineReply);
}
The main messages of interest are Node and Market:
message Node {
string name = 1;
int64 active_market_count = 2;
int64 max_active_markets = 3;
}
message Market {
int64 id = 1;
string base_symbol = 2;
string quote_symbol = 3;
string exchange_name = 4;
string type = 5;
}
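To make the schema concrete, here is a plain-Python mirror of the Market message and one way a client payload for StartMarket might be shaped. The fields of StartMarketRequest are not shown in these notes, so the payload layout below (exchange plus market(pair, type), as described in the changes above) is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Market:
    """Plain-Python mirror of the Market proto message above."""
    id: int
    base_symbol: str
    quote_symbol: str
    exchange_name: str
    type: str

def start_market_payload(m: Market) -> dict:
    """Shape a Market into the (exchange, market(pair, type)) form
    described in the release notes, e.g. for a JSON-encoding RPC tool.
    The exact field names of StartMarketRequest are an assumption."""
    return {
        "exchange": m.exchange_name,
        "market": {
            "pair": f"{m.base_symbol}-{m.quote_symbol}",
            "type": m.type,
        },
    }
```

The pair string and symbol separator here are illustrative; the server's real request message may split base and quote symbols differently.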
Where next
The main purpose of clustering at the moment is load-balancing, not fault-tolerance. If a node fails, the data feeds on that node are NOT automatically restarted on other nodes. This is obviously a severe limitation; the next release will address it.
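Given the Node message above (active_market_count and max_active_markets), load-balancing can be sketched as picking the node with the most free capacity. The selection policy below is hypothetical: the notes say clustering is used for load-balancing but not how a node is chosen.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Mirror of the Node proto message."""
    name: str
    active_market_count: int
    max_active_markets: int

def pick_node(nodes):
    """Hypothetical policy: choose the least-loaded node by fill
    ratio, skipping nodes that are already at capacity."""
    candidates = [
        n for n in nodes
        if n.active_market_count < n.max_active_markets
    ]
    if not candidates:
        return None
    return min(
        candidates,
        key=lambda n: n.active_market_count / n.max_active_markets,
    )
```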
level4 version 1.0.0
All translation schemes have been tested and are fully working. Level3 data feeds are received, and full price-aggregated orderbooks are maintained in memory. Synchronisation checks have not been implemented.
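Price aggregation means collapsing a level3 feed (individual orders) into total size per price level. A minimal sketch of that bookkeeping, assuming order-id-keyed add/remove events (the event shape and names are illustrative, not the project's code):

```python
from collections import defaultdict

class AggregatedSide:
    """Collapse individual (level3) orders into price levels.

    Illustrative only: the real translation schemes differ per
    exchange; this shows the aggregation step itself.
    """

    def __init__(self):
        self.orders = {}                  # order id -> (price, size)
        self.levels = defaultdict(float)  # price -> total size

    def add(self, order_id, price, size):
        self.orders[order_id] = (price, size)
        self.levels[price] += size

    def remove(self, order_id):
        price, size = self.orders.pop(order_id)
        self.levels[price] -= size
        if self.levels[price] <= 0:
            del self.levels[price]
```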
The best bid and ask price changes are published to a Kafka topic. All messages are written to the same partition, which works for now but is not scalable. Each stream has a numeric market id, which can be used to allocate partitions. Flink will handle this just fine, as long as automatic partition discovery is enabled. In the future, each bid/ask price change will be paired with its level, such that the best bid/ask is level 0, the second-best is level 1, and so on up to some threshold. This will allow for more complex volume-weighted mid-market price calculations.
This version of level4 is not a distributed system: it stores market and exchange metadata in Postgres tables, using Ecto as an abstraction on top of SQL. This is not fault-tolerant and makes deployment more complex. Furthermore, data feeds cannot be spawned and shut down across multiple Erlang nodes; all data feeds reside on one machine. The next step is to make the system more production-ready: moving from Postgres to Mnesia and horizontally scaling data feeds across multiple Erlang nodes. To alleviate high memory usage, orderbooks will be stored on disk using RocksDB.