A modular, scalable, and extensible indexing service that listens for events on Ethereum and L2 chains such as Soneium and Minato, with room to expand to other blockchain events in the future. For now, the indexer listens for Account Abstraction (AA) related events.
We use alloy-rpc-client and alloy-provider to connect to the blockchain, sending JSON-RPC requests directly to blockchain nodes (Alchemy, Infura, or self-hosted RPCs).
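Under the hood, a log query is a single JSON-RPC call. The sketch below builds an `eth_getLogs` request body by hand to show what the provider sends over the wire; the address and topic values are placeholders, and in the indexer itself alloy constructs and dispatches this request for us.

```rust
/// Build the JSON-RPC body for an eth_getLogs call (illustrative only;
/// alloy's provider does this internally).
fn get_logs_request(address: &str, topic0: &str, from_block: u64, to_block: u64) -> String {
    format!(
        concat!(
            "{{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"eth_getLogs\",",
            "\"params\":[{{\"address\":\"{}\",\"topics\":[\"{}\"],",
            "\"fromBlock\":\"{:#x}\",\"toBlock\":\"{:#x}\"}}]}}"
        ),
        address, topic0, from_block, to_block
    )
}

fn main() {
    // Placeholder address and topic hash, not real contract values.
    let body = get_logs_request(
        "0x0000000000000000000000000000000000000001",
        "0x0000000000000000000000000000000000000000000000000000000000000001",
        100,
        110,
    );
    println!("{body}");
}
```

Block numbers are hex-encoded per the JSON-RPC spec, so blocks 100–110 are sent as `0x64`–`0x6e`.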
We manually define event signatures and contract addresses in config.toml: the contract address to listen to, the event signatures (topics), and the from/to block range (the indexer polls the latest N blocks).
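A chain entry in config.toml might look like the following. This is an illustrative shape, not the exact schema; the addresses, URLs, and topic hash are placeholders.

```toml
[[chains]]
name = "minato"
rpc_url = "https://rpc.example"   # placeholder endpoint
polling_blocks = 10               # poll the latest N blocks

[[chains.contracts]]
# Placeholder contract address to watch
address = "0x0000000000000000000000000000000000000001"
events = [
  # topic0 = keccak256 of the event signature; placeholder value
  "0x0000000000000000000000000000000000000000000000000000000000000001",
]
```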
The indexer repeatedly queries eth_getLogs every `polling_blocks * block_time` seconds, fetching logs within the configured block range. This is a direct approach rather than indexing the full blockchain state as The Graph does.
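The windowing logic of that loop can be sketched as follows. `LogSource` stands in for the alloy provider (the real indexer issues eth_getLogs over RPC), and the mock backend and field names are illustrative.

```rust
/// A log entry as returned by eth_getLogs (fields trimmed for the sketch).
#[derive(Debug, Clone)]
struct Log {
    block_number: u64,
}

/// Stand-in for the RPC provider; the real indexer calls eth_getLogs via alloy.
trait LogSource {
    fn latest_block(&self) -> u64;
    fn get_logs(&self, from: u64, to: u64) -> Vec<Log>;
}

/// One polling iteration: fetch logs for the window of new blocks and
/// advance the cursor so no block is scanned twice.
fn poll_once(src: &dyn LogSource, last_processed: &mut u64) -> Vec<Log> {
    let latest = src.latest_block();
    if latest <= *last_processed {
        return Vec::new(); // no new blocks yet
    }
    let from = *last_processed + 1;
    let logs = src.get_logs(from, latest);
    *last_processed = latest;
    logs
}

/// Mock source returning one log per block, so the sketch runs offline.
struct Mock;
impl LogSource for Mock {
    fn latest_block(&self) -> u64 {
        110
    }
    fn get_logs(&self, from: u64, to: u64) -> Vec<Log> {
        (from..=to).map(|b| Log { block_number: b }).collect()
    }
}

fn main() {
    let mut cursor = 100;
    let logs = poll_once(&Mock, &mut cursor);
    println!("fetched {} logs, cursor now {}", logs.len(), cursor);
}
```

Keeping the cursor at the last processed block (rather than re-scanning a fixed tail) is what makes the polling idempotent across iterations.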
When logs are received, they are decoded using alloy_sol_types::SolEvent and forwarded to the configured storage backends.
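Decoding itself goes through `SolEvent::decode_log` in the indexer; the routing step in front of it, matching a log's topic0 to the right decoder, can be sketched without alloy as below. The topic hash and event name are placeholders, not real signatures.

```rust
use std::collections::HashMap;

/// Minimal raw log: topic0 identifies the event type.
struct RawLog {
    topic0: String,
    data: Vec<u8>,
}

/// Placeholder decoder; in the real indexer this wraps SolEvent::decode_log
/// for a concrete event type generated by the sol! macro.
fn decode_userop_like(log: &RawLog) -> String {
    format!("UserOpLike({} bytes)", log.data.len())
}

/// Route a log to the decoder registered for its topic0, if any.
fn dispatch(
    decoders: &HashMap<String, fn(&RawLog) -> String>,
    log: &RawLog,
) -> Option<String> {
    decoders.get(&log.topic0).map(|decode| decode(log))
}

fn main() {
    let mut decoders: HashMap<String, fn(&RawLog) -> String> = HashMap::new();
    // Placeholder topic hash; the real value is keccak256 of the event signature.
    decoders.insert("0xaaaa".into(), decode_userop_like);

    let log = RawLog { topic0: "0xaaaa".into(), data: vec![0; 32] };
    println!("{:?}", dispatch(&decoders, &log));
}
```

Logs whose topic0 is not registered simply fall through as `None`, which is how irrelevant events get filtered out.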
- Indexer Core - handles blockchain event streaming and processing.
- Storage Layer - stores indexed data (Redis, or a message queue such as Kafka/NATS/RabbitMQ).
- Configuration Layer - manages environment variables and chain-specific configuration.
Uses Alloy (an alternative to ethers-rs) to listen for Paymaster contract events. It can connect to multiple RPC endpooints for redundancy, processes logs and filters relevant events, and supports Soneium, Minato, and other L2 chains. Example flow:
1. Subscribe to new block headers.
2. Retrieve logs for the Paymaster/EntryPoint contracts, or any other configured contract events.
3. Decode the logs and send them to the processing queue.
Normalizes data from different chains, validates and transforms event data, and pushes it to the storage layer (Redis or a message queue). Key features:
- Supports batch processing for high throughput.
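Batch processing can be sketched as a simple buffered sink; the batch size and names below are illustrative, and the `flushed` vector stands in for a write to the storage layer.

```rust
/// Buffers events and flushes them in batches, which cuts per-write
/// overhead when pushing to Redis or a message queue.
struct Batcher {
    buf: Vec<String>,
    batch_size: usize,
    flushed: Vec<Vec<String>>, // stands in for the storage layer
}

impl Batcher {
    fn new(batch_size: usize) -> Self {
        Batcher { buf: Vec::new(), batch_size, flushed: Vec::new() }
    }

    /// Queue one event; flush automatically when the batch is full.
    fn push(&mut self, event: String) {
        self.buf.push(event);
        if self.buf.len() >= self.batch_size {
            self.flush();
        }
    }

    /// Drain whatever is buffered, e.g. on shutdown or a timer tick.
    fn flush(&mut self) {
        if !self.buf.is_empty() {
            self.flushed.push(std::mem::take(&mut self.buf));
        }
    }
}

fn main() {
    let mut b = Batcher::new(3);
    for i in 0..7 {
        b.push(format!("event-{i}"));
    }
    b.flush(); // drain the remainder
    let sizes: Vec<usize> = b.flushed.iter().map(Vec::len).collect();
    println!("{} batches, sizes {:?}", b.flushed.len(), sizes);
}
```

A production version would also flush on a time interval so a slow trickle of events is not held indefinitely.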
The indexed data needs to be stored efficiently. We will support multiple backends:
- Redis - fast lookups and caching.
- Kafka/NATS - streaming for real-time processing.
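Supporting multiple backends suggests putting them behind a common trait, with one implementation per backend. A minimal sketch, where an in-memory store stands in for the Redis and Kafka/NATS clients so it runs without external services:

```rust
/// Common interface for storage backends; the indexer would provide
/// Redis and Kafka/NATS implementations behind this trait.
trait EventStore {
    fn store(&mut self, key: &str, event: &str);
}

/// In-memory stand-in used here instead of a real Redis/Kafka client.
struct MemoryStore {
    entries: Vec<(String, String)>,
}

impl EventStore for MemoryStore {
    fn store(&mut self, key: &str, event: &str) {
        self.entries.push((key.to_string(), event.to_string()));
    }
}

/// The processing pipeline only sees the trait, so backends are swappable.
fn index_event<S: EventStore>(store: &mut S, key: &str, event: &str) {
    store.store(key, event);
}

fn main() {
    let mut mem = MemoryStore { entries: Vec::new() };
    // Hypothetical key layout: chain, contract address, block.
    index_event(&mut mem, "minato:0x01:100", "{\"event\":\"a\"}");
    index_event(&mut mem, "minato:0x01:101", "{\"event\":\"b\"}");
    println!("stored {} events", mem.entries.len());
}
```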
Uses .env and config.toml to manage RPC URLs, contracts, events, and storage settings. Supports multiple chains, each with its own contract addresses and event signatures.

- Create a .env file similar to .env.examples.
- Start Redis and Kafka locally (e.g., via Docker images or `brew services start`).
- Optionally, update or add the contracts, event signatures, and chain configuration you need in config/config.toml.
- Run `RUST_LOG=debug cargo run` in a terminal.