diff --git a/.github/workflows/espresso-devnet-tests.yaml b/.github/workflows/espresso-devnet-tests.yaml
index f9bbade3a0d..5a611fd6c82 100644
--- a/.github/workflows/espresso-devnet-tests.yaml
+++ b/.github/workflows/espresso-devnet-tests.yaml
@@ -12,7 +12,7 @@ jobs:
strategy:
fail-fast: false
matrix:
- group: [0, 1, 2, 3]
+ group: [0, 1, 2, 3, 4]
include:
- group: 0
tests: "TestChallengeGame|TestChangeBatchInboxOwner"
@@ -26,6 +26,9 @@ jobs:
- group: 3
tests: "TestSmokeWithTEE|TestForcedTransaction"
tee: true
+ - group: 4
+ tests: "TestBatcherActivePublishOnly"
+ tee: false
env:
ESPRESSO_DEVNET_TESTS_LIVENESS_PERIOD: "1m"
ESPRESSO_DEVNET_TESTS_OUTAGE_PERIOD: "1m"
diff --git a/README_ESPRESSO.md b/README_ESPRESSO.md
index 62077d48474..872742f47cf 100644
--- a/README_ESPRESSO.md
+++ b/README_ESPRESSO.md
@@ -1,6 +1,7 @@
# Optimism Espresso Integration
Notes:
+
* For deployment configuration, read `README_ESPRESSO_DEPLOY_CONFIG.md`.
* For code sync with upstreams, read `README_ESPRESSO_CODE_SYNC_PROCEDURE.md`.
@@ -15,7 +16,7 @@ Notes:
### Nix shell
-* Install nix following the instructions at https://nixos.org/download/
+* Install nix following the instructions at <https://nixos.org/download/>
* Enter the nix shell of this project
@@ -23,7 +24,6 @@ Notes:
> nix develop .
```
-
### Configuring Docker
To download the Docker images required by this project, you may need to authenticate using a PAT.
@@ -38,6 +38,7 @@ Provide Docker with the PAT.
```
Run Docker as a non-root user:
+
```console
> sudo groupadd docker
> sudo usermod -aG docker $USER
@@ -70,6 +71,7 @@ To run a subset of the tests above (fast):
```
To run the devnet tests:
+
```console
> just devnet-tests
```
@@ -122,11 +124,13 @@ ESPRESSO_ATTESTATION_VERIFIER_DOCKER_IMAGE= just remove-containers
-
### Guide: Setting Up an Enclave-Enabled Nitro EC2 Instance
This guide explains how to prepare an enclave-enabled parent EC2 instance.
-You can follow the official AWS Enclaves setup guide: https://docs.aws.amazon.com/enclaves/latest/user/getting-started.html.
-
+You can follow the official AWS Enclaves setup guide: <https://docs.aws.amazon.com/enclaves/latest/user/getting-started.html>.
#### Step-by-Step Instructions
@@ -151,21 +153,21 @@ Use the AWS Management Console or AWS CLI to launch a new EC2 instance.
Make sure to:
-- **Enable Enclaves**
- - In the CLI: set the `--enclave-options` flag to `true`
- - In the Console: select `Enabled` under the **Enclave** section
-
-- **Use the following configuration:**
- - **Architecture:** x86_64
- - **AMI:** Amazon Linux 2023
- - **Instance Type:** `m6a.2xlarge`
- - **Volume Size:** 100 GB
+* **Enable Enclaves**
+ * In the CLI: set the `--enclave-options` flag to `true`
+ * In the Console: select `Enabled` under the **Enclave** section
+* **Use the following configuration:**
+ * **Architecture:** x86_64
+ * **AMI:** Amazon Linux 2023
+ * **Instance Type:** `m6a.2xlarge`
+ * **Volume Size:** 100 GB
##### 2. Connect to the Instance
Once the instance is running, connect to it via the AWS Console or CLI.
In practice, you will be provided a `key.pem` file, and you can connect like this:
+
```console
chmod 400 key.pem
ssh -i "key.pem" ec2-user@
@@ -173,23 +175,24 @@ ssh -i "key.pem" ec2-user@
Note that the command above can be found in the AWS Console by selecting the instance and clicking the "Connect" button.
-
##### 3. Install dependencies
* Nix
+
```console
sh <(curl --proto '=https' --tlsv1.2 -L https://nixos.org/nix/install) --daemon
source ~/.bashrc
```
* Git, Docker
+
```console
- sudo yum update
- sudo yum install git
- sudo yum install docker
- sudo usermod -a -G docker ec2-user
- sudo service docker start
- sudo chown ec2-user /var/run/docker.sock
+sudo yum update
+sudo yum install git
+sudo yum install docker
+sudo usermod -a -G docker ec2-user
+sudo service docker start
+sudo chown ec2-user /var/run/docker.sock
```
* Nitro
@@ -204,14 +207,15 @@ sudo systemctl start nitro-enclaves-allocator.service
```
* Clone repository and update submodules
+
```console
git clone https://github.com/EspressoSystems/optimism-espresso-integration.git
cd optimism-espresso-integration
git submodule update --init --recursive
```
-
* Enter the nix shell and run the enclave tests
+
```console
nix --extra-experimental-features "nix-command flakes" develop
just compile-contracts
@@ -229,32 +233,42 @@ just enclave-tools
```
This should create the `op-batcher/bin/enclave-tools` binary. You can run
+
```console
./op-batcher/bin/enclave-tools --help
```
+
to get information on available commands and flags.
##### Building a batcher image
To build a batcher enclave image and tag it with the specified tag:
+
```console
./op-batcher/bin/enclave-tools build --op-root ./ --tag op-batcher-enclave
```
+
+On success, this command outputs the PCR measurements of the enclave image, which can then be registered with the BatchAuthenticator
contract.
##### Running a batcher image
+
To run the enclave image built by the previous command:
+
```console
./op-batcher/bin/enclave-tools run --image op-batcher-enclave --args --argument-1,value-1,--argument-2,value-2
```
+
+Arguments will be forwarded to the op-batcher.
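+
+For example (an illustrative sketch: the flags are taken from the devnet compose file, and the comma-separated `--args` format is inferred from the command above):
+
+```console
+./op-batcher/bin/enclave-tools run --image op-batcher-enclave --args --espresso.enabled,true,--max-channel-duration,2
+```
+
+which is intended to pass `--espresso.enabled true --max-channel-duration 2` through to the op-batcher inside the enclave.
+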
##### Registering a batcher image
+
To register the PCR0 of the batcher enclave image built by the previous command:
+
```console
./op-batcher/bin/enclave-tools register --l1-url example.com:1234 --authenticator 0x123..def --private-key 0x123..def --pcr0 0x123..def
```
+
You will need to provide the L1 URL, the contract address of BatchAuthenticator, the private key of the L1 account used to deploy BatchAuthenticator, and the PCR0 obtained when building the image.
# Local Devnet
@@ -268,11 +282,13 @@ Compose version is `2.37.3` or the Docker Engine version is `27.4.0`, and the Do
you may need to upgrade the version.
* Enter the Nix shell in the repo root.
+
```console
nix develop
```
* Build the op-deployer. This step needs to be re-run if the op-deployer is modified.
+
```console
cd op-deployer
just
@@ -280,35 +296,43 @@ cd ../
```
* Build the contracts. This step needs to be re-run if the contracts are modified.
+
```console
just compile-contracts
```
* Go to the `espresso` directory.
+
```console
cd espresso
```
* Shut down all containers.
+
```console
docker compose down -v --remove-orphans
```
* Prepare OP contract allocations. Nix shell provides dependencies for the script. This step needs to be re-run only when the OP contracts are modified.
+
```console
./scripts/prepare-allocs.sh
```
* Build and start all services in the background.
+
```console
docker compose up --build -d
```
+
If you're on a machine with [AWS Nitro Enclaves enabled](#guide-setting-up-an-enclave-enabled-nitro-ec2-instance), use the `tee` profile instead to start the enclave batcher.
+
```console
COMPOSE_PROFILES=tee docker compose up --build -d
```
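+
+To confirm which services actually started and their health status (a standard Docker Compose command, not specific to this repo):
+
+```console
+docker compose ps
+```
+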
* Run the services and check the log.
+
```console
docker compose logs -f
```
@@ -316,16 +340,19 @@ docker compose logs -f
## Investigate a Service
* Shut down all containers.
+
```console
docker compose down
```
* Build and start the specific service and check the log.
+
```console
docker compose up
```
* If environment variable settings are not picked up, pass the env file explicitly.
+
```console
docker compose --env-file .env up
```
@@ -333,16 +360,19 @@ docker compose --env-file .env up
## Apply a Change
* In most cases, simply remove all containers and run commands as normal.
+
```console
docker compose down
```
* To start the project fresh, remove the containers, volumes, and networks created by this project.
+
```console
docker compose down -v
```
* To start the system fresh, remove all volumes.
+
```console
docker volume prune -a
```
@@ -350,10 +380,13 @@ docker volume prune -a
* If encountering an issue related to outdated deployment files, remove those files before
restarting.
* Go to the scripts directory.
+
```console
cd espresso/scripts
```
+
* Run the script.
+
```console
./cleanup.sh
```
@@ -361,15 +394,14 @@ restarting.
* If you have changed OP contracts, you will have to start the devnet fresh and re-generate
the genesis allocations by running `prepare-allocs.sh`.
-
## Log monitoring
+
For a selection of important metrics to monitor, and the corresponding log lines, see `espresso/docs/metrics.md`.
## Blockscout
Blockscout is a block explorer that reads from the sequencer node. It can be accessed at `http://localhost:3000`.
-
## Continuous Integration environment
### Running enclave tests in EC2
@@ -378,54 +410,61 @@ In order to run the tests for the enclave in EC2 via github actions one must cre
```json
{
- "Version": "2012-10-17",
- "Statement": [
- {
- "Effect": "Allow",
- "Action": [
- "ec2:AuthorizeSecurityGroupIngress",
- "ec2:RunInstances",
- "ec2:DescribeInstances",
- "ec2:TerminateInstances",
- "ec2:DescribeImages",
- "ec2:CreateTags",
- "ec2:DescribeSecurityGroups",
- "ec2:DescribeKeyPairs",
- "ec2:ImportKeyPair",
- "ec2:DescribeInstanceStatus"
- ],
- "Resource": "*"
- }
- ]
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:AuthorizeSecurityGroupIngress",
+ "ec2:RunInstances",
+ "ec2:DescribeInstances",
+ "ec2:TerminateInstances",
+ "ec2:DescribeImages",
+ "ec2:CreateTags",
+ "ec2:DescribeSecurityGroups",
+ "ec2:DescribeKeyPairs",
+ "ec2:ImportKeyPair",
+ "ec2:DescribeInstanceStatus"
+ ],
+ "Resource": "*"
+ }
+ ]
}
```
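+
+One way to create this policy, assuming the JSON above is saved as `policy.json` and the AWS CLI is configured (the policy name here is illustrative):
+
+```console
+aws iam create-policy --policy-name enclave-ci-ec2 --policy-document file://policy.json
+```
+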
Currently, the GitHub workflow in `.github/workflows/enclave.yaml` relies on an AWS AMI with id `ami-0d259f3ae020af5f9` under `arn:aws:iam::324783324287`.
To refresh this AMI:
+
1. Create an AWS EC2 instance with the characteristics described in the *Launch EC2 Instance* step of `.github/workflows/enclave.yaml`.
2. Copy the script `espresso/scripts/enclave-prepare-ami.sh` to the EC2 instance (e.g., using `scp`) and run it.
3. [Create an AMI from the instance](https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/tkv-create-ami-from-instance.html).
-
# Celo Deployment
## Prepare for the Deployment
+
* Go to the scripts directory.
+
```console
cd espresso/scripts
```
## Prebuild Everything and Start All Services
+
Note that `l2-genesis` is expected to take around 2 minutes.
+
```console
./startup.sh
```
+
Or build and start the devnet with AWS Nitro Enclave as the TEE:
+
```console
USE_TEE=true ./startup.sh
```
## View Logs
+
There are 17 services in total, as listed in `logs.sh`. Run the script with a service name to
view its logs, e.g., `./logs.sh op-geth-sequencer`. Note that some service names can be replaced
by a more convenient alias, e.g., `sequencer` instead of `op-node-sequencer`, but it is also supported
@@ -433,6 +472,7 @@ to use their full names.
The following are common commands to view the logs of critical services. Add `-tee` to the batcher
and the proposer services if running with the TEE.
+
```console
./logs.sh dev-node
./logs.sh sequencer
@@ -443,6 +483,7 @@ and the proposer services if running with the TEE.
```
## Shut Down All Services
+
```console
./shutdown.sh
```
@@ -452,6 +493,7 @@ and the proposer services if running with the TEE.
## Repositories
There are three types of repositories:
+
1. Kona implements the OP stack in Rust.
2. Celo-Kona is a wrapper of Kona with Celo specific changes.
3. OP Succinct: uses Kona and in our case also Celo-Kona in order to compute zk proofs for an OP rollup state change which is used in the challenger and proposer services.
@@ -466,19 +508,16 @@ The OP Succinct repository for Espresso generates using Github actions the docke
The table below specifies which branches of these repositories are used.
-
| External | Celo (rep/branch) | Espresso (rep/branch)|
| :-------: | :----: | :------:|
| [kona](https://github.com/op-rs/kona) | [Celo/kona](https://github.com/celo-org/kona)/[palango/kona-1.1.7-celo](https://github.com/celo-org/kona/tree/palango/kona-1.1.7-celo) | [Espresso/kona-celo-fork](https://github.com/EspressoSystems/kona-celo-fork)/[espresso-integration](https://github.com/EspressoSystems/kona-celo-fork/tree/espresso-integration) |
| | [Celo/celo-kona](https://github.com/celo-org/celo-kona)/[main](https://github.com/celo-org/celo-kona/tree/main) | [Espresso/celo-kona](https://github.com/EspressoSystems/celo-kona)/[espresso-integration](https://github.com/EspressoSystems/celo-kona/tree/espresso-integration) |
| [op-succinct](https://github.com/succinctlabs/op-succinct) | [Celo/op-succinct](https://github.com/celo-org/op-succinct)/[develop](https://github.com/celo-org/op-succinct/tree/develop) | [Espresso/op-succinct](https://github.com/EspressoSystems/op-succinct)/[espresso-integration](https://github.com/EspressoSystems/op-succinct/tree/espresso-integration)|
-
-## Making a change to the derivation pipeline and propagating it to the relevant repositories.
+## Making a change to the derivation pipeline and propagating it to the relevant repositories
In our setting, changes to the derivation pipeline are made in the [kona](https://github.com/EspressoSystems/kona/tree/espresso-integration-v1.1.7) repository. These changes then need to be propagated to the [celo-kona](https://github.com/EspressoSystems/celo-kona) and [op-succinct](https://github.com/EspressoSystems/op-succinct) repositories, the docker images for the challenger and proposer regenerated, and those images used in [optimism-espresso-integration](https://github.com/EspressoSystems/optimism-espresso-integration), as follows.
-
1. Merge your PR into [kona-celo-fork](https://github.com/EspressoSystems/kona-celo-fork/tree/espresso-integration). This PR contains some changes to the derivation pipeline. E.g.: [bfabb62](https://github.com/EspressoSystems/kona-celo-fork/commit/bfabb62754bc53317ecb93442bb09d347cd6aad9).
1. Create a PR against [celo-kona](https://github.com/EspressoSystems/celo-kona/tree/espresso-integration). This PR will edit the `Cargo.toml` file to reference the updated kona version, e.g: [a94b317](https://github.com/EspressoSystems/celo-kona/commit/a94b3172b1248a7cd650d692226c9d17b832eec9).
@@ -486,19 +525,18 @@ In our setting changes to the derivation pipeline are made in the [kona](https:/
1. Create a PR in [op-succinct](https://github.com/EspressoSystems/op-succinct) and merge it into the branch [espresso-integration](https://github.com/EspressoSystems/op-succinct/tree/espresso-integration). This PR will edit the `Cargo.toml` file to reference the updated kona and celo-kona version, e.g: [41780a3](https://github.com/EspressoSystems/op-succinct/pull/3/commits/41780a339bb1e177281957fcfe0383dfa41eff15).
1. After running CI, check for new images of the succinct proposer and challenger services at
- * [containers/op-succinct-lite-proposer-celo](https://github.com/espressosystems/op-succinct/pkgs/container/op-succinct%2Fop-succinct-lite-proposer-celo)
- * [containers/op-succinct-lite-challenger-celo](https://github.com/espressosystems/op-succinct/pkgs/container/op-succinct%2Fop-succinct-lite-challenger-celo)
-* These images should be updated in the [docker-compose.yml](https://github.com/EspressoSystems/optimism-espresso-integration/blob/b73ee83611418cd6ce3aa2d27e00881d9df7e012/espresso/docker-compose.yml) file when new versions are available. See for example [bd90858](https://github.com/EspressoSystems/optimism-espresso-integration/pull/293/commits/bd90858b0f871441785d4ac6437ff78b76d4b1f8).
+* [containers/op-succinct-lite-proposer-celo](https://github.com/espressosystems/op-succinct/pkgs/container/op-succinct%2Fop-succinct-lite-proposer-celo)
+* [containers/op-succinct-lite-challenger-celo](https://github.com/espressosystems/op-succinct/pkgs/container/op-succinct%2Fop-succinct-lite-challenger-celo)
+* These images should be updated in the [docker-compose.yml](https://github.com/EspressoSystems/optimism-espresso-integration/blob/b73ee83611418cd6ce3aa2d27e00881d9df7e012/espresso/docker-compose.yml) file when new versions are available. See for example [bd90858](https://github.com/EspressoSystems/optimism-espresso-integration/pull/293/commits/bd90858b0f871441785d4ac6437ff78b76d4b1f8).
Note that we periodically need to merge upstream changes into the `kona`, `celo-kona`, and `op-succinct` repositories to keep our integration branches up to date. This ensures that our custom modifications don't drift too far from the upstream codebase and that we can easily incorporate bug fixes and new features from the upstream projects.
-
# Testnet Migration
We are working on a set of scripts to handle the migration from a Celo Testnet to a version integrated with Espresso.
Some relevant documents:
+
* [Documentation of configuration parameters](docs/README_ESPRESSO_DEPLOY_CONFIG.md)
* [Celo Testnet Migration Guide](docs/CELO_TESTNET_MIGRATION.md) (WIP)
-
diff --git a/espresso/devnet-tests/batcher_active_publish_test.go b/espresso/devnet-tests/batcher_active_publish_test.go
new file mode 100644
index 00000000000..73b6d4cc4ff
--- /dev/null
+++ b/espresso/devnet-tests/batcher_active_publish_test.go
@@ -0,0 +1,140 @@
+package devnet_tests
+
+import (
+ "context"
+ "fmt"
+ "math/big"
+ "testing"
+ "time"
+
+ "github.com/ethereum-optimism/optimism/op-batcher/bindings"
+ "github.com/ethereum-optimism/optimism/op-e2e/e2eutils/wait"
+ "github.com/ethereum/go-ethereum/accounts/abi/bind"
+ "github.com/ethereum/go-ethereum/common"
+ "github.com/ethereum/go-ethereum/core/types"
+ "github.com/ethereum/go-ethereum/ethclient"
+ "github.com/stretchr/testify/require"
+)
+
+// hasBatchTransactions checks if any transactions were sent to the BatchInbox from the given sender.
+func hasBatchTransactions(ctx context.Context, client *ethclient.Client, batchInboxAddr, senderAddr common.Address, startBlock, endBlock uint64) (bool, error) {
+ for i := startBlock; i <= endBlock; i++ {
+ timeoutCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
+ block, err := client.BlockByNumber(timeoutCtx, new(big.Int).SetUint64(i))
+ cancel()
+ if err != nil {
+ return false, fmt.Errorf("failed to get block %d: %w", i, err)
+ }
+
+ for _, tx := range block.Transactions() {
+ if tx.To() != nil && *tx.To() == batchInboxAddr {
+ signer := types.LatestSignerForChainID(tx.ChainId())
+ sender, err := types.Sender(signer, tx)
+ if err != nil {
+ continue
+ }
+ if sender == senderAddr {
+ return true, nil
+ }
+ }
+ }
+ }
+
+ return false, nil
+}
+
+// TestBatcherActivePublishOnly tests that only the active batcher publishes to L1.
+func TestBatcherActivePublishOnly(t *testing.T) {
+ ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
+ defer cancel()
+
+ // Initialize devnet with NON_TEE profile (starts both batchers)
+ d := NewDevnet(ctx, t)
+ require.NoError(t, d.Up(NON_TEE))
+ defer func() {
+ require.NoError(t, d.Down())
+ }()
+
+ // Send initial transaction to verify everything has started up ok
+ require.NoError(t, d.RunSimpleL2Burn())
+ config, err := d.RollupConfig(ctx)
+ require.NoError(t, err)
+
+ l1ChainID, err := d.L1.ChainID(ctx)
+ require.NoError(t, err)
+
+ deployerOpts, err := bind.NewKeyedTransactorWithChainID(d.secrets.Deployer, l1ChainID)
+ require.NoError(t, err)
+
+ batchAuthenticator, err := bindings.NewBatchAuthenticator(config.BatchAuthenticatorAddress, d.L1)
+ require.NoError(t, err)
+
+ teeBatcherAddr, err := batchAuthenticator.TeeBatcher(&bind.CallOpts{})
+ require.NoError(t, err)
+ nonTeeBatcherAddr, err := batchAuthenticator.NonTeeBatcher(&bind.CallOpts{})
+ require.NoError(t, err)
+
+ activeIsTee, err := batchAuthenticator.ActiveIsTee(&bind.CallOpts{})
+ require.NoError(t, err)
+ t.Logf("Initial state: activeIsTee = %v", activeIsTee)
+
+ // verifyPublishing helper function
+ verifyPublishing := func(expectTeeActive bool) {
+ t.Logf("Verifying publishing for state: expectTeeActive=%v", expectTeeActive)
+
+ startBlock, err := d.L1.BlockNumber(ctx)
+ require.NoError(t, err)
+ t.Logf("Starting from block %d", startBlock)
+
+ // Generate L2 traffic
+ burnReceipt, err := d.SubmitSimpleL2Burn()
+ require.NoError(t, err)
+ t.Logf("Generated L2 transaction: %s (L2 block %d)", burnReceipt.Receipt.TxHash, burnReceipt.Receipt.BlockNumber)
+
+ // Wait for batcher to publish
+		// We wait long enough for the active batcher to publish, but not so long that we time out the test.
+		// The idle-batcher check inside the driver should prevent it from publishing.
+ time.Sleep(60 * time.Second)
+ t.Logf("Waited 60s for L1 confirmation")
+
+ endBlock, err := d.L1.BlockNumber(ctx)
+ require.NoError(t, err)
+ t.Logf("Checking blocks %d-%d", startBlock, endBlock)
+
+ teePublished, err := hasBatchTransactions(ctx, d.L1, config.BatchInboxAddress, teeBatcherAddr, startBlock, endBlock)
+ require.NoError(t, err)
+ nonTeePublished, err := hasBatchTransactions(ctx, d.L1, config.BatchInboxAddress, nonTeeBatcherAddr, startBlock, endBlock)
+ require.NoError(t, err)
+
+ t.Logf("TEE batcher published: %v, non-TEE batcher published: %v", teePublished, nonTeePublished)
+
+ if expectTeeActive {
+ require.True(t, teePublished, "TEE batcher should publish when active")
+ require.False(t, nonTeePublished, "non-TEE batcher should NOT publish when inactive")
+ } else {
+ require.True(t, nonTeePublished, "non-TEE batcher should publish when active")
+ require.False(t, teePublished, "TEE batcher should NOT publish when inactive")
+ }
+ }
+
+ // 1. Verify initial state
+ verifyPublishing(activeIsTee)
+
+ // 2. Switch state
+ t.Logf("Switching batcher state...")
+ switchTx, err := batchAuthenticator.SwitchBatcher(deployerOpts)
+ require.NoError(t, err)
+ receipt, err := wait.ForReceiptOK(ctx, d.L1, switchTx.Hash())
+ require.NoError(t, err)
+ require.Equal(t, types.ReceiptStatusSuccessful, receipt.Status)
+
+ // Update expected state
+ activeIsTee = !activeIsTee
+ t.Logf("Switched state to: activeIsTee=%v", activeIsTee)
+
+ // Wait for services to stabilize after switch (key for the batcher loop to pick up the change)
+ time.Sleep(10 * time.Second)
+
+ // 3. Verify new state
+ verifyPublishing(activeIsTee)
+}
diff --git a/espresso/devnet-tests/devnet_tools.go b/espresso/devnet-tests/devnet_tools.go
index 6fe44c3e521..e4428e191e3 100644
--- a/espresso/devnet-tests/devnet_tools.go
+++ b/espresso/devnet-tests/devnet_tools.go
@@ -349,7 +349,7 @@ func (d *Devnet) SubmitL2Tx(applyTxOpts helpers.TxOptsFn) (*types.Receipt, error
// Waits for a previously submitted transaction to be confirmed by the verifier.
func (d *Devnet) VerifyL2Tx(receipt *types.Receipt) error {
// Use longer timeout in CI environments due to Espresso processing delays
- timeout := 2 * time.Minute
+ timeout := 5 * time.Minute
// Check if running in CI environment
if os.Getenv("CI") != "" || os.Getenv("GITHUB_ACTIONS") != "" {
diff --git a/espresso/docker-compose.yml b/espresso/docker-compose.yml
index 99914a4b744..cf0cfec7446 100644
--- a/espresso/docker-compose.yml
+++ b/espresso/docker-compose.yml
@@ -104,7 +104,7 @@ services:
l1-genesis:
condition: service_completed_successfully
healthcheck:
- test: ["CMD", "curl", "-f", "http://localhost:${L1_HTTP_PORT}"]
+ test: [ "CMD", "curl", "-f", "http://localhost:${L1_HTTP_PORT}" ]
interval: 3s
timeout: 2s
retries: 40
@@ -199,7 +199,7 @@ services:
dockerfile: espresso/docker/op-stack/Dockerfile
target: op-node-target
healthcheck:
- test: ["CMD", "curl", "-f", "http://localhost:${ROLLUP_PORT}"]
+ test: [ "CMD", "curl", "-f", "http://localhost:${ROLLUP_PORT}" ]
interval: 3s
timeout: 2s
retries: 40
@@ -249,7 +249,7 @@ services:
dockerfile: espresso/docker/op-stack/Dockerfile
target: op-node-target
healthcheck:
- test: ["CMD", "curl", "-f", "http://localhost:${VERIFIER_PORT}"]
+ test: [ "CMD", "curl", "-f", "http://localhost:${VERIFIER_PORT}" ]
interval: 3s
timeout: 2s
retries: 40
@@ -295,7 +295,7 @@ services:
dockerfile: espresso/docker/op-stack/Dockerfile
target: op-node-target
healthcheck:
- test: ["CMD", "curl", "-f", "http://localhost:${CAFF_PORT}"]
+ test: [ "CMD", "curl", "-f", "http://localhost:${CAFF_PORT}" ]
interval: 3s
timeout: 2s
retries: 40
@@ -339,7 +339,7 @@ services:
restart: "no"
op-batcher:
- profiles: ["default"]
+ profiles: [ "default" ]
build:
context: ../
dockerfile: espresso/docker/op-stack/Dockerfile
@@ -394,7 +394,7 @@ services:
- --rpc.enable-admin
op-batcher-fallback:
- profiles: ["default"]
+ profiles: [ "default" ]
build:
context: ../
dockerfile: espresso/docker/op-stack/Dockerfile
@@ -416,14 +416,12 @@ services:
OP_BATCHER_ROLLUP_RPC: http://op-node-sequencer:${ROLLUP_PORT}
OP_BATCHER_MAX_CHANNEL_DURATION: ${MAX_CHANNEL_DURATION:-32}
OP_BATCHER_MAX_PENDING_TX: ${MAX_PENDING_TX:-32}
- OP_BATCHER_STOPPED: "true"
volumes:
- ../packages/contracts-bedrock/lib/superchain-registry/ops/testdata/monorepo:/config
command:
- op-batcher
- --espresso.enabled=false
- --private-key=7c852118294e51e653712a81e05800f419141751be58f605c371e15141b007a6
- - --stopped=true
- --throttle-threshold=0
- --max-channel-duration=2
- --target-num-frames=1
@@ -432,18 +430,14 @@ services:
- --rpc.enable-admin
op-batcher-tee:
- profiles: ["tee"]
+ profiles: [ "tee" ]
build:
context: ../
dockerfile: espresso/docker/op-stack/Dockerfile
target: op-batcher-enclave-target
image: op-batcher-tee:espresso
healthcheck:
- test:
- [
- "CMD-SHELL",
- "test -f /tmp/enclave-tools.pid && kill -0 $(cat /tmp/enclave-tools.pid) 2>/dev/null || exit 1",
- ]
+ test: [ "CMD-SHELL", "test -f /tmp/enclave-tools.pid && kill -0 $(cat /tmp/enclave-tools.pid) 2>/dev/null || exit 1" ]
interval: 30s
timeout: 10s
retries: 3
@@ -498,7 +492,7 @@ services:
# Legacy op-proposer (for non-succinct mode)
op-proposer:
- profiles: ["legacy"]
+ profiles: [ "legacy" ]
build:
context: ../
dockerfile: espresso/docker/op-stack/Dockerfile
@@ -558,7 +552,7 @@ services:
restart: unless-stopped
op-proposer-tee:
- profiles: ["tee"]
+ profiles: [ "tee" ]
build:
context: ../
dockerfile: espresso/docker/op-stack/Dockerfile
@@ -586,7 +580,7 @@ services:
# Legacy op-challenger (for non-succinct mode)
op-challenger:
- profiles: ["legacy"]
+ profiles: [ "legacy" ]
build:
context: ../
dockerfile: espresso/docker/op-stack/Dockerfile
@@ -670,7 +664,7 @@ services:
# PORT configuration
PORT: "3100"
healthcheck:
- test: ["CMD-SHELL", "nc -z localhost 3100 || exit 1"]
+ test: [ "CMD-SHELL", "nc -z localhost 3100 || exit 1" ]
interval: 10s
timeout: 5s
retries: 5
@@ -683,11 +677,7 @@ services:
ports:
- "${ESPRESSO_ATTESTATION_VERIFIER_PORT}:${ESPRESSO_ATTESTATION_VERIFIER_PORT}"
healthcheck:
- test:
- [
- "CMD-SHELL",
- "timeout 2 bash -c 'cat < /dev/null > /dev/tcp/localhost/${ESPRESSO_ATTESTATION_VERIFIER_PORT}' || exit 1",
- ]
+ test: [ "CMD-SHELL", "timeout 2 bash -c 'cat < /dev/null > /dev/tcp/localhost/${ESPRESSO_ATTESTATION_VERIFIER_PORT}' || exit 1" ]
interval: 5s
timeout: 3s
retries: 30
@@ -731,7 +721,7 @@ services:
ESPRESSO_SEQUENCER_ETH_MNEMONIC: "giant issue aisle success illegal bike spike question tent bar rely arctic volcano long crawl hungry vocal artwork sniff fantasy very lucky have athlete"
blockscout-db:
- profiles: ["default"]
+ profiles: [ "default" ]
image: postgres:14
restart: on-failure
environment:
@@ -742,7 +732,7 @@ services:
- blockscout-db-data:/var/lib/postgresql/data
blockscout:
- profiles: ["default"]
+ profiles: [ "default" ]
image: ghcr.io/blockscout/blockscout@sha256:7659f168e4e2f6b73dd559ae5278fe96ba67bc2905ea01b57a814c68adf5a9dc
restart: always
depends_on:
@@ -768,7 +758,7 @@ services:
MIX_ENV: "prod"
blockscout-frontend:
- profiles: ["default"]
+ profiles: [ "default" ]
image: ghcr.io/blockscout/frontend@sha256:4b69f44148414b55c6b8550bc3270c63c9f99e923d54ef0b307e762af6bac90a
restart: always
depends_on:
diff --git a/espresso/scripts/prepare-allocs.sh b/espresso/scripts/prepare-allocs.sh
index 887cb777ac9..891ad04cc11 100755
--- a/espresso/scripts/prepare-allocs.sh
+++ b/espresso/scripts/prepare-allocs.sh
@@ -37,8 +37,8 @@ trap cleanup EXIT
# Give anvil a moment to start up
sleep 1
-cast rpc anvil_setBalance "${OPERATOR_ADDRESS}" 0x100000000000000000000000000000000000
-cast rpc anvil_setBalance "${PROPOSER_ADDRESS}" 0x100000000000000000000000000000000000
+cast rpc anvil_setBalance "${OPERATOR_ADDRESS}" 0x100000000000000000000000000000000000 --rpc-url "${ANVIL_URL}"
+cast rpc anvil_setBalance "${PROPOSER_ADDRESS}" 0x100000000000000000000000000000000000 --rpc-url "${ANVIL_URL}"
op-deployer bootstrap proxy \
--l1-rpc-url="${ANVIL_URL}" \
diff --git a/justfile b/justfile
index 631d4a8ca9f..6a9712179e0 100644
--- a/justfile
+++ b/justfile
@@ -29,7 +29,10 @@ devnet-withdraw-test: build-devnet
devnet-batcher-switching-test: build-devnet
U_ID={{uid}} GID={{gid}} go test -timeout 30m -p 1 -count 1 -v -run TestBatcherSwitching ./espresso/devnet-tests/...
-build-devnet: compile-contracts
+devnet-batcher-active-publish-only-test: build-devnet
+ U_ID={{uid}} GID={{gid}} go test -timeout 30m -p 1 -count 1 -v -run TestBatcherActivePublishOnly ./espresso/devnet-tests/...
+
+build-devnet: stop-containers compile-contracts
rm -Rf espresso/deployment
(cd op-deployer && just)
(cd espresso && ./scripts/prepare-allocs.sh && docker compose build)