
Optimize EIP-196 AltBn128 EcAdd #301

Merged
siladu merged 16 commits into besu-eth:main from siladu:improve-ecadd-perf on Dec 14, 2025

Conversation

Contributor

@siladu commented Oct 31, 2025

Changes

Core optimizations by @ivokub

  1. Optimized Memory Management

Selective Pooling Strategy:

  • bigIntPool - Reuses big.Int allocations for scalar operations (heap-allocated, expensive to create)
  • G1/G2 points and small byte arrays now use stack allocation (cheaper than pooling overhead)
  2. Simplified Error Handling
  • Before: Functions returned error strings passed through buffers between Go and Java
  • After: Functions return integer error codes (0 for success, 1-6 for various errors)
  • Removes overhead of string allocation and copying across JNI boundary
  3. Streamlined JNI Interface

Changes function signatures from:

func eip196altbn128G1Add(input, output, errorBuf *C.char, inputLen C.int, outputLen, errorLen *C.int) C.int

To:

func eip196altbn128G1Add(input, output *C.char, inputLen C.int) errorCode

Safety improvement: Added Java-side buffer size validation to prevent JVM crashes from undersized output buffers.

  4. Optimized Field Validation
  • Removes manual field checking (checkInFieldEIP196 function)
  • Uses SetBytesCanonical() which performs validation internally
  • Eliminates redundant modulus comparisons
  5. Direct Encoding
  • g1AffineEncode now works directly with point objects using RawBytes()
  • Eliminates intermediate Marshal() allocations
  6. Reduced Buffer Sizes
  • Result buffer size reduced from 128 to 64 bytes (only needs to hold one G1 point)
  • Removes 256-byte error buffer entirely
  7. Enhanced Test Coverage

Added 32 new tests covering:

  • Concurrent operations (validates thread-safety of pool implementations)
  • Edge cases (partial inputs, buffer truncation, zero-padding)
  • Specific error code validation
  • Output buffer safety (exact size, oversized, undersized)

Results

Before this PR

                               |  Actual cost | Derived Cost |  Iteration time |      Throughput
EcAdd                          |      150 gas |      162 gas |      1,618.3 ns |      98.35 ±1.62 MGps
EcAddMarius                    |      150 gas |      417 gas |      4,173.7 ns |      36.87 ±0.36 MGps
EcAddAmez1                     |      150 gas |      402 gas |      4,019.2 ns |      37.66 ±0.29 MGps
EcAddAmez2                     |      150 gas |      406 gas |      4,055.4 ns |      37.66 ±0.33 MGps
EcAddAmez3                     |      150 gas |      407 gas |      4,074.4 ns |      37.81 ±0.38 MGps
EcAddCase0                     |      150 gas |      419 gas |      4,185.6 ns |      37.71 ±0.37 MGps
EcAddCase1                     |      150 gas |      404 gas |      4,040.7 ns |      37.69 ±0.33 MGps
...
EcAddCase100                   |      150 gas |      158 gas |      1,584.8 ns |     101.14 ±1.51 MGps
EcAddCase106                   |      150 gas |      141 gas |      1,412.5 ns |     110.12 ±1.73 MGps
mul1                           |    6,000 gas |    5,191 gas |     51,909.7 ns |     115.89 ±0.56 MGps
mul2                           |    6,000 gas |    5,148 gas |     51,479.8 ns |     117.04 ±0.67 MGps
2 pairings                     |   79,000 gas |   44,288 gas |    442,882.6 ns |     178.45 ±0.37 MGps
4 pairings                     |  113,000 gas |   62,853 gas |    628,526.7 ns |     179.86 ±0.35 MGps
6 pairings                     |  147,000 gas |   81,424 gas |    814,240.0 ns |     180.58 ±0.28 MGps

This PR

                               |  Actual cost | Derived Cost |  Iteration time |      Throughput
EcAdd                          |      150 gas |       65 gas |        649.8 ns |     234.45 ±1.80 MGps
EcAddMarius                    |      150 gas |      313 gas |      3,133.5 ns |      48.87 ±0.39 MGps
EcAddAmez1                     |      150 gas |      305 gas |      3,053.7 ns |      49.77 ±0.33 MGps
EcAddAmez2                     |      150 gas |      305 gas |      3,046.6 ns |      49.92 ±0.34 MGps
EcAddAmez3                     |      150 gas |      308 gas |      3,081.2 ns |      49.59 ±0.36 MGps
EcAddCase0                     |      150 gas |      302 gas |      3,024.6 ns |      50.55 ±0.34 MGps
EcAddCase1                     |      150 gas |      294 gas |      2,942.1 ns |      51.22 ±0.24 MGps
...
EcAddCase100                   |      150 gas |       62 gas |        618.4 ns |     250.46 ±2.18 MGps
EcAddCase106                   |      150 gas |       56 gas |        561.4 ns |     274.32 ±2.25 MGps
mul1                           |    6,000 gas |    4,800 gas |     48,001.8 ns |     125.51 ±0.70 MGps
mul2                           |    6,000 gas |    4,724 gas |     47,235.8 ns |     127.48 ±0.67 MGps
2 pairings                     |   79,000 gas |   44,067 gas |    440,668.6 ns |     179.62 ±0.70 MGps
4 pairings                     |  113,000 gas |   62,492 gas |    624,919.0 ns |     181.17 ±0.69 MGps
6 pairings                     |  147,000 gas |   80,948 gas |    809,483.1 ns |     181.91 ±0.65 MGps

Performance Improvements Summary

  | Operation    | Before (ns) | After (ns) | Speedup  | Throughput Gain |
  |--------------|-------------|------------|----------|-----------------|
  | EcAdd        | 1,618.3     | 649.8      | 2.49x 🚀 | +138%           |
  | EcAddMarius  | 4,173.7     | 3,133.5    | 1.33x    | +33%            |
  | EcAddAmez1   | 4,019.2     | 3,053.7    | 1.32x    | +32%            |
  | EcAddAmez2   | 4,055.4     | 3,046.6    | 1.33x    | +33%            |
  | EcAddAmez3   | 4,074.4     | 3,081.2    | 1.32x    | +31%            |
  | EcAddCase0   | 4,185.6     | 3,024.6    | 1.38x    | +34%            |
  | EcAddCase1   | 4,040.7     | 2,942.1    | 1.37x    | +36%            |
  | EcAddCase100 | 1,584.8     | 618.4      | 2.56x 🚀 | +148%           |
  | EcAddCase106 | 1,412.5     | 561.4      | 2.52x 🚀 | +149%           |
  | mul1         | 51,909.7    | 48,001.8   | 1.08x    | +8%             |
  | mul2         | 51,479.8    | 47,235.8   | 1.09x    | +9%             |
  | 2 pairings   | 442,882.6   | 440,668.6  | 1.01x    | +0.6%           |
  | 4 pairings   | 628,526.7   | 624,919.0  | 1.01x    | +0.6%           |
  | 6 pairings   | 814,240.0   | 809,483.1  | 1.01x    | +0.6%           |

Key Improvements:

  • 🚀 EC Addition (average cases): Up to 2.56x faster (149% throughput increase)
  • 🚀 EC Addition (worst cases): 1.3-1.4x faster (32-36% throughput increase)
  • ✅ Scalar Multiplication: ~8-9% faster (thanks to gnark-crypto v0.19.2)
  • ✅ Pairing Operations: Slightly improved (~0.6%)

Overall EC Addition improvement range: 33-138% 🏅

Benchmark Details

besu-ecadd-warm-exec-invert$ time ./build/install/besu/bin/evmtool benchmark --native --warm-iterations=20000 --exec-iterations=1000 --warm-invert=true altBn128
besu/v25.7-develop-a88105d/linux-x86_64/openjdk-java-21

****************************** Hardware Specs ******************************
* VM Type: m6a.2xlarge
* OS: GNU/Linux Ubuntu 24.04.2 LTS (Noble Numbat) build 6.14.0-1009-aws
* Processor: AMD EPYC 7R13 Processor
* Microarchitecture: Zen 3
* Physical CPU packages: 1
* Physical CPU cores: 4
* Logical CPU cores: 8
* Average Max Frequency per core: 4501 MHz
* Memory Total: 32 GB

Testing

siladu and others added 4 commits October 31, 2025 16:40
1. Memory Pooling for Performance

Introduces sync.Pool objects to reuse allocations and reduce garbage collection pressure:
- bigIntPool - reuses big.Int allocations for scalar operations
- g1Pool / g2Pool - reuses elliptic curve point allocations
- bytes64Pool - reuses 64-byte buffer allocations

2. Simplified Error Handling

- Before: Functions returned error strings passed through buffers between Go and Java
- After: Functions return integer error codes (0 for success, 1-8 for various errors)
- Removes overhead of string allocation and copying across JNI boundary

3. Streamlined JNI Interface

Changes function signatures from:
func eip196altbn128G1Add(input, output, errorBuf *C.char, inputLen C.int,
                         outputLen, errorLen *C.int) C.int
To:
func eip196altbn128G1Add(input, output *C.char, inputLen C.int) errorCode

4. Optimized Field Validation

- Removes manual field checking (checkInFieldEIP196 function)
- Uses SetBytesCanonical() which performs validation internally
- Eliminates redundant modulus comparisons

5. Direct Encoding

- g1AffineEncode now works directly with point objects using RawBytes()
- Eliminates intermediate Marshal() allocations

6. Reduced Buffer Sizes

- Result buffer size reduced from 128 to 64 bytes (only needs to hold one G1 point)
- Removes 256-byte error buffer entirely

Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
Co-authored-by: Ivo Kubjas <ivo.kubjas@consensys.net>
Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
The pairing function now writes results (0x01 or 0x00) directly to the output buffer and only returns error codes for actual errors, eliminating the previous hack of using an error code to represent a valid pairing result of zero.

Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
@macfarla mentioned this pull request Nov 4, 2025
inputBytes.length,
output);

if (errorCode != LibGnarkEIP196.EIP196_ERR_CODE_SUCCESS) {
Contributor


can we call err_code_success return_code_success?

Contributor Author


We could, but it stems from this set of related Go consts, and it's idiomatic for them to share the same prefix, so we would need to change them all to returnCode, and most of them are indeed error codes: https://github.com/hyperledger/besu-native/pull/301/files#diff-9622b17a1165cbfa1780cbc92d116bcbbcb4136daf03dd3d0aa4f9d77373a2ddR35-R41
I'm leaning towards keeping them as-is, unless you feel strongly about changing them all to returnCode...?

Contributor

@ivokub left a comment


I also recommend updating the gnark-crypto dependency to v0.19.2 (most recent). Most concretely, it contains optimizations for scalar multiplication when scalars are small.

For many use cases it can provide a significant speedup (Consensys/gnark-crypto#703), though the gain will be less evident here due to JNI overhead.

To update:

cd gnark/gnark-jni
go get github.com/consensys/gnark-crypto@v0.19.2
go mod tidy

I built and ran unit tests locally and tests pass. I didn't run evmtool.

Otherwise, the changes look good. I think passing the pairing return value directly is better than my previous approach (passing it through an error code).

assertThat(errorCode).isEqualTo(LibGnarkEIP196.EIP196_ERR_CODE_SUCCESS);
// The key test: byte 31 should have been written by Go code (either 0x00 or 0x01, not 0xFF)
assertThat(output[31]).isNotEqualTo((byte) 0xFF);
assertThat(output[31]).isIn((byte) 0x00, (byte) 0x01);
Contributor


Should we also check that rest is 0x00?


Contributor

@garyschulte left a comment


LGTM, one safety concern highlighted

Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
@siladu force-pushed the improve-ecadd-perf branch from 14b9a14 to d43a197 on November 13, 2025 13:42
Contributor Author

siladu commented Nov 13, 2025

Rerun of benchmark with gnark-crypto v0.19.2 bump

                               |  Actual cost | Derived Cost |  Iteration time |      Throughput
EcAdd                          |      150 gas |       72 gas |        717.7 ns |     215.21 ±1.54 MGps
EcAddMarius                    |      150 gas |      324 gas |      3,240.1 ns |      46.92 ±0.30 MGps
EcAddAmez1                     |      150 gas |      319 gas |      3,193.3 ns |      48.09 ±0.31 MGps
EcAddAmez2                     |      150 gas |      313 gas |      3,131.4 ns |      48.17 ±0.21 MGps
EcAddAmez3                     |      150 gas |      315 gas |      3,152.9 ns |      48.09 ±0.26 MGps
EcAddCase0                     |      150 gas |      312 gas |      3,124.0 ns |      48.42 ±0.25 MGps
EcAddCase1                     |      150 gas |      313 gas |      3,127.4 ns |      48.45 ±0.26 MGps
...
EcAddCase100                   |      150 gas |       66 gas |        661.2 ns |     227.69 ±1.20 MGps
EcAddCase106                   |      150 gas |       59 gas |        590.4 ns |     265.09 ±2.25 MGps
mul1                           |    6,000 gas |    4,759 gas |     47,586.2 ns |     126.56 ±0.70 MGps
mul2                           |    6,000 gas |    4,712 gas |     47,121.1 ns |     128.01 ±0.81 MGps
2 pairings                     |   79,000 gas |   43,924 gas |    439,235.3 ns |     179.93 ±0.35 MGps
4 pairings                     |  113,000 gas |   62,278 gas |    622,780.7 ns |     181.51 ±0.33 MGps
6 pairings                     |  147,000 gas |   80,592 gas |    805,917.8 ns |     182.44 ±0.29 MGps

Seems to give a small improvement to EcAdd (~1 MGas/s) and mul, but not pairings. I haven't checked whether the benchmarks include the small scalars that would benefit the most.

Comment on lines +268 to +269
if !g1.IsOnCurve() {
return errCodePointOnCurveCheckFailedEIP196
Contributor

@garyschulte Nov 13, 2025


FWIW, if we are trying to eke out additional performance, the isOnCurve() checks have been duplicated by SetBytesCanonical since gnark-crypto 0.17.0. The duplicate checks were kept out of an abundance of caution; see #262 (comment) for context.

It is worth at least testing without the duplicate isOnCurve check, to determine whether its cost is negligible enough to keep it for "visibility" reasons within the code.

Contributor Author


Yep, that's on my list but would rather keep this as incremental as possible - would save that for another PR.

Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
@siladu force-pushed the improve-ecadd-perf branch from 98f5b5a to b807d30 on November 25, 2025 11:12
Contributor Author

siladu commented Nov 25, 2025

New benchmark with the latest code, i.e. with the added length check. Maybe a very slight reduction in throughput, which probably cancelled out the gnark-crypto upgrade 😁

                               |  Actual cost | Derived Cost |  Iteration time |      Throughput
EcAdd                          |      150 gas |       80 gas |        803.3 ns |     205.76 ±3.36 MGps
EcAddMarius                    |      150 gas |      334 gas |      3,340.0 ns |      45.95 ±0.46 MGps
EcAddAmez1                     |      150 gas |      328 gas |      3,280.6 ns |      47.02 ±0.50 MGps
EcAddAmez2                     |      150 gas |      327 gas |      3,273.8 ns |      46.88 ±0.48 MGps
EcAddAmez3                     |      150 gas |      337 gas |      3,370.1 ns |      46.84 ±0.56 MGps
EcAddCase0                     |      150 gas |      325 gas |      3,249.5 ns |      47.81 ±0.49 MGps
EcAddCase1                     |      150 gas |      322 gas |      3,220.7 ns |      47.77 ±0.45 MGps
...
EcAddCase100                   |      150 gas |       73 gas |        725.5 ns |     220.15 ±3.28 MGps
EcAddCase106                   |      150 gas |       64 gas |        642.8 ns |     251.34 ±3.94 MGps
mul1                           |    6,000 gas |    4,771 gas |     47,708.4 ns |     126.34 ±0.76 MGps
mul2                           |    6,000 gas |    4,690 gas |     46,902.1 ns |     128.48 ±0.72 MGps
2 pairings                     |   79,000 gas |   43,882 gas |    438,815.8 ns |     180.09 ±0.34 MGps
4 pairings                     |  113,000 gas |   62,162 gas |    621,623.3 ns |     181.85 ±0.33 MGps
6 pairings                     |  147,000 gas |   80,518 gas |    805,178.5 ns |     182.61 ±0.27 MGps


@garyschulte left a comment


see safety comment, otherwise LGTM

Clarifying comments

Changelog

Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
Signed-off-by: Simon Dudley <simon.dudley@consensys.net>

@garyschulte left a comment


🚢

@siladu marked this pull request as ready for review on December 7, 2025 07:40
inputLen := int(cInputLen)
errorLen := (*int)(unsafe.Pointer(cErrorLen))
outputLen := (*int)(unsafe.Pointer(cOutputLen))
var bytes64Pool = sync.Pool{


Not sure using a sync.Pool to allocate a fixed-size array makes sense. Just declaring it in the function where it is used (as an array, not a slice):
var a [64]byte
will allocate it on the stack, and will be more performant than using sync.Pool (synchronization overhead plus heap allocation).

return new(big.Int)
},
}
var g1Pool = sync.Pool{


Similar comment as for the [64]byte array pool: G1 and G2 have known fixed sizes, so allocating them directly in the function will likely be more performant.

The pool remains useful for big.Int, which would end up on the heap anyway.

Contributor

ivokub commented Dec 8, 2025

I removed the pools for G1/G2/bytearray. For mul and pairing there is no difference, but EcAdd is somewhat faster (5%-ish):

diff --git a/gnark/gnark-jni/gnark-eip-196.go b/gnark/gnark-jni/gnark-eip-196.go
index fcf5711..b280f19 100644
--- a/gnark/gnark-jni/gnark-eip-196.go
+++ b/gnark/gnark-jni/gnark-eip-196.go
@@ -52,22 +52,6 @@ var bigIntPool = sync.Pool{
 		return new(big.Int)
 	},
 }
-var g1Pool = sync.Pool{
-	New: func() any {
-		return new(bn254.G1Affine)
-	},
-}
-var g2Pool = sync.Pool{
-	New: func() any {
-		return new(bn254.G2Affine)
-	},
-}
-
-var bytes64Pool = sync.Pool{
-	New: func() any {
-		return [64]byte{}
-	},
-}
 
 var EIP196ScalarTwo = big.NewInt(2)
 
@@ -82,33 +66,29 @@ func eip196altbn128G1Add(javaInputBuf, javaOutputBuf *C.char, cInputLen C.int) e
 	input := (*[2 * EIP196PreallocateForG1]byte)(unsafe.Pointer(javaInputBuf))[:inputLen:inputLen]
 
 	// generate p0 g1 affine
-	p0 := g1Pool.Get().(*bn254.G1Affine)
-	defer g1Pool.Put(p0)
-
-	if err := safeUnmarshalEIP196(p0, input, 0); err != errCodeSuccess {
+	var p0 bn254.G1Affine
+	if err := safeUnmarshalEIP196(&p0, input, 0); err != errCodeSuccess {
 		return err
 	}
 
 	if inputLen < 2*EIP196PreallocateForG1 {
 		// if incomplete input is all zero, return p0
 		if isAllZeroEIP196(input, 64, 64) {
-			g1AffineEncode(p0, javaOutputBuf)
+			g1AffineEncode(&p0, javaOutputBuf)
 			return errCodeSuccess
 		}
 	}
 	// generate p1 g1 affine
-	p1 := g1Pool.Get().(*bn254.G1Affine)
-	defer g1Pool.Put(p1)
-
-	if err := safeUnmarshalEIP196(p1, input, 64); err != errCodeSuccess {
+	var p1 bn254.G1Affine
+	if err := safeUnmarshalEIP196(&p1, input, 64); err != errCodeSuccess {
 		return err
 	}
 
 	// Use the Add method to combine points
-	p0.Add(p0, p1)
+	p0.Add(&p0, &p1)
 
 	// marshal the resulting point and encode directly to the output buffer
-	g1AffineEncode(p0, javaOutputBuf)
+	g1AffineEncode(&p0, javaOutputBuf)
 	return errCodeSuccess
 
 }
@@ -150,9 +130,8 @@ func eip196altbn128G1Mul(javaInputBuf, javaOutputBuf *C.char, cInputLen C.int) e
 	scalarBytes := input[EIP196PreallocateForG1:]
 	if 96 > int(cInputLen) {
 		// if the input is truncated, copy the bytes to the high order portion of the scalar
-		bytes64 := bytes64Pool.Get().([64]byte)
-		defer bytes64Pool.Put(bytes64)
-		scalarBytes = bytes64[:32]
+		var bytes32 [32]byte
+		scalarBytes = bytes32[:]
 		copy(scalarBytes[:], input[64:int(cInputLen)])
 	}
 
@@ -246,9 +225,8 @@ func safeUnmarshalEIP196(g1 *bn254.G1Affine, input []byte, offset int) errorCode
 		return errCodeSuccess
 	} else if len(input)-offset < 64 {
 		// If we have some input, but it is incomplete, pad with zero
-		bytes64 := bytes64Pool.Get().([64]byte)
-		defer bytes64Pool.Put(bytes64)
-		pointBytes = bytes64[:64]
+		var bytes64 [64]byte
+		pointBytes = bytes64[:]
 		shortLen := len(input) - offset
 		copy(pointBytes, input[offset:len(input)])
 		for i := shortLen; i < 64; i++ {

Update the macOS GHA runners, since macos-13 runners are now deprecated

Signed-off-by: garyschulte <garyschulte@gmail.com>
Signed-off-by: garyschulte <garyschulte@gmail.com>
Contributor Author

siladu commented Dec 10, 2025

@gbotrel @ivokub @garyschulte Removing the small-object pooling is a 6-15% improvement for the EcAdd cases 👍
That means this PR's overall improvement for EcAdd is 33-138% 🏅

And agreed, no improvement to mul or pairing compared to the last iteration, though it looks like mul improved ~8% after the gnark-crypto v0.19.2 bump.

                               |  Actual cost | Derived Cost |  Iteration time |      Throughput
EcAdd                          |      150 gas |       65 gas |        649.8 ns |     234.45 ±1.80 MGps
EcAddMarius                    |      150 gas |      313 gas |      3,133.5 ns |      48.87 ±0.39 MGps
EcAddAmez1                     |      150 gas |      305 gas |      3,053.7 ns |      49.77 ±0.33 MGps
EcAddAmez2                     |      150 gas |      305 gas |      3,046.6 ns |      49.92 ±0.34 MGps
EcAddAmez3                     |      150 gas |      308 gas |      3,081.2 ns |      49.59 ±0.36 MGps
EcAddCase0                     |      150 gas |      302 gas |      3,024.6 ns |      50.55 ±0.34 MGps
EcAddCase1                     |      150 gas |      294 gas |      2,942.1 ns |      51.22 ±0.24 MGps
...
EcAddCase100                   |      150 gas |       62 gas |        618.4 ns |     250.46 ±2.18 MGps
EcAddCase106                   |      150 gas |       56 gas |        561.4 ns |     274.32 ±2.25 MGps
mul1                           |    6,000 gas |    4,800 gas |     48,001.8 ns |     125.51 ±0.70 MGps
mul2                           |    6,000 gas |    4,724 gas |     47,235.8 ns |     127.48 ±0.67 MGps
2 pairings                     |   79,000 gas |   44,067 gas |    440,668.6 ns |     179.62 ±0.70 MGps
4 pairings                     |  113,000 gas |   62,492 gas |    624,919.0 ns |     181.17 ±0.69 MGps
6 pairings                     |  147,000 gas |   80,948 gas |    809,483.1 ns |     181.91 ±0.65 MGps

@siladu merged commit edab3fa into besu-eth:main on Dec 14, 2025
11 checks passed