
chore: reduce precision of MAX_FEE_ASSET_PRICE_MODIFIER #56

Merged
LHerskind merged 1 commit into main from lh/reduce-precision-price-modifier on Mar 23, 2025
Conversation

@LHerskind
Contributor

As mentioned in AztecProtocol/aztec-packages#12871 we wish to reduce the precision of the fee asset price modifier such that we can hold related values in smaller types and reduce storage overhead.

@LHerskind LHerskind merged commit 1c3003d into main Mar 23, 2025
@LHerskind LHerskind deleted the lh/reduce-precision-price-modifier branch March 23, 2025 19:51
LHerskind added a commit to AztecProtocol/aztec-packages that referenced this pull request Mar 24, 2025
Fixes #12870. 

The changes are fairly simple. Define boundaries for the values that we
are going to store as part of the fee header, then use storage that can
fit those boundaries but not much bigger.

Previously, everything was `uint256`. We introduce a `CompressedFeeHeader`,
which is the value that we put into storage, defined as follows:
```solidity
struct CompressedFeeHeader {
  uint64 congestionCost;
  uint64 provingCost;
  uint48 feeAssetPriceNumerator;
  uint48 excessMana;
  uint32 manaUsed;
}
```

This follows the standard of putting the biggest type first.
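As a quick sanity check (an illustrative sketch, not protocol code), the field widths sum to exactly 256 bits, so the struct packs into a single storage slot:

```python
# Bit widths of the CompressedFeeHeader fields, biggest type first.
FIELD_BITS = {
    "congestionCost": 64,
    "provingCost": 64,
    "feeAssetPriceNumerator": 48,
    "excessMana": 48,
    "manaUsed": 32,
}

total_bits = sum(FIELD_BITS.values())
print(total_bits)  # 256 -> exactly one 256-bit storage slot
```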

The `congestionCost` and `provingCost` are both costs per unit of mana,
and even if they grow very large, `uint64` should be sufficient. If you
need to pay multiple whole coins per unit of compute, something is wrong.

For the `feeAssetPriceNumerator` we update its precision to "only" use
1e6 for a 1% diff. With that, we can increase by `1e6` at most per block,
so starting from 0 we should be able to handle a quarter of a BILLION
blocks in a row, all increasing by the max - that should cover it. This
update in precision required the model in engineering-designs to also be
updated to provide values of the proper size, which is why the diff
looks much bigger. The change in engineering designs is in
AztecProtocol/engineering-designs#56.
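The headroom claim can be checked with quick arithmetic (an illustrative sketch; the constant names are mine):

```python
UINT48_MAX = 2**48 - 1           # ceiling for feeAssetPriceNumerator
MAX_STEP_PER_BLOCK = 1_000_000   # 1e6 ~ a 1% change, the per-block maximum

blocks_until_overflow = UINT48_MAX // MAX_STEP_PER_BLOCK
print(blocks_until_overflow)     # 281_474_976 -> roughly a quarter of a billion blocks
```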

For `excessMana` we can at most increase by `manaUsed` each round, and
with `uint48` we have hundreds of trillions of mana at hand. At that
point, the congestion fee should already be so nasty, if it is even
computable, that there are separate issues.
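Again as a rough sketch (my own worst-case framing, not protocol code), the `uint48` ceiling for `excessMana`, growing by at most a full `uint32` of `manaUsed` per round:

```python
UINT48_MAX = 2**48 - 1  # ceiling for excessMana
UINT32_MAX = 2**32 - 1  # maximum manaUsed added per round

# Worst case: every round adds a full uint32 worth of mana.
rounds_until_overflow = UINT48_MAX // UINT32_MAX
print(UINT48_MAX)             # 281_474_976_710_655 mana of headroom
print(rounds_until_overflow)  # 65_536 consecutive max-size rounds before overflow
```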

Lastly, the amount of mana used in a specific block is expected to fit
in `uint32` for a long time. I am happy to be pleasantly surprised if
gigablocks become possible, but I don't believe it; around 4 billion
mana can fit.
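The `uint32` bound for `manaUsed` itself (illustrative only):

```python
UINT32_MAX = 2**32 - 1
print(UINT32_MAX)  # 4_294_967_295 -> about 4.3 billion mana per block fits
```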

If we run out of space or some variables need to be bigger, we can play
around with the precision of some measures (such as the costs at the
top).

--- 

A comment on gas: if you look at the `gas-report`, it won't change
drastically. The main reason is that a lot of the tests use zero
values, e.g., updating from zero to zero, which is not nearly as costly
as going from nothing to something. A separate funnel for it would show
bigger savings, but I expect this to be visible from some of the work on
#12614.
DanielKotov pushed a commit to AztecProtocol/aztec-packages that referenced this pull request Mar 27, 2025
DanielKotov pushed a commit to AztecProtocol/aztec-packages that referenced this pull request Mar 27, 2025