fix priority access list estimator #191

Merged
merged 2 commits into main from sunce86/fix-priority-access-list-estimator
May 5, 2022

Conversation

Contributor

@sunce86 sunce86 commented May 4, 2022

If one of the access list estimators returns a global Ok result, but with an Err for each individual transaction, then it makes more sense to try the other access list estimators rather than returning that result.

For example, this happens for the Web3 access list estimator when the node does not support the eth_createAccessList method.

I think this change needs to be part of PriorityAccessListEstimating rather than part of NodeAccessList, because this type of result (a global Ok result, but with an Err for each individual transaction) is perfectly valid from the NodeAccessList point of view (we asked the node something and it returned a valid batch response).
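A minimal sketch of the fallback behavior described above, using placeholder types and a synchronous trait for brevity; the actual trait and type names in the codebase may differ:

```rust
use anyhow::{anyhow, Result};

// Placeholder types standing in for the real ones in the crate.
type Transaction = ();
type AccessList = Vec<u8>;

trait AccessListEstimating {
    fn estimate_access_lists(&self, txs: &[Transaction]) -> Result<Vec<Result<AccessList>>>;
}

/// Tries estimators in priority order. A result that is globally Ok but
/// contains an Err for every individual transaction (e.g. the node does not
/// support eth_createAccessList) is treated as a miss and the next estimator
/// is tried; it is only returned if no estimator produced anything better.
fn estimate_with_priority(
    estimators: &[Box<dyn AccessListEstimating>],
    txs: &[Transaction],
) -> Result<Vec<Result<AccessList>>> {
    let mut last: Result<Vec<Result<AccessList>>> =
        Err(anyhow!("no access list estimators configured"));
    for estimator in estimators {
        match estimator.estimate_access_lists(txs) {
            // Globally Ok but every per-transaction entry is Err: remember it
            // and fall through to the next estimator instead of returning.
            Ok(results) if !txs.is_empty() && results.iter().all(|r| r.is_err()) => {
                last = Ok(results)
            }
            // At least one transaction got an access list: use this result.
            Ok(results) => return Ok(results),
            // Global failure: remember the error and try the next estimator.
            Err(err) => last = Err(err),
        }
    }
    last
}
```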

@sunce86 sunce86 requested a review from a team as a code owner May 4, 2022 16:35
Contributor

@MartinquaXD MartinquaXD left a comment

IMO it would be nice for such a case to check during initialization whether the node actually supports access lists. If a node doesn't support that API, it could then return an Err() immediately instead of making network calls only to return Ok(Vec<Err>) every time. This would keep the problematic NodeAccessList from adding any latency.
Does this change originate from this Slack thread?
If so, it seems like the problem this PR solves is not that common and could be fixed quickly on our nodes. In that case I wouldn't mind pessimizing latency temporarily to keep the implementation as simple as it is right now.

Contributor Author

sunce86 commented May 5, 2022

IMO it would be nice for such a case to check during initialization whether the node actually supports access lists. If a node doesn't support that API, it could then return an Err() immediately instead of making network calls only to return Ok(Vec<Err>) every time. This would keep the problematic NodeAccessList from adding any latency.

Yes, nice suggestion. I would maybe just skip adding the NodeAccessList to the PriorityAccessListEstimating list if the node does not support the method, something along the lines of the sketch below.
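Roughly like this; Web3, NodeAccessList, and supports_create_access_list are placeholders here, not the real types and helpers in the crate:

```rust
// Sketch only: placeholder types standing in for the real ones.
struct Web3;
struct NodeAccessList { web3: Web3 }

async fn supports_create_access_list(_web3: &Web3) -> bool {
    // Would send a single eth_createAccessList request for a dummy
    // transaction and report whether the node recognizes the method.
    unimplemented!("probe the node once at startup")
}

/// Only construct (and later register) the node-based estimator if the node
/// actually supports eth_createAccessList, so an unsupported node never adds
/// latency to the priority list.
async fn maybe_node_estimator(web3: Web3) -> Option<NodeAccessList> {
    if supports_create_access_list(&web3).await {
        Some(NodeAccessList { web3 })
    } else {
        None
    }
}
```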

Does this change originate from this Slack thread?

Yes, nice catch :)

If so, it seems like the problem this PR solves is not that common and could be fixed quickly on our nodes. In that case I wouldn't mind pessimizing latency temporarily to keep the implementation as simple as it is right now.

That was also my reasoning. This is just a temporary situation, and I expect it to stop happening once OpenEthereum (OE) is removed from our nodes (other clients should already have support).

@sunce86 sunce86 enabled auto-merge (squash) May 5, 2022 10:45
@sunce86 sunce86 merged commit 8a412ec into main May 5, 2022
@sunce86 sunce86 deleted the sunce86/fix-priority-access-list-estimator branch May 5, 2022 10:49
@github-actions github-actions bot locked and limited conversation to collaborators May 5, 2022