Work around uniswap and perpetual-pools ext tests failing OOM#14915
# Disable a test failing due to a non-deterministic order of keys in a returned dict.
# TODO: Figure out why it's failing and re-enable.
sed -i 's|\(it\)\(("Rotates the observations array"\)|\1.skip\2|g' test/PriceObserver.spec.ts
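The substitution turns the matching `it(...)` call into `it.skip(...)`, which Mocha reports as pending instead of running it. A minimal sketch of the effect, using a hypothetical stand-in file rather than the real `test/PriceObserver.spec.ts`:

```shell
# Create a stand-in spec file containing a hypothetical test declaration.
printf 'it("Rotates the observations array", async () => {\n' > /tmp/sample.spec.ts

# Same substitution as in the workaround: insert `.skip` after the matched `it`.
sed -i 's|\(it\)\(("Rotates the observations array"\)|\1.skip\2|g' /tmp/sample.spec.ts

cat /tmp/sample.spec.ts
# → it.skip("Rotates the observations array", async () => {
```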
Here's the relevant output from the failed run of t_native_test_ext_perpetual_pools:
262 passing (10m)
15 pending
1 failing
1) PriceObserver
add
When called with a full observations array
Rotates the observations array:
AssertionError: expected [ …(24) ] to deeply equal [ …(24) ]
+ expected - actual
[
{
+ "_hex": "0x03"
+ "_isBigNumber": true
+ }
+ {
"_hex": "0x04"
"_isBigNumber": true
}
{
--
"_hex": "0x07"
"_isBigNumber": true
}
{
+ "_hex": "0x08"
+ "_isBigNumber": true
+ }
+ {
"_hex": "0x0c"
"_isBigNumber": true
}
{
- "_hex": "0x08"
+ "_hex": "0x0a"
"_isBigNumber": true
}
{
"_hex": "0x0b"
"_isBigNumber": true
}
{
- "_hex": "0x0a"
+ "_hex": "0x0c"
"_isBigNumber": true
}
{
"_hex": "0x0e"
--
"_hex": "0x05"
"_isBigNumber": true
}
{
- "_hex": "0x03"
+ "_hex": "0x05"
"_isBigNumber": true
}
{
"_hex": "0x09"
"_isBigNumber": true
}
{
- "_hex": "0x0c"
- "_isBigNumber": true
- }
- {
- "_hex": "0x05"
- "_isBigNumber": true
- }
- {
"_hex": "0x0a"
"_isBigNumber": true
}
{
The dict it shows seems to have the same content, just ordered differently.
EDIT: Actually, I see now that it's a list. Still, the items are the same, and I wouldn't be surprised if it was created from a dict anyway.
Yeah, this will do it for now if there is no better fix. Sadly, just keeping the package.lock file didn't work as I expected (https://app.circleci.com/pipelines/github/ethereum/solidity/33185/workflows/5edbf286-8dab-4ee8-af98-82d25d87bf0e), so it will need some more adjustments, but at least we would not need to use xlarge resources :)
With the workaround in #14919 we don't need to increase the machine sizes.
Closing in favor of #14919.
As we agreed on the call, I'm submitting my workaround as a PR, but hopefully @r0qs will come up with a better fix. Mine just massively increases the machine size so the tests still pass despite the huge memory leak, and disables the tests that fail due to some non-determinism.