Account memory for OrcDeletedRows - fix memory accounting for ORC ACID delete delta #9914
Conversation
sizeOfObjectArray(rowCount) + rowCount * RowId.INSTANCE_SIZE
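The quoted expression estimates the retained heap size of the row list: the backing object array plus one `RowId` instance per row. A minimal sketch of such an estimate follows; the constants here are illustrative stand-ins, not Trino's actual `ClassLayout`-derived values:

```java
public class RetainedSizeEstimate
{
    // Illustrative constants; Trino derives real values via ClassLayout.
    static final long ARRAY_HEADER_BYTES = 16;
    static final long REFERENCE_BYTES = 8;
    static final long ROW_ID_INSTANCE_SIZE = 32; // assumed, not the real value

    static long sizeOfObjectArray(long length)
    {
        // array header plus one reference slot per element
        return ARRAY_HEADER_BYTES + REFERENCE_BYTES * length;
    }

    static long retainedBytes(long rowCount)
    {
        // backing array of references plus one RowId object per row
        return sizeOfObjectArray(rowCount) + rowCount * ROW_ID_INSTANCE_SIZE;
    }
}
```

With these assumed sizes, a million deleted rows would account for roughly 40 MB, which is why the reviewers care about when this value is reported.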
Is it worth setting this as we loop, versus just doing it once at the end?
Also, can we safely ignore the return value of setBytes?
The benefit of doing it in the loop (I think) is that the memory manager can pause some other query for a while, allowing us to get to the end of the loop without an OOM. Other than that it does not change much, I think.
@findepi am I missing something?
We should assume that building the deletedRows list can exhaust node memory.
If we set memory usage only at the end, we still do not protect ourselves against a node OOM.
Also, can we safely ignore the return value of setBytes?
We cannot block here (can we?).
We could use trySetBytes here, perhaps.
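The in-loop accounting with trySetBytes discussed above could look roughly like the sketch below. The `MemoryContext` interface, constants, and exception are simplified stand-ins for Trino's `io.trino.memory.context.LocalMemoryContext` and are assumptions for illustration, not the actual implementation:

```java
import java.util.HashSet;
import java.util.Set;

public class DeletedRowsLoader
{
    // Minimal stand-in for Trino's LocalMemoryContext (assumed shape)
    interface MemoryContext
    {
        boolean trySetBytes(long bytes); // false if the allocation would exceed the limit
    }

    static final long ROW_ID_INSTANCE_SIZE = 32; // assumed flat size of a RowId

    static long sizeOfObjectArray(long length)
    {
        // rough retained-size estimate: array header plus one reference per slot
        return 16 + 8 * length;
    }

    public static Set<Long> loadDeletedRows(long[] rowIds, MemoryContext memoryContext)
    {
        Set<Long> deletedRows = new HashSet<>();
        long rowCount = 0;
        for (long rowId : rowIds) {
            deletedRows.add(rowId);
            rowCount++;
            // Account memory inside the loop, so the memory manager can react
            // before the node itself runs out of heap.
            long retainedBytes = sizeOfObjectArray(rowCount) + rowCount * ROW_ID_INSTANCE_SIZE;
            if (!memoryContext.trySetBytes(retainedBytes)) {
                // trySetBytes cannot block, so on failure we fail the query ourselves,
                // as suggested in the thread ("yes, we would throw then").
                throw new IllegalStateException("Memory limit exceeded while loading deleted rows");
            }
        }
        return deletedRows;
    }
}
```

The per-iteration update is cheap (a single counter set), and it is what lets the engine observe the allocation while it grows rather than only after the fact.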
plugin/trino-hive/src/main/java/io/trino/plugin/hive/orc/OrcDeletedRows.java
Force-pushed from 8a18ee9 to 40446fb
AC
It does not look like it. Maybe with a significant refactoring.
I was thinking about that. But then what if it returns that we cannot allocate? Throw an exception ourselves?
Yes, we would throw then. cc @sopel39 for
Dropped, as the data is not passed to the operator context immediately.
Force-pushed from 40446fb to 2c5a097
Force-pushed from 2c5a097 to e3560ed
No description provided.