Shard exceeding the size available on a worker node #5799
If the worker node is limited to a certain storage size (e.g., 2 TB in Azure) and more data keeps arriving for a particular shard, will an error be thrown on reaching the limit, or is there a mechanism to spill/extend the shard onto another node? The Azure docs give only two options when a worker node becomes read-only because its disk is almost full: either rebalance the data or drop some data.
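For context, a minimal sketch of how one might check where the space is going before choosing between those two options, assuming the Citus `citus_shards` metadata view is available (its `shard_size` column reports bytes):

```sql
-- Per-shard sizes: find the shards that are growing fastest.
SELECT table_name, shardid, nodename, pg_size_pretty(shard_size)
FROM citus_shards
ORDER BY shard_size DESC
LIMIT 10;

-- Per-node totals: see how close each worker is to its disk limit.
SELECT nodename, pg_size_pretty(sum(shard_size)) AS total_shard_data
FROM citus_shards
GROUP BY nodename
ORDER BY sum(shard_size) DESC;
```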
Replies: 1 comment
It would behave in the same way as when Postgres runs out of disk space (writing to WAL fails). We are starting to work on shard split machinery, but it's not available yet.
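Until shard splitting is available, the practical remedy is the first option above: add capacity and rebalance. A hedged sketch, assuming the open-source Citus functions `citus_add_node` and `rebalance_table_shards` are usable on your cluster (on a managed service such as Azure, adding a worker is typically done through the portal instead; the hostname and port below are placeholders):

```sql
-- Register a new worker node (placeholder hostname/port).
SELECT citus_add_node('new-worker.example.com', 5432);

-- Move shards so data is spread evenly across workers,
-- including the newly added node.
SELECT rebalance_table_shards();
```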