Hi everyone, I'm working on a server in which each user needs a completely isolated database, and I'd like to use LMDB.js as the storage layer — it's awesome. Ideally each user would get their own database file, so that backups and portability are much easier at the per-user level.

The most straightforward approach I can think of is to simply open a separate LMDB database for each unique user. However, as expected, in some initial tests memory usage seems to climb with the number of open databases, and I'm not sure how I should go about managing them all. I could cache connections in a pool and limit the number of open ones, but that feels a bit wonky. I could also create sub-databases within one big LMDB file, as suggested in the README, but then I feel like I lose some of the charm and benefits of a per-user database file.

Looking for some thoughts on this. What would be the 'LMDB way' of solving this? Thanks :)
I think it sounds like you have a good understanding of the trade-offs here. Maintaining separate databases for each user does incur more memory. As you noted, you can probably mitigate this by lazily opening these databases and closing the ones that haven't been used recently (an LRU pool, like you mentioned). Also, if writes are scattered across many databases, this can improve concurrency, allowing multiple write transactions to run in parallel (a write transaction holds an exclusive lock within its database).

Alternatively, you can set up sub-databases, and this would generally be more memory efficient. If many writes are occurring, this gives lmdb-js the opportunity to batch them together in the underlying transactions for better write efficiency. One thing to note is that you will need to increase the …
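The lazy-open/LRU idea could be sketched roughly like this. The `DbPool` class and the injected `openFn` callback are hypothetical names, not part of lmdb-js — in practice `openFn` would call lmdb-js's `open()` with a per-user file path:

```javascript
// Minimal LRU pool sketch (DbPool and openFn are illustrative names).
// It lazily opens one database per user and closes the least recently
// used one once the pool exceeds maxOpen.
class DbPool {
  constructor(openFn, maxOpen = 100) {
    this.openFn = openFn;  // e.g. (userId) => open({ path: `data/${userId}.mdb` })
    this.maxOpen = maxOpen;
    this.dbs = new Map();  // Map insertion order doubles as recency order
  }

  acquire(userId) {
    let db = this.dbs.get(userId);
    if (db) {
      // Delete and re-insert to mark this entry as most recently used.
      this.dbs.delete(userId);
    } else {
      db = this.openFn(userId);
      if (this.dbs.size >= this.maxOpen) {
        // Evict and close the least recently used database.
        const [oldestId, oldestDb] = this.dbs.entries().next().value;
        oldestDb.close();
        this.dbs.delete(oldestId);
      }
    }
    this.dbs.set(userId, db);
    return db;
  }
}
```

Callers would go through `pool.acquire(userId)` instead of holding database handles themselves, so handles can be closed and reopened transparently. A production version would also need to avoid closing a database while a write is still in flight.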