Add db.keys(), .values(), iterator.nextv() and .all() #12
Conversation
@Level/core Side note: I'm concerned about the bus factor here, adding code to an already-refactored code base that no one else is familiar with. Realistically, there's not much I can do about that, besides spending more time not writing code (i.e. looking for contributors, funding, doing that awful thing called self-marketing). I'm still having fun here though, so I'm moving ahead. Just let me know if there's anything I can do to facilitate code reviews.
Hey @vweevers, thanks for all the work you're putting in and the clarity of communication! I personally don't currently have the time to look at large PRs, so if it made sense to make smaller PRs, that would help me. But I also don't think the bus factor is critical here, as everything looks to be well organized and your direction is clear.
@juliangruber Noted, thanks! Once I'm past the current refactoring/forking stage, smaller PRs will be easier. For now I will remind myself to make fewer (unrelated) tweaks, like how this PR includes README changes that I could easily do separately.
Force-pushed from d542126 to 273de0d.
Added tests and needed a few more things to make those pass:

To improve coverage, (embedded code snippet: lines 121 to 278 in 273de0d)
This PR is good to go, but the amount of code makes me want to sleep on it.
I'm happy with the public API. Less so with the private API and tests (which, in short, are "structurally inconsistent"), but it's easy enough (for implementations) to work around. In addition, I want to start benchmarking (#4), and I'm making it harder for myself to benchmark fairly when I'm moving further away from
On the C++ side:

- Replace asBuffer options with encoding options
- Refactor iterator_next to work for nextv(). We already had an iterator.ReadMany(size) method in C++, with a hardcoded size. Now size is taken from the JS argument to _nextv(size). The cache logic for next() is the same as before. Ref Level/community#70. Ref Level/abstract-level#12.
- Use std::vector<Entry> in iterator.cache_ instead of std::vector<std::string> so that the structure of the cache matches the desired result of nextv() in JS.

On the JS side:

- Use classes for ChainedBatch, Iterator, ClassicLevel
- Defer approximateSize() and compactRange()
- Encode arguments of approximateSize() and compactRange(). Ref Level/community#85
- Add promise support to additional methods
- Remove tests that were copied to abstract-level

This is most of it; a few more changes are needed in follow-up commits.
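For context on the JS-facing side of that refactor, here is a usage sketch of the new `nextv(size)` method (variable names and the batch size are illustrative, and it assumes `nextv()` resolves to an empty array once the iterator is exhausted):

```
// Read entries in batches of up to 100 via the new nextv(size) method.
const iterator = db.iterator({ gte: 'a' })

let entries
while ((entries = await iterator.nextv(100)).length > 0) {
  for (const [key, value] of entries) {
    console.log(key, value)
  }
}

await iterator.close()
```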
I'm squeezing this in for similar reasons as #8. I wondered whether these additions would be breaking. The short answer is no. In addition, adding this now means `level-read-stream` can make use of it without conditional code paths based on `db.supports.*`.

Ref Level/abstract-leveldown#380
Adds key and value iterators: `db.keys()` and `db.values()`. As a replacement for levelup's `db.create{Key,Value}Stream()`. Example:

```
for await (const key of db.keys({ gte: 'a' })) {
  console.log(key)
}
```
Adds two new methods to all three types of iterators: `nextv()` for getting many items (entries, keys or values) in one call and `all()` for getting all (remaining) items. These methods have functional but suboptimal defaults: `all()` falls back to repeatedly calling `nextv()` and that in turn falls back to `next()`. Example:

```
const values = await db.values({ limit: 10 }).all()
```
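A rough sketch of how such fallbacks can work (not the actual abstract-level internals; the batch size of 1000 is arbitrary and error handling is omitted):

```
// all() drains the iterator via nextv(), and a fallback nextv() builds a
// batch by repeatedly calling next(), which resolves to undefined at the end.
async function nextvFallback (iterator, size) {
  const items = []
  while (items.length < size) {
    const item = await iterator.next()
    if (item === undefined) break // natural end of the iterator
    items.push(item)
  }
  return items
}

async function allFallback (iterator) {
  const items = []
  let batch
  while ((batch = await nextvFallback(iterator, 1000)).length > 0) {
    items.push(...batch)
  }
  await iterator.close()
  return items
}
```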
Adds a lot of new code, with unfortunately some duplicate code because I wanted to avoid mixins and other forms of complexity, which means key and value iterators use classes that are separate from preexisting iterators. For example, a method like `_seek()` must now be implemented on three classes: `AbstractIterator`, `AbstractKeyIterator` and `AbstractValueIterator`. This (small?) problem extends to implementations and their subclasses, if they choose to override key and value iterators to improve performance.
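For instance, an implementation overriding seek behavior might end up with something like the following sketch (the `Example*` class names and the `seekRaw()` helper are made up; it assumes the three abstract classes are exported from the main module):

```
// The same seek logic has to be repeated on the entry, key and value
// iterator subclasses.
const { AbstractIterator, AbstractKeyIterator, AbstractValueIterator } = require('abstract-level')

class ExampleIterator extends AbstractIterator {
  _seek (target, options) { seekRaw(this, target, options) }
}

class ExampleKeyIterator extends AbstractKeyIterator {
  _seek (target, options) { seekRaw(this, target, options) }
}

class ExampleValueIterator extends AbstractValueIterator {
  _seek (target, options) { seekRaw(this, target, options) }
}

function seekRaw (iterator, target, options) {
  // Move the underlying cursor; details depend on the implementation
}
```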
On the flip side, the new methods are supported across the board: on sublevels, with encodings, with deferred open, and fully functional. This may demonstrate one of the major benefits of `abstract-level` over `abstract-leveldown` paired with `levelup`.

Yet todo:

- `level-concat-iterator` in existing tests

After that, try another approach with a `mode` property on iterators, that is one of entries, keys or values (moving logic to if-branches... I already don't like it but it may result in cleaner logic downstream).
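As a rough illustration of that idea (purely hypothetical and not part of this PR; the `_readEntry()` helper is made up):

```
// One iterator class whose behavior branches on a mode of
// 'entries', 'keys' or 'values'.
class ModeIterator {
  constructor (db, mode, options) {
    this.mode = mode
  }

  async next () {
    const entry = await this._readEntry() // made-up helper, undefined at end
    if (entry === undefined) return undefined
    if (this.mode === 'keys') return entry[0]
    if (this.mode === 'values') return entry[1]
    return entry
  }
}
```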