The 1M index entries problem #5967
This is only true so long as we continue to explode things back out into separate rows on ingest. It's not articulated as such in the temporal-batch issue, but if the store itself had end-to-end support for temporal batches (admittedly a lot of work) I believe this would solve many of the issues for real-world happy-path use-cases. By happy-path, I mean if the temporal batch is pre-sorted on the same timeline as your range query and potentially even non-overlapping with other temporal batches. In this case a store that worked natively with temporal batches should only need to carry overhead of O(batches) rather than O(scalars), and be able to just return direct scalar slices from the original arrow array without ever having needed to explode these batches into separate rows in the first place. I believe this constraint would be acceptable to most users that are really pushing the envelope of large scalar streams. We don't need to make every possible edge-case fast. We need to provide some means for users to follow a happy-path that makes specific, constrained edge-cases fast.
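A minimal sketch of that happy path, under the stated assumptions (pre-sorted, non-overlapping batches) and with made-up types that are not existing Rerun APIs: the index costs O(batches), and a range query returns slices of the original buffers without ever exploding rows.

```rust
use std::collections::BTreeMap;
use std::ops::Range;
use std::sync::Arc;

/// A pre-sorted run of samples, kept exactly as it was logged
/// (stand-in for an Arrow array plus its time column). Never empty.
struct TemporalBatch {
    times: Vec<i64>,    // sorted on the query timeline
    values: Arc<[f64]>, // the original scalar buffer, never exploded into rows
}

/// O(batches) index: first timestamp of each batch -> the batch.
struct ScalarStream {
    batches: BTreeMap<i64, Arc<TemporalBatch>>,
}

impl ScalarStream {
    /// Returns (batch, row-range) pairs covering `[min, max]`,
    /// i.e. direct slices into the original buffers.
    fn range(&self, min: i64, max: i64) -> Vec<(Arc<TemporalBatch>, Range<usize>)> {
        let mut out = Vec::new();
        // Start at the last batch that begins at or before `min`.
        let start = self
            .batches
            .range(..=min)
            .next_back()
            .map_or(i64::MIN, |(k, _)| *k);
        for (_, batch) in self.batches.range(start..) {
            if batch.times[0] > max {
                break; // batches are non-overlapping and sorted by start time
            }
            let lo = batch.times.partition_point(|&t| t < min);
            let hi = batch.times.partition_point(|&t| t <= max);
            if lo < hi {
                out.push((batch.clone(), lo..hi));
            }
        }
        out
    }
}
```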
Good write-up! I'd like to see a more detailed breakdown of where the overhead is once we start looking at this a little bit closer (i.e. how many bytes is the reverse index, how much is because of …).
I also forgot to mention that the solution should fit nicely with promises too.
Looking a bit closer at the storage overhead: logically, a data column in the store is a `Vec<Option<Vec<T>>>`. The actual implementation of `DataCellColumn` is:

```rust
pub struct DataCellColumn(pub DataCellOptVec);

pub type DataCellOptVec = VecDeque<Option<DataCell>>;

// 8 bytes
pub struct DataCell {
    // An Arc so we can share the same `DataCellInner` over different timelines,
    // e.g. the same set of point cloud colors.
    pub inner: Arc<DataCellInner>,
}

// 48 bytes
pub struct DataCellInner {
    name: ComponentName,
    size_bytes: u64,

    // Internally, this is most likely a slice of another, larger array (one microbatch per LogMsg),
    // so there is another Arc, offset, and indirection. Oh, and a DataType.
    values: Box<dyn arrow2::array::Array>, // god knows how many extra bytes
}
```

Obviously this is a horrible way to store small things like scalars.

A quick test to measure the overhead: allocate 1M f64 scalars on a single timeline. This should be 32 B/row (including the RowId), i.e. 32 MB in total, but actually: …
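For reference, a quick sketch of the arithmetic behind that 32 MB expectation (my own illustration: 16 bytes per RowId, 8 bytes for the single timestamp, 8 bytes for the f64 itself):

```rust
fn main() {
    const ROWS: usize = 1_000_000;
    const BYTES_PER_ROW: usize = 16 /* RowId */ + 8 /* timestamp */ + 8 /* f64 */;
    println!("{} MB", ROWS * BYTES_PER_ROW / 1_000_000); // 32 MB
}
```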
That's just …

Work-in-progress / thinking out loud:

In terms of storage, we should be able to always represent the entire … We have a multi-index problem (we index the data over many timelines), which can lead to a lot of overhead if we are not careful. So that makes me wonder (like Jeremy above) if we should switch from each row/cell being the atomic unit to instead always considering a bucket of rows as the atomic thing.

Proposal A

```rust
struct DataStore {
    component_streams: Map<ComponentPath, ComponentStream>,
}

/// One per ComponentPath.
struct ComponentStream {
    // Note: buckets are shared across timelines.
    // For each timeline, maps potentially overlapping ranges to buckets.
    timelines: Map<TimeLine, IntervalMap<TimeRange, Arc<Bucket>>>,
}

/// Many for each `ComponentPath`.
/// Stores some number of rows of data.
struct Bucket {
    row_ids: Vec<RowId>,

    // The time columns.
    // The times are optional, because not all rows appear on all timelines.
    // Could also be stored as arrow buffers.
    times: Map<TimeLine, Vec<Option<TimeInt>>>,

    // NOTE: it is not always possible to keep all timelines sorted!
    // If we sort on one, another may get out-of-order.
    // In most cases they are monotonically increasing together, and we can stay sorted on all timelines.
    // For pathological cases (one timeline increases as another decreases, for instance)
    // we split the buckets so that they are small, and do linear scans for unsorted timelines at query time.
    is_timeline_sorted: Map<TimeLine, bool>,

    // The data column.
    components: DataCellColumn, // A type-erased `Vec<Option<Vec<T>>>` backed as raw Arrow data.
}
```

However, allowing overlapping buckets that are sometimes unsorted is quite complicated. Luckily there is a better way!

Proposal B - the one we are likely going with

Extremely fast inserts, slower queries (requires a query cache).
```rust
struct DataStore {
    entity_streams: Map<EntityPath, EntityStream>,
}

struct EntityStream {
    // Note: buckets are shared across timelines.
    // For each timeline, maps potentially overlapping ranges to buckets.
    timelines: Map<TimeLine, IntervalMap<TimeRange, BucketId>>,

    /// If we use `Arc<Bucket>` we can return dense query results as `Set<(Arc<Bucket>, Range<usize>)>`.
    buckets: Map<BucketId, Arc<Bucket>>,
}

/// Many for each `ComponentPath`.
/// Stores some number of rows of data.
/// Split/new bucket on:
/// * different number of timelines
/// * change of datatype for any of the components
/// * if it is sorted and larger than 1M rows
/// * if it is unsorted and larger than 1k rows
struct Bucket {
    // Always sorted.
    row_ids: Vec<RowId>,

    // The time columns.
    timelines: Map<TimeLine, BucketTimeline>,

    // Future improvements: keep a per-timeline shuffle-index in case of unsorted.

    // Each `Box<dyn Array>` is effectively a `Vec<Option<Vec<T>>>` backed as raw Arrow data.
    // The outer option here is so that we can log only a `Position` one time, but not a `Color`.
    components: BTreeMap<ComponentName, Box<dyn Array>>,
}

struct BucketTimeline {
    times: Vec<TimeInt>,
    sorted: bool,
    range: TimeRange,
}
```

Proposal C

Let's add the constraint that each bucket of data has all timelines monotonically increasing together (no out-of-order). What would this give us?

This means that for each component and timeline we would store a BTreeMap of time to buckets (just like we do now), with the difference that all timelines point to the same shared buckets (deduplicating data across timelines). Each such bucket would then consist of N rows, sorted by time (for every timeline).

```rust
struct DataStore {
    component_streams: Map<ComponentPath, ComponentStream>,
}

/// One per ComponentPath.
struct ComponentStream {
    component_name: ComponentName,

    // For each timeline, maps the lowest time in each bucket to that bucket.
    timelines: Map<TimelineName, BTreeMap<TimeInt, BucketId>>,

    // Note: each piece of data is in exactly one bucket (deduplication).
    buckets: Map<BucketId, Bucket>,
}

/// Never empty.
/// All timelines are dense (no holes!).
struct Bucket {
    // This supports having heterogeneous data types for the same ComponentName,
    // as long as we split buckets when the data type changes
    // (e.g. sometimes logging a promise, sometimes logging the actual data).
    data_type: re_arrow2::DataType,

    // Future optimization: RLE
    row_ids: Vec<RowId>,

    // The time columns.
    // We always keep these sorted (or sort-on-demand, but they should all be sortable).
    // Could also be stored as arrow buffers.
    // Future optimization: RLE
    times: Map<TimelineName, Vec<TimeInt>>,

    // The data column.
    // A type-erased `Vec<Option<Vec<T>>>` backed as raw Arrow data.
    // Queries return slices into this.
    components: DataCellColumn,
}
```

For the common case where timelines increase monotonically together (in-order logging) we can have very dense storage of data with very little overhead.

Bucket insertion is similar to what we have today, but with some additional causes for splits: …
Pathological cases would degenerate into putting each row into its own bucket (e.g. when …).

Example

Input: … (each color here could also be …)

Storage: …
(split required because of a new timeline)
…
(split required because of out-of-order timelines)
Notice how each piece of data is only stored once, and how each bucket is sorted, making queries easy and fast. Notice also that the amortized overhead is minimal in the happy path (within each bucket): 16 bytes per RowId, plus 8 bytes per timeline, plus an 8-byte index per component (for the DataCellColumn). Obviously there are a lot of details to figure out here, but we should consider this corner of the solution space and what its implications would be.
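To make those amortized numbers concrete, here is a quick back-of-the-envelope calculation of my own (assuming 1M scalar rows on 3 timelines, using the per-row figures above plus 8 bytes for the f64 payload itself):

```rust
fn main() {
    const ROWS: usize = 1_000_000;
    const TIMELINES: usize = 3;
    // 16 B RowId + 8 B per timeline + 8 B component index + 8 B f64 payload:
    let per_row = 16 + 8 * TIMELINES + 8 + 8;
    // 56 bytes/row -> 56 MB total, versus the hundreds of bytes of overhead per row today.
    println!("{per_row} bytes/row -> {} MB", ROWS * per_row / 1_000_000);
}
```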
After a long meeting, we came up with and decided on Proposal B from above.
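As a rough illustration of what that choice buys (a sketch under assumptions, not actual Rerun code: the `Bucket`/`BucketTimeline` shapes are simplified from Proposal B above, and the helper below is hypothetical), a range query can hand back `(Arc<Bucket>, Range<usize>)` slices, using binary search on sorted buckets and a linear scan on the small unsorted ones:

```rust
use std::collections::BTreeMap;
use std::ops::Range;
use std::sync::Arc;

type TimeInt = i64;

struct BucketTimeline {
    times: Vec<TimeInt>, // per-row times on this timeline
    sorted: bool,
}

struct Bucket {
    // row_ids and components elided for this sketch
    timelines: BTreeMap<String, BucketTimeline>,
}

/// Which rows of `bucket` fall inside `[min, max]` on `timeline`?
/// Dense results are `(bucket, row-range)` pairs, i.e. slices into the bucket.
fn rows_in_range(
    bucket: &Arc<Bucket>,
    timeline: &str,
    min: TimeInt,
    max: TimeInt,
) -> Vec<(Arc<Bucket>, Range<usize>)> {
    let Some(tl) = bucket.timelines.get(timeline) else {
        return Vec::new();
    };
    if tl.sorted {
        // Sorted bucket: a single contiguous slice, found by binary search.
        let lo = tl.times.partition_point(|&t| t < min);
        let hi = tl.times.partition_point(|&t| t <= max);
        if lo < hi {
            vec![(bucket.clone(), lo..hi)]
        } else {
            Vec::new()
        }
    } else {
        // Unsorted bucket (kept small by the split rules): linear scan,
        // emitting each contiguous run of in-range rows as its own slice.
        let mut out = Vec::new();
        let mut i = 0;
        while i < tl.times.len() {
            if (min..=max).contains(&tl.times[i]) {
                let start = i;
                while i < tl.times.len() && (min..=max).contains(&tl.times[i]) {
                    i += 1;
                }
                out.push((bucket.clone(), start..i));
            } else {
                i += 1;
            }
        }
        out
    }
}
```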
Internal Slack thread containing real-world, third-party data and an ingestion script that will benefit greatly once this issue is resolved: https://rerunio.slack.com/archives/C040X6J573Q/p1720458010590199
This is likely done, but needs some testing to verify.
This is now all implemented, and will be available in 0.18. Logging many scalars is now orders of magnitude faster and less memory intensive.
Solutions

- TimeHistogram … #6065

Context
Every `log` call in Rerun results in a `DataRow`: Rerun's atomic unit of change. `DataRow`s are indexed by the datastore, and that index ultimately powers all the row/time-based features in Rerun, such as latest-at & (visible-time-)range queries, change detection, the time panel and time cursor, garbage collection, etc.

Each entry in this index (i.e. each row) carries a bunch of metadata:

- A `RowId`, that uniquely identifies the associated row and defines its global order (!).
- A `TimePoint`: a collection of timeline names and timestamps (generally around 3 of each in the common case).
- `DataCell`s, each of which carry some metadata of their own (… about the underlying `Array` …), so that we can know whether the cell is: …

We also maintain a smaller, but still significant, reverse index, for when we need to fetch information about a specific `RowId`.

Finally, in addition to these primary indices maintained by the datastore itself, many pieces of the app also maintain their own secondary indices via our change detection system (e.g. all the time panel stuff).
All of this easily adds up to a few hundred bytes of overhead per index entry.
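As a rough mental model of what each index entry carries (purely illustrative placeholder types of my own, not the actual Rerun definitions):

```rust
// Placeholder types, just to make the shape concrete:
type RowId = [u8; 16]; // 16 bytes; unique, defines the row's global order
type TimeInt = i64;
type Timeline = String;
struct DataCell; // per-component data plus its own metadata (name, size, datatype, ...)

// One index entry, per the description above:
struct IndexEntry {
    row_id: RowId,
    timepoint: Vec<(Timeline, TimeInt)>, // typically ~3 (timeline, timestamp) pairs
    cells: Vec<DataCell>,                // one per logged component
}
```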
Whether that becomes a problem or not from a memory standpoint depends on the amount of data that the user actually stores in each of these index entries.
For something like e.g. a point cloud or an image, the indexing overhead is dwarfed by the actual data, and everything is fine (for the most part).
For something like a scalar, the indexing overhead is orders of magnitude larger than the data itself, which leads to situations like this one (#5904), where 20MiB of scalar data in a parquet file somehow ends up with a 20GiB memory footprint once logged.
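A rough back-of-the-envelope of my own for why a blow-up of that magnitude is plausible, assuming ~300 bytes of primary-index overhead per row; the secondary indices maintained throughout the app then multiply it further:

```rust
fn main() {
    let scalars = 20 * 1024 * 1024 / 8; // ~2.6M f64 values in 20 MiB
    let per_row_overhead = 300; // "a few hundred bytes" of index metadata per row
    let gib = (scalars * per_row_overhead) as f64 / (1024.0 * 1024.0 * 1024.0);
    // ~0.7 GiB for the primary index alone, before secondary indices and copies.
    println!("~{gib:.1} GiB");
}
```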
Unfortunately the overhead issues don't stop at the memory footprint.
All row/time-based subsystems in the app must spend compute time to deal with these indices, which slows down all aspects of the viewer (including ingestion!), e.g.: …
And, of course, the infamous worst of all: range queries that span a lot of indices will have to unpack as many rows, which quickly becomes prohibitively expensive and therefore requires a lot of complex caching machinery to pull off (including the dreaded deserialization cache!).
Things once again get even worse from an overhead perspective if it turns out that the contents of the row were actually a single float value all along...
Having too many index entries / rows wrecks everything: from logging, to ingestion, to visualization and all the way to the memory footprint.
Given this, here's the $1M question: how can Rerun help users index fewer rows while still somehow meeting all their needs with no (or very little) added complexity?
Whichever solution we come up with, it should work both with static, known-ahead-of-time datasets (e.g. logging a scalar plot from a parquet file) and real-time datasets (e.g. a high-frequency IMU).
Things that either wouldn't help, or wouldn't help enough
Temporal batches
Temporal batches provide a huge DX and performance gain on the logging side, but once the data gets exploded back into rows and indexed into the store, all the problems stated above still apply.
In fact, temporal batches are just a way to make it easier to create more index entries!
Temporal batches also require the user to know their dataset ahead of time, while we want a solution that works both for AOT and RT use cases.
We do want temporal batches for other reasons, but they won't help with this issue.
Optimizing the memory footprint of index entries
Some possible avenues on that front:

- … the `log_tick` timeline.
- … `arrow::Array` …?).
- … `RowId` (varints?!).

While those would definitely be welcome improvements, I don't see that kind of optimization getting us anywhere near a viable memory footprint for scalar use cases.
This also only helps with the memory footprint; compute is still as much of an issue as before.
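To illustrate the "(varints?!)" idea for `RowId`s: since row ids are 16 bytes and define a monotonically increasing global order, consecutive ids could be delta-encoded and the deltas stored as LEB128 varints. A sketch of the encoding side only (my own illustration, assuming the ids arrive sorted; not an existing Rerun API):

```rust
/// LEB128-encode one unsigned value.
fn encode_varint(mut v: u128, out: &mut Vec<u8>) {
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte);
            return;
        }
        out.push(byte | 0x80);
    }
}

/// Delta + varint encode a sorted run of 128-bit row ids.
/// In the common monotonic case each id shrinks from 16 bytes to 1-2 bytes.
fn encode_row_ids(row_ids: &[u128]) -> Vec<u8> {
    let mut out = Vec::new();
    let mut prev = 0u128;
    for &id in row_ids {
        encode_varint(id - prev, &mut out);
        prev = id;
    }
    out
}
```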
Others?
???
Related:

- `ScalarChart` in place of `BarChart`? #5622