
Doc: add a "How to record from multiple places" #4055

@teh-cmc

Description


I keep answering questions of the form "I have multiple machines recording different PoVs of the same scene; how can I visualize everything in one place?".
We need a how-to guide for this.

Here's my most recent answer on the matter:

Rerun has the notion of a Recording ID: https://www.rerun.io/docs/concepts/apps-and-recordings#recording-id.
Any recorded datasets that share the same Recording ID will be visualized together.

By default, we generate a random ID every time you start a new logging session, but you can override that behavior, e.g.:

# Process 1 logs some spheres
./app1.py  # rr.init(recording_id='my_shared_recording'); rr.save('/tmp/recording1.rrd')

# Process 2 logs some cubes
./app2.py  # rr.init(recording_id='my_shared_recording'); rr.save('/tmp/recording2.rrd')

# Visualize a 3D scene with both spheres and cubes
rerun /tmp/recording*.rrd  # load both recording files
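
For concreteness, here is a minimal sketch of what those two scripts might contain. The application ID "my_app", the entity paths, and the logged data are illustrative assumptions rather than anything prescribed here; the only part that matters for merging is the shared recording_id:

# app1.py -- logs some spheres into the shared recording (illustrative sketch)
import numpy as np
import rerun as rr

rr.init("my_app", recording_id="my_shared_recording")  # explicit, shared Recording ID
rr.save("/tmp/recording1.rrd")                         # this process writes its own file

positions = np.random.uniform(-3.0, 3.0, size=(10, 3))
rr.log("world/spheres", rr.Points3D(positions, radii=0.5))

# app2.py -- logs some cubes into the same recording (illustrative sketch)
import numpy as np
import rerun as rr

rr.init("my_app", recording_id="my_shared_recording")  # same Recording ID as app1.py
rr.save("/tmp/recording2.rrd")

centers = np.random.uniform(-3.0, 3.0, size=(5, 3))
rr.log("world/cubes", rr.Boxes3D(centers=centers, half_sizes=np.full((5, 3), 0.5)))

Loading both files with `rerun /tmp/recording*.rrd` then shows the spheres and the cubes in the same 3D scene, because the Recording IDs match.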

It's up to you to decide where each recording ends up:

  • all processes could stream their share in real-time to a running Rerun Viewer (see the sketch after this list),
  • or each process could write its own file on disk, to be loaded into a Viewer later,
  • or some combination of the two.
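
As a sketch of the first option, each process can stream to an already-running Viewer instead of saving a file. The rr.connect() call with its default address reflects the SDK at the time of writing and is an assumption about how your deployment is set up:

# Start a viewer first (e.g. run `rerun` in a terminal), then in each process:
import rerun as rr

rr.init("my_app", recording_id="my_shared_recording")
rr.connect()  # stream subsequent log calls to the viewer on the default address

rr.log("world/spheres", rr.Points3D([[0.0, 0.0, 0.0]], radii=0.5))

Because every process connects with the same Recording ID, the Viewer merges their streams into a single recording.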

We do not yet provide a way to merge multiple recording files into a single one directly from the CLI, although you can load all of them in the Viewer and then use the save feature.
(Caveat: the Viewer's save feature is currently sub-optimal, see #3091.)

Related note: we do support logging to a file in a streaming fashion, but we do not yet support visualizing a file in a streaming fashion. I'll open an issue.
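
A sketch of the streaming-to-file side of that note, under the same illustrative assumptions as above: calling rr.save() right after rr.init() installs a file sink, so each subsequent rr.log() call is written to the .rrd file as it happens rather than all at once at shutdown.

import numpy as np
import rerun as rr

rr.init("my_app", recording_id="my_shared_recording")
rr.save("/tmp/recording_live.rrd")  # data is streamed to this file as it is logged

for frame in range(100):
    rr.set_time_sequence("frame", frame)
    rr.log("world/points", rr.Points3D(np.random.uniform(-1.0, 1.0, size=(20, 3))))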

Metadata


Labels

user-request (This is a pressing issue for one of our users), 📖 documentation (Improvements or additions to documentation)
