# Add `serve` function to C++ API #4638
### What

Limit the amount of RAM used by the WebSocket server. The WebSocket server buffers all log data in memory so that late-connecting viewers receive all the data. Previously this would consume more and more RAM until the server crashed.

There is now a limit, which defaults to 25% of the RAM. That is, the server will try to keep _its own_ memory use under that limit. Note that this behaves differently from the viewer's `--memory-limit`, which throws away old data when the total process memory exceeds the limit.

The memory limit is set by the new `--server-memory-limit` argument to `rerun`, or by an argument to the `serve` functions. Note that I opted for a different argument for the server limit than for the viewer memory limit. This is partially because it should be a lot smaller, and partially because the limit acts differently (it is a local budget rather than a total process limit).

While working on this I noticed we don't have a `serve` function in C++, so I created an issue for that: #4638

The default limit of 25% is up for discussion. It leaves the other 75% for the viewer (though if the viewer is in a browser, it will likely be using much less).
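A limit like this is typically expressed either as a percentage of total RAM or as an absolute size. The sketch below is a hypothetical illustration of interpreting such a value as a byte budget; it is not rerun's actual parser, and the accepted unit suffixes are assumptions:

```python
def parse_memory_limit(value: str, total_ram_bytes: int) -> int:
    """Interpret a memory-limit string as a byte budget.

    Accepts a percentage of total RAM ("25%") or an absolute size
    with a unit suffix ("2GB", "512MB"). Hypothetical sketch only;
    rerun's real parsing may differ.
    """
    value = value.strip()
    if value.endswith("%"):
        fraction = float(value[:-1]) / 100.0
        return int(total_ram_bytes * fraction)
    units = {"KB": 1024, "MB": 1024**2, "GB": 1024**3}
    for suffix, multiplier in units.items():
        if value.upper().endswith(suffix):
            return int(float(value[: -len(suffix)]) * multiplier)
    return int(value)  # plain byte count


# With 16 GiB of total RAM, a "25%" budget is 4 GiB:
total = 16 * 1024**3
assert parse_memory_limit("25%", total) == 4 * 1024**3
```

The key design point from the PR survives in the sketch: the result is a budget for the server's own buffers, not a cap on the whole process.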
### Checklist

* [x] I have read and agree to [Contributor Guide](https://github.com/rerun-io/rerun/blob/main/CONTRIBUTING.md) and the [Code of Conduct](https://github.com/rerun-io/rerun/blob/main/CODE_OF_CONDUCT.md)
* [x] I've included a screenshot or gif (if applicable)
* [x] I have tested the web demo (if applicable):
  * Using newly built examples: [app.rerun.io](https://app.rerun.io/pr/4636/index.html)
  * Using examples from latest `main` build: [app.rerun.io](https://app.rerun.io/pr/4636/index.html?manifest_url=https://app.rerun.io/version/main/examples_manifest.json)
  * Using full set of examples from `nightly` build: [app.rerun.io](https://app.rerun.io/pr/4636/index.html?manifest_url=https://app.rerun.io/version/nightly/examples_manifest.json)
* [x] The PR title and labels are set such as to maximize their usefulness for the next release's CHANGELOG

- [PR Build Summary](https://build.rerun.io/pr/4636)
- [Docs preview](https://rerun.io/preview/15e94a7ce913541da83675bff6bc4bb050c4b5a5/docs) <!--DOCS-PREVIEW-->
- [Examples preview](https://rerun.io/preview/15e94a7ce913541da83675bff6bc4bb050c4b5a5/examples) <!--EXAMPLES-PREVIEW-->
- [Recent benchmark results](https://build.rerun.io/graphs/crates.html)
- [Wasm size tracking](https://build.rerun.io/graphs/sizes.html)

---------

Co-authored-by: Clement Rey <[email protected]>
This would imply dragging a lot of stuff into rerun_c, meaning we should make this opt-in. But having two rerun_c artifacts seems a bit cumbersome. Not a 'quick issue' in that case, then.
(via discussion with @jleibs:) As an alternative to embedding the wasm blobs in the SDK (both in C++ and Python!), we could instead let the rerun CLI do the work: we'd start rerun in serve mode and then connect to the serving rerun. This has a bunch of advantages over the approach we're taking today in Python (and we should probably abandon that approach there as well!):
Reportedly, this already works as-is! The obvious drawback is that we have another TCP loopback indirection. In the future we could do shared-memory-based interprocess communication, but this is very unlikely to become an issue any time soon.
`RecordingStream` has `connect` and `spawn`, but no `serve`.