Commit 58f5703

Transactional saving of volume annotations (#7264)
* create volume transactions according to debounced pushqueue.push; increase efficiency of compressing-worker by increasing payload size
* make PushQueue.push more robust by avoiding concurrent execution of it (by implementing createDebouncedAbortableCallable)
* Revert "Revert "temporarily disable most CI checks"" This reverts commit d69a7cf.
* don't use AsyncTaskQueue in pushqueue anymore
* remove AsyncTaskQueue implementation + specs
* implement small AsyncFifoResolver to prevent theoretical race condition
* ensure that the save saga consumes N items from the save queue where N is the size of the queue at the time the auto-save-interval kicked in
* fix tests
* fix accidentally skipped tests; improve linting rule to avoid this; fix broken segment group test
* harden error handling in PushQueue
* move some lib modules into libs/async
* warn user when pushqueue is starving
* Apply suggestions from code review
* clean up a bit
* tune batch count constants for volume tracings; also show downloading buckets in save button tooltip
* fix race condition in AsyncFifoResolver
* fix incorrect dtype in comment
* update changelog
* improve comment
* rename pendingQueue to pendingBuckets
* fix incorrect method name
1 parent 2485549 commit 58f5703

38 files changed: +688 -462 lines

CHANGELOG.unreleased.md (+3)

@@ -21,6 +21,9 @@ For upgrade instructions, please check the [migration guide](MIGRATIONS.released

 ### Changed
 - Small messages during annotating (e.g. “finished undo”, “applying mapping…”) are now click-through so they do not block users from selecting tools. [#7239](https://github.com/scalableminds/webknossos/pull/7239)
+- Annotating volume data uses a transaction-based mechanism now. As a result, WK is more robust against partial saves (i.e., due to a crashing tab). [#7264](https://github.com/scalableminds/webknossos/pull/7264)
+- Improved speed of saving volume data. [#7264](https://github.com/scalableminds/webknossos/pull/7264)
+- Improved progress indicator when saving volume data. [#7264](https://github.com/scalableminds/webknossos/pull/7264)
 - The order of color layers can now also be manipulated in additive blend mode (see [#7188](https://github.com/scalableminds/webknossos/pull/7188)). [#7289](https://github.com/scalableminds/webknossos/pull/7289)
 - OpenID Connect authorization now fetches the server’s public key automatically. The config keys `singleSignOn.openIdConnect.publicKey` and `singleSignOn.openIdConnect.publicKeyAlgorithm` are now unused. [#7267](https://github.com/scalableminds/webknossos/pull/7267)

New file (+35): AsyncFifoResolver

@@ -0,0 +1,35 @@
+/*
+ * This class can be used to await promises
+ * in the order they were passed to
+ * orderedWaitFor.
+ *
+ * This enables scheduling of asynchronous work
+ * concurrently while ensuring that the results
+ * are processed in the order they were requested
+ * (instead of the order in which they finished).
+ *
+ * Example:
+ * const resolver = new AsyncFifoResolver();
+ * const promise1Done = resolver.orderedWaitFor(promise1);
+ * const promise2Done = resolver.orderedWaitFor(promise2);
+ *
+ * Even if promise2 resolves before promise1, promise2Done
+ * will resolve *after* promise1Done.
+ */
+
+export class AsyncFifoResolver<T> {
+  queue: Promise<T>[];
+  constructor() {
+    this.queue = [];
+  }
+
+  async orderedWaitFor(promise: Promise<T>): Promise<T> {
+    this.queue.push(promise);
+    const promiseCountToAwait = this.queue.length;
+    const retVals = await Promise.all(this.queue);
+    // Note that this.queue can have changed during the await.
+    // Find the index of the promise and trim the queue accordingly.
+    this.queue = this.queue.slice(this.queue.indexOf(promise) + 1);
+    return retVals[promiseCountToAwait - 1];
+  }
+}
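To illustrate the ordering guarantee, here is a minimal usage sketch. The sleep helper, delays, and log messages are invented for illustration; only AsyncFifoResolver itself comes from the diff above.

// Sketch only: `sleep` is a throwaway helper, not part of the diff.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

const resolver = new AsyncFifoResolver<string>();

// Kick off two pieces of work concurrently; the second finishes first.
const slow = sleep(200).then(() => "slow");
const fast = sleep(50).then(() => "fast");

resolver.orderedWaitFor(slow).then((value) => console.log("first:", value));
// This logs only after the line above has logged, even though `fast`
// settles roughly 150 ms earlier.
resolver.orderedWaitFor(fast).then((value) => console.log("second:", value));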
New file (+95): createDebouncedAbortableCallable

@@ -0,0 +1,95 @@
+import { call, type Saga } from "oxalis/model/sagas/effect-generators";
+import { buffers, Channel, channel, runSaga } from "redux-saga";
+import { delay, race, take } from "redux-saga/effects";
+
+/*
+ * This function takes a saga and a debounce threshold
+ * and returns a function F that will trigger the given saga
+ * in a debounced manner.
+ * In contrast to a normal debouncing mechanism, the saga
+ * will be cancelled if F is called while the saga is running.
+ * Note that this means that concurrent executions of the saga
+ * are impossible that way (by design).
+ *
+ * Also note that the performance of this debouncing mechanism
+ * is slower than a standard _.debounce. Also see
+ * debounced_abortable_saga.spec.ts for a small benchmark.
+ */
+export function createDebouncedAbortableCallable<T, C>(
+  fn: (param1: T) => Saga<void>,
+  debounceThreshold: number,
+  context: C,
+) {
+  // The communication with the saga is done via a channel.
+  // That way, we can expose a normal function that
+  // does the triggering by filling the channel.
+
+  // Only the most recent invocation should survive.
+  // Therefore, create a sliding buffer with size 1.
+  const buffer = buffers.sliding<T>(1);
+  const triggerChannel: Channel<T> = channel<T>(buffer);
+
+  const _task = runSaga(
+    {},
+    debouncedAbortableSagaRunner,
+    debounceThreshold,
+    triggerChannel,
+    // @ts-expect-error TS thinks fn doesnt match, but it does.
+    fn,
+    context,
+  );
+
+  return (msg: T) => {
+    triggerChannel.put(msg);
+  };
+}
+
+export function createDebouncedAbortableParameterlessCallable<C>(
+  fn: () => Saga<void>,
+  debounceThreshold: number,
+  context: C,
+) {
+  const wrappedFn = createDebouncedAbortableCallable(fn, debounceThreshold, context);
+  const dummyParameter = {};
+  return () => {
+    wrappedFn(dummyParameter);
+  };
+}
+
+function* debouncedAbortableSagaRunner<T, C>(
+  debounceThreshold: number,
+  triggerChannel: Channel<T>,
+  abortableFn: (param: T) => Saga<void>,
+  context: C,
+): Saga<void> {
+  while (true) {
+    // Wait for a trigger-call by consuming
+    // the channel.
+    let msg = yield take(triggerChannel);
+
+    // Repeatedly try to execute abortableFn (each try
+    // might be cancelled due to new initiation-requests)
+    while (true) {
+      const { debounced, latestMessage } = yield race({
+        debounced: delay(debounceThreshold),
+        latestMessage: take(triggerChannel),
+      });
+
+      if (latestMessage) {
+        msg = latestMessage;
+      }
+
+      if (debounced) {
+        const { abortingMessage } = yield race({
+          finished: call([context, abortableFn], msg),
+          abortingMessage: take(triggerChannel),
+        });
+        if (abortingMessage) {
+          msg = abortingMessage;
+        } else {
+          break;
+        }
+      }
+    }
+  }
+}
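For orientation, a minimal usage sketch under assumptions: the debouncedPush saga, the 1000 ms threshold, and the null context are invented for illustration, and the module's import path is not shown in this excerpt; only createDebouncedAbortableCallable itself comes from the diff above.

import type { Saga } from "oxalis/model/sagas/effect-generators";
import { delay } from "redux-saga/effects";

// Hypothetical saga standing in for the real push work (e.g. compressing
// and sending buckets); the delay only simulates that work here.
function* debouncedPush(item: { id: number }): Saga<void> {
  yield delay(500);
  console.log("pushed item", item.id);
}

// `null` is enough as context here because debouncedPush does not use `this`.
const triggerPush = createDebouncedAbortableCallable(debouncedPush, 1000, null);

// Calls within the debounce window collapse into one execution that sees the
// newest argument; a call while debouncedPush runs cancels and restarts it.
triggerPush({ id: 1 });
triggerPush({ id: 2 });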

frontend/javascripts/libs/latest_task_executor.ts renamed to frontend/javascripts/libs/async/latest_task_executor.ts (+5 -1)

@@ -1,4 +1,4 @@
-import Deferred from "libs/deferred";
+import Deferred from "libs/async/deferred";
 type Task<T> = () => Promise<T>;
 export const SKIPPED_TASK_REASON = "Skipped task";
 /*
@@ -11,6 +11,10 @@ export const SKIPPED_TASK_REASON = "Skipped task";
  * LatestTaskExecutor instance.
  *
  * See the corresponding spec for examples.
+ *
+ * If you need the same behavior plus cancellation of running
+ * tasks, take a look at the saga-based `createDebouncedAbortableCallable`
+ * utility.
  */

 export default class LatestTaskExecutor<T> {
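As a rough illustration of the behavior described in that comment, a hedged sketch follows. The `schedule` method name and the exact rejection value are assumed here; they are not visible in the hunk above.

import LatestTaskExecutor, { SKIPPED_TASK_REASON } from "libs/async/latest_task_executor";

const executor = new LatestTaskExecutor<Response>();

// Assuming a `schedule(task)` API: the first task starts immediately;
// while it runs, only the most recently scheduled follow-up survives.
executor.schedule(() => fetch("/api/save?version=1"));

executor
  .schedule(() => fetch("/api/save?version=2"))
  .catch((err) => {
    // Expected here: this task is superseded by version 3 below; the module
    // exports SKIPPED_TASK_REASON for exactly this case.
    if (String(err).includes(SKIPPED_TASK_REASON)) {
      console.log("save of version 2 was skipped");
    }
  });

executor.schedule(() => fetch("/api/save?version=3")); // runs after version 1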

frontend/javascripts/libs/task_pool.ts renamed to frontend/javascripts/libs/async/task_pool.ts (+5 -8)

@@ -1,5 +1,5 @@
-import type { Saga, Task } from "oxalis/model/sagas/effect-generators";
-import { join, call, fork } from "typed-redux-saga";
+import type { Saga } from "oxalis/model/sagas/effect-generators";
+import { join, call, fork, FixedTask } from "typed-redux-saga";

 /*
 Given an array of async tasks, processTaskWithPool
@@ -10,12 +10,11 @@ export default function* processTaskWithPool(
   tasks: Array<() => Saga<void>>,
   poolSize: number,
 ): Saga<void> {
-  const startedTasks: Array<Task<void>> = [];
+  const startedTasks: Array<FixedTask<void>> = [];
   let isFinalResolveScheduled = false;
   let error: Error | null = null;

-  // @ts-expect-error ts-migrate(7006) FIXME: Parameter 'fn' implicitly has an 'any' type.
-  function* forkSafely(fn): Saga<void> {
+  function* forkSafely(fn: () => Saga<void>): Saga<void> {
     // Errors from forked tasks cannot be caught, see https://redux-saga.js.org/docs/advanced/ForkModel/#error-propagation
     // However, the task pool should not abort if a single task fails.
     // Therefore, use this wrapper to safely execute all tasks and possibly rethrow the last error in the end.
@@ -32,17 +31,15 @@ export default function* processTaskWithPool(
       isFinalResolveScheduled = true;
       // All tasks were kicked off, which is why all tasks can be
       // awaited now together.
-      // @ts-expect-error ts-migrate(2769) FIXME: No overload matches this call.
       yield* join(startedTasks);
       if (error != null) throw error;
     }

     return;
   }

-  const task = tasks.shift();
+  const task = tasks.shift() as () => Saga<void>;
   const newTask = yield* fork(forkSafely, task);
-  // @ts-expect-error ts-migrate(2345) FIXME: Argument of type 'FixedTask<void>' is not assignab... Remove this comment to see the full error message
   startedTasks.push(newTask);
   // If that task is done, process a new one (that way,
   // the pool size stays constant until the queue is almost empty.)
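A small usage sketch, assuming invented task bodies, an invented endpoint, and a pool size of 5; processTaskWithPool and its import path are derived from the rename above.

import { call } from "typed-redux-saga";
import type { Saga } from "oxalis/model/sagas/effect-generators";
import processTaskWithPool from "libs/async/task_pool";

// Hypothetical async worker; stands in for e.g. sending one batch of buckets.
async function sendBatch(batchId: number): Promise<void> {
  await fetch(`/api/batches/${batchId}`, { method: "POST" });
}

function* sendAllBatches(batchIds: number[]): Saga<void> {
  // Wrap each batch in a parameterless saga, as the pool expects.
  const tasks = batchIds.map(
    (id) =>
      function* task(): Saga<void> {
        yield* call(sendBatch, id);
      },
  );
  // At most five batches are in flight at any time.
  yield* call(processTaskWithPool, tasks, 5);
}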

frontend/javascripts/libs/async_task_queue.ts (-143)

This file was deleted.

frontend/javascripts/libs/worker_pool.ts renamed to frontend/javascripts/libs/webworker_pool.ts (+3 -3)

@@ -1,11 +1,11 @@
 import _ from "lodash";
-export default class WorkerPool<P, R> {
+export default class WebWorkerPool<P, R> {
   // This class can be used to instantiate multiple web workers
   // which are then used for computation in a simple round-robin manner.
   //
   // Example:
-  // const compressionPool = new WorkerPool(
-  //   () => createWorker(ByteArrayToLz4Base64Worker),
+  // const compressionPool = new WebWorkerPool(
+  //   () => createWorker(ByteArraysToLz4Base64Worker),
   //   COMPRESSION_WORKER_COUNT,
   // );
   // const promise1 = compressionPool.submit(data1);
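Building on the example in that class comment, a hedged sketch of how the pool is driven: data1 and data2 are placeholders, and createWorker, ByteArraysToLz4Base64Worker, and COMPRESSION_WORKER_COUNT are taken from the comment above rather than defined here.

// submit() distributes payloads across the workers round-robin and returns
// one promise per payload, so independent submissions can run in parallel.
const compressionPool = new WebWorkerPool(
  () => createWorker(ByteArraysToLz4Base64Worker),
  COMPRESSION_WORKER_COUNT,
);

const promise1 = compressionPool.submit(data1);
const promise2 = compressionPool.submit(data2);

const [compressed1, compressed2] = await Promise.all([promise1, promise2]);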
