Transactional Batch Saving #3829

Merged: 54 commits from `save-batch-transactions` into `master` on Mar 14, 2019.

Commits
- `f2d225d` [WIP] batch transactions (fm3, Feb 26, 2019)
- `6181cbb` fix compilation (fm3, Feb 26, 2019)
- `f412778` change front-end to send transactional meta data when saving (philippotto, Feb 26, 2019)
- `4b587ed` introduce transactionBatchStore in tracingService (fm3, Feb 26, 2019)
- `89802e0` iterate on transactional batch saving (fm3, Feb 26, 2019)
- `df60181` Merge branch 'master' into save-batch-transactions (fm3, Feb 26, 2019)
- `4981e7d` commit update actions when the final one arrives (fm3, Feb 26, 2019)
- `2c0fd40` enforce right order for updategroup handling (fm3, Feb 28, 2019)
- `03aabda` Merge branch 'master' into save-batch-transactions (fm3, Feb 28, 2019)
- `1c4aa62` use transactionId for handledGroupCache (fm3, Mar 4, 2019)
- `17b3944` remove transactions from uncommitted temporary store after they are c… (fm3, Mar 4, 2019)
- `5d14628` prepare redis temporary store (fm3, Mar 4, 2019)
- `460eae2` use redis temporary store for tracing updates (fm3, Mar 4, 2019)
- `b600fb2` [WIP] save only strings in redis (fm3, Mar 5, 2019)
- `d1fb921` save update groups as json in redis. refactor (fm3, Mar 6, 2019)
- `1b4e92f` clean up handled group id store (fm3, Mar 6, 2019)
- `904b732` remove redundant requestId from save requests (fm3, Mar 6, 2019)
- `cdb7da3` remove unused import (fm3, Mar 6, 2019)
- `84c3a0f` changelog (fm3, Mar 6, 2019)
- `b1ed6ef` Merge branch 'master' into save-batch-transactions (fm3, Mar 6, 2019)
- `b4af7c1` add redis health check + move redis address to config (fm3, Mar 7, 2019)
- `9f547ce` undo changing autologin (fm3, Mar 7, 2019)
- `5864999` [WIP] redis in docker-compose (fm3, Mar 7, 2019)
- `3e1ab40` fox-powered redis error handling (fm3, Mar 11, 2019)
- `24818a7` Update README.md (jstriebel, Mar 11, 2019)
- `a20d7a0` fix frontend tests (fm3, Mar 12, 2019)
- `2fd3bad` Merge branch 'master' into save-batch-transactions (fm3, Mar 12, 2019)
- `7ed78e6` fix lint (fm3, Mar 12, 2019)
- `6a85118` wait for saving to handled group id store (fm3, Mar 12, 2019)
- `95c68ea` Update docker-compose.yml (jstriebel, Mar 12, 2019)
- `a621309` Merge branch 'save-batch-transactions' of github.com:scalableminds/we… (fm3, Mar 12, 2019)
- `48c7634` include redis in tracingstore docker-compose (fm3, Mar 12, 2019)
- `daa65a7` fix remaining tests (philippotto, Mar 12, 2019)
- `6b03b5f` add missing uid mock file (fm3, Mar 12, 2019)
- `df367fd` pretty-backend (fm3, Mar 12, 2019)
- `ee4bc44` link redis in docker-compose (fm3, Mar 12, 2019)
- `e39093b` more logging for handledgroup tracing saving (fm3, Mar 12, 2019)
- `26f96dc` simplify redis links in docker-compose (fm3, Mar 12, 2019)
- `00f78c1` set redis address in docker-compose (fm3, Mar 12, 2019)
- `09f19e4` fix redis error handling (fm3, Mar 12, 2019)
- `1678a6a` migration guide (fm3, Mar 12, 2019)
- `e7df8b5` Merge branch 'master' into save-batch-transactions (fm3, Mar 13, 2019)
- `c5acaf9` streamline debug output (fm3, Mar 13, 2019)
- `38f378f` formatter (fm3, Mar 13, 2019)
- `3ced60c` Update tracingstore-docker.conf (jstriebel, Mar 13, 2019)
- `47f4ead` Update README.md (jstriebel, Mar 13, 2019)
- `ed6d472` Update application.conf (fm3, Mar 13, 2019)
- `ec04017` Merge branch 'master' into save-batch-transactions (fm3, Mar 13, 2019)
- `3509469` Merge branch 'master' into save-batch-transactions (jstriebel, Mar 13, 2019)
- `60f4800` remove obsolete todo comment (fm3, Mar 14, 2019)
- `4bfb262` merge master into save-batch-transactions (fm3, Mar 14, 2019)
- `3d2808c` pretty (fm3, Mar 14, 2019)
- `3ed2cba` Update docker-compose.yml (jstriebel, Mar 14, 2019)
- `c9e02e3` Update webknossos-tracingstore.service (jstriebel, Mar 14, 2019)
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -23,6 +23,7 @@ For upgrade instructions, please check the [migration guide](MIGRATIONS.md).
- Brush size is independent of zoom value, now. This change simplifies volume annotations, as brush sizes can be adapted to certain structures (e.g., vesicles) and don't need to be changed when zooming. [#3868](https://github.com/scalableminds/webknossos/pull/3889)

### Fixed
- Fixed a bug where failed large save requests led to inconsistent tracings on the server. [#3829](https://github.com/scalableminds/webknossos/pull/3829)
- Fixed the setting which enables to hide the planes within the 3D viewport. [#3857](https://github.com/scalableminds/webknossos/pull/3857)
- Fixed a bug which allowed the brush size to become negative when using shortcuts. [#3861](https://github.com/scalableminds/webknossos/pull/3861)
- Fixed interpolation along z-axis. [#3888](https://github.com/scalableminds/webknossos/pull/3888)
1 change: 1 addition & 0 deletions MIGRATIONS.md
@@ -5,6 +5,7 @@ This project adheres to [Calendar Versioning](http://calver.org/) `0Y.0M.MICRO`.
User-facing changes are documented in the [changelog](CHANGELOG.md).

## Unreleased
- Redis is now needed for the tracingstore module. Make sure to install redis in your setup and adapt the config keys `tracingstore.redis.address` and `tracingstore.redis.port`.
- To ensure that the existing behavior for loading data is preserved ("best quality first" as opposed to the new "progressive quality" default) execute: `update webknossos.user_datasetconfigurations set configuration = configuration || jsonb '{"loadingStrategy":"BEST_QUALITY_FIRST"}'`. See [#3801](https://github.com/scalableminds/webknossos/pull/3801) for additional context.

### Postgres Evolutions:
10 changes: 7 additions & 3 deletions README.md
@@ -51,6 +51,7 @@ See the [wiki](https://github.com/scalableminds/webknossos/wiki/Try-setup) for i
* [Oracle JDK 8+](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or [Open JDK 8+](http://openjdk.java.net/) (full JDK, JRE is not enough)
* [sbt](http://www.scala-sbt.org/)
* [PostgreSQL 10](https://www.postgresql.org/)
* [Redis 5+](https://redis.io/)
* [node.js 9+](http://nodejs.org/download/)
* [yarn package manager](https://yarnpkg.com/)
* [git](http://git-scm.com/downloads)
@@ -66,7 +67,7 @@ Or install Java manually and run:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

# Install git, node.js, postgres, sbt, gfind, gsed
brew install git node postgresql sbt findutils coreutils gnu-sed
brew install git node postgresql sbt findutils coreutils gnu-sed redis
npm install -g yarn

# Start postgres
@@ -98,7 +99,7 @@ echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/source

# Installing everything
sudo apt-get update
sudo apt-get install -y git postgresql-10 postgresql-client-10 nodejs scala sbt openjdk-8-jdk yarn
sudo apt-get install -y git postgresql-10 postgresql-client-10 nodejs scala sbt openjdk-8-jdk yarn redis-server

# Assign a password to PostgreSQL user
sudo -u postgres psql -c "ALTER USER postgres WITH ENCRYPTED PASSWORD 'postgres';"
@@ -119,6 +120,9 @@ See: http://www.scala-sbt.org/release/docs/Getting-Started/Setup.html
* Install PostgreSQL from https://www.postgresql.org/download/
* PostgreSQL version **10+ is required**

##### Redis
* Install Redis from https://redis.io/download

##### node.js & yarn
* Install node from http://nodejs.org/download/
* node version **9+ is required**
@@ -129,7 +133,7 @@ See: http://www.scala-sbt.org/release/docs/Getting-Started/Setup.html
yarn start
```
Will fetch all Scala, Java and node dependencies and run the application on Port 9000.
Make sure that the PostgreSQL service is running before you start the application.
Make sure that the PostgreSQL and Redis services are running before you start the application.

## Production setup
[See wiki](https://github.com/scalableminds/webknossos/wiki/Production-setup) for recommended production setup.
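Side note on the README changes above: after installing Redis for the development setup, it usually has to be started before `yarn start` will work. A minimal sketch, assuming the package names from the install commands above (Homebrew `redis`, Debian/Ubuntu `redis-server`); exact service names may differ on your system:

```
# macOS (Homebrew): run Redis as a background service
brew services start redis

# Ubuntu/Debian (redis-server package): start the systemd unit
sudo systemctl start redis-server

# Alternatively, run it in the foreground for local development
redis-server
```
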
4 changes: 4 additions & 0 deletions conf/application.conf
@@ -38,6 +38,10 @@ tracingstore {
address = "localhost"
port = 7155
}
redis {
address = "localhost"
port = 6379
}
}

http {
18 changes: 16 additions & 2 deletions docker-compose.yml
@@ -11,6 +11,7 @@ services:
links:
- "fossildb-persisted:fossildb"
- "postgres-persisted:postgres"
- redis
depends_on:
postgres-persisted:
condition: service_healthy
@@ -21,6 +22,7 @@
- -Djava.net.preferIPv4Stack=true
- -Dhttp.address=0.0.0.0
- -Dtracingstore.fossildb.address=fossildb
- -Dtracingstore.redis.address=redis
- -Dslick.db.url=jdbc:postgresql://postgres/webknossos
- -Dapplication.insertInitialData=false
- -Dapplication.authentication.enableDevAutoLogin=false
@@ -65,9 +67,11 @@
- -Dhttp.address=0.0.0.0
- -Dhttp.uri=http://webknossos-datastore:9050
- -Dtracingstore.fossildb.address=fossildb
- -Dtracingstore.redis.address=redis
- -Ddatastore.oxalis.uri=webknossos:9000
links:
- fossildb-persisted:fossildb
- redis
depends_on:
fossildb-persisted:
condition: service_healthy
@@ -117,6 +121,7 @@ services:
links:
- "fossildb-dev:fossildb"
- "postgres-dev:postgres"
- redis
depends_on:
postgres-dev:
condition: service_healthy
@@ -133,7 +138,8 @@
"run
-Djava.net.preferIPv4Stack=true
-Dhttp.address=0.0.0.0
-Dtracingstore.fossildb.address=fossildb"
-Dtracingstore.fossildb.address=fossildb
-Dtracingstore.redis.address=redis"
stdin_open: true

# Tests
@@ -159,6 +165,7 @@
links:
- postgres
- fossildb
- redis
depends_on:
postgres:
condition: service_healthy
@@ -173,7 +180,8 @@
sbt
-v
"testOnly e2e.* --
-Dtracingstore.fossildb.address=fossildb"
-Dtracingstore.fossildb.address=fossildb
-Dtracingstore.redis.address=redis"
volumes:
- ./binaryData/Connectomics department:/home/${USER_NAME:-sbt-user}/webknossos/binaryData/Organization_X

@@ -261,3 +269,9 @@ services:
volumes:
- "./fossildb-dev/data:/fossildb/data"
- "./fossildb-dev/backup:/fossildb/backup"

# Redis
redis:
image: redis:5.0-alpine
command:
- redis-server
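
A quick way to check that the new service is reachable from the compose setup (a sketch, assuming the service name `redis` defined above and the `redis-cli` binary bundled with the official image):

```
docker-compose exec redis redis-cli ping
# expected reply: PONG
```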
16 changes: 10 additions & 6 deletions frontend/javascripts/oxalis/model/actions/save_actions.js
@@ -1,13 +1,15 @@
// @flow
import type { UpdateAction } from "oxalis/model/sagas/update_actions";
import { getUid } from "libs/uid_generator";
import Date from "libs/date";

type Tracing = "skeleton" | "volume";

type PushSaveQueueAction = {
type: "PUSH_SAVE_QUEUE",
type PushSaveQueueTransaction = {
type: "PUSH_SAVE_QUEUE_TRANSACTION",
items: Array<UpdateAction>,
tracingType: Tracing,
transactionId: string,
};
type SaveNowAction = { type: "SAVE_NOW" };
type ShiftSaveQueueAction = {
@@ -30,7 +32,7 @@ type SetVersionNumberAction = {
type UndoAction = { type: "UNDO" };
type RedoAction = { type: "REDO" };
export type SaveAction =
| PushSaveQueueAction
| PushSaveQueueTransaction
| SaveNowAction
| ShiftSaveQueueAction
| DiscardSaveQueuesAction
@@ -40,13 +42,15 @@ export type SaveAction =
| UndoAction
| RedoAction;

export const pushSaveQueueAction = (
export const pushSaveQueueTransaction = (
items: Array<UpdateAction>,
tracingType: Tracing,
): PushSaveQueueAction => ({
type: "PUSH_SAVE_QUEUE",
transactionId: string = getUid(),
): PushSaveQueueTransaction => ({
type: "PUSH_SAVE_QUEUE_TRANSACTION",
items,
tracingType,
transactionId,
});

export const saveNowAction = (): SaveNowAction => ({
@@ -11,7 +11,7 @@ import {
getByteCountFromLayer,
} from "oxalis/model/accessors/dataset_accessor";
import { parseAsMaybe } from "libs/utils";
import { pushSaveQueueAction } from "oxalis/model/actions/save_actions";
import { pushSaveQueueTransaction } from "oxalis/model/actions/save_actions";
import { updateBucket } from "oxalis/model/sagas/update_actions";
import ByteArrayToBase64Worker from "oxalis/workers/byte_array_to_base64.worker";
import DecodeFourBitWorker from "oxalis/workers/decode_four_bit.worker";
@@ -178,5 +178,5 @@ export async function sendToStore(batch: Array<DataBucket>): Promise<void> {
const base64 = await byteArrayToBase64(bucketData);
items.push(updateBucket(bucketInfo, base64));
}
Store.dispatch(pushSaveQueueAction(items, "volume"));
Store.dispatch(pushSaveQueueTransaction(items, "volume"));
}
@@ -11,7 +11,7 @@ const actionBlacklist = [
"MOVE_FLYCAM",
"MOVE_FLYCAM_ORTHO",
"MOVE_PLANE_FLYCAM_ORTHO",
"PUSH_SAVE_QUEUE",
"PUSH_SAVE_QUEUE_TRANSACTION",
"SET_DIRECTION",
"SET_INPUT_CATCHER_RECT",
"SET_MOUSE_POSITION",
39 changes: 24 additions & 15 deletions frontend/javascripts/oxalis/model/reducers/save_reducer.js
@@ -8,36 +8,45 @@ import update from "immutability-helper";

import type { Action } from "oxalis/model/actions/actions";
import type { OxalisState } from "oxalis/store";
import { getActionLog } from "oxalis/model/helpers/action_logger_middleware";
import { getStats } from "oxalis/model/accessors/skeletontracing_accessor";
import { maximumActionCountPerBatch } from "oxalis/model/sagas/save_saga_constants";
import Date from "libs/date";
import * as Utils from "libs/utils";
import { getActionLog } from "oxalis/model/helpers/action_logger_middleware";

function SaveReducer(state: OxalisState, action: Action): OxalisState {
switch (action.type) {
case "PUSH_SAVE_QUEUE": {
case "PUSH_SAVE_QUEUE_TRANSACTION": {
// Only report tracing statistics, if a "real" update to the tracing happened
const stats = _.some(action.items, ua => ua.name !== "updateTracing")
? Utils.toNullable(getStats(state.tracing))
: null;
const { items } = action;
const { items, transactionId } = action;

if (items.length > 0) {
const updateActionChunks = _.chunk(items, maximumActionCountPerBatch);
const transactionGroupCount = updateActionChunks.length;

const oldQueue = state.save.queue[action.tracingType];
const newQueue = oldQueue.concat(
updateActionChunks.map((actions, transactionGroupIndex) => ({
// Placeholder, the version number will be updated before sending to the server
version: -1,
transactionId,
transactionGroupCount,
transactionGroupIndex,
timestamp: Date.now(),
actions,
stats,
// TODO: Redux Action Log context for debugging purposes. Remove this again if it is no longer needed.
info: JSON.stringify(getActionLog().slice(-50)),
})),
);
return update(state, {
save: {
queue: {
[action.tracingType]: {
$push: [
{
// Placeholder, the version number and requestId will be updated before sending to the server
version: -1,
requestId: "",
timestamp: Date.now(),
actions: items,
stats,
// TODO: Redux Action Log context for debugging purposes. Remove this again if it is no longer needed.
info: JSON.stringify(getActionLog().slice(-50)),
},
],
$set: newQueue,
},
},
progressInfo: {
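
For readers skimming the reducer diff above: the essential change is that one pushed transaction is chunked into update-action groups that all carry the same `transactionId` plus their index and total count, so the server can wait for the final group before committing (see the commit "commit update actions when the final one arrives"). A minimal, self-contained sketch of that chunking, not the exact reducer code and untyped for brevity; `maximumActionCountPerBatch` mirrors the constant introduced in the new constants file below:

```js
// Sketch only: how one transaction is split into groups sharing a transactionId.
import _ from "lodash";

const maximumActionCountPerBatch = 5000;

function buildTransactionGroups(items, transactionId) {
  const chunks = _.chunk(items, maximumActionCountPerBatch);
  const transactionGroupCount = chunks.length;
  return chunks.map((actions, transactionGroupIndex) => ({
    version: -1, // placeholder; real version numbers are assigned right before sending
    transactionId,
    transactionGroupCount,
    transactionGroupIndex,
    timestamp: Date.now(),
    actions,
  }));
}

// The tracingstore can then treat all groups with the same transactionId as one unit
// and commit only once the group with transactionGroupIndex === transactionGroupCount - 1 arrives.
```
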
37 changes: 11 additions & 26 deletions frontend/javascripts/oxalis/model/sagas/save_saga.js
@@ -8,6 +8,13 @@ import Maybe from "data.maybe";
import _ from "lodash";

import { FlycamActions } from "oxalis/model/actions/flycam_actions";
import {
PUSH_THROTTLE_TIME,
SAVE_RETRY_WAITING_TIME,
MAX_SAVE_RETRY_WAITING_TIME,
UNDO_HISTORY_SIZE,
maximumActionCountPerSave,
} from "oxalis/model/sagas/save_saga_constants";
import type { Tracing, Flycam, SaveQueueEntry } from "oxalis/store";
import { type UpdateAction, moveTreeComponent } from "oxalis/model/sagas/update_actions";
import { VolumeTracingSaveRelevantActions } from "oxalis/model/actions/volumetracing_actions";
@@ -33,26 +40,17 @@
shiftSaveQueueAction,
setSaveBusyAction,
setLastSaveTimestampAction,
pushSaveQueueAction,
pushSaveQueueTransaction,
setVersionNumberAction,
} from "oxalis/model/actions/save_actions";
import Date from "libs/date";
import Request, { type RequestOptionsWithData } from "libs/request";
import Toast from "libs/toast";
import messages from "messages";
import window, { alert, document, location } from "libs/window";
import { getUid } from "libs/uid_generator";

import { enforceSkeletonTracing } from "../accessors/skeletontracing_accessor";

const PUSH_THROTTLE_TIME = 30000; // 30s
const SAVE_RETRY_WAITING_TIME = 2000;
const MAX_SAVE_RETRY_WAITING_TIME = 300000; // 5m
const UNDO_HISTORY_SIZE = 20;

export const maximumActionCountPerBatch = 5000;
const maximumActionCountPerSave = 15000;

export function* collectUndoStates(): Saga<void> {
const undoStack = [];
const redoStack = [];
@@ -106,13 +104,13 @@ export function* pushTracingTypeAsync(tracingType: "skeleton" | "volume"): Saga<
yield* put(setLastSaveTimestampAction(tracingType));
while (true) {
let saveQueue;
// Check whether the save queue is actually empty, the PUSH_SAVE_QUEUE action
// Check whether the save queue is actually empty, the PUSH_SAVE_QUEUE_TRANSACTION action
// could have been triggered during the call to sendRequestToServer

saveQueue = yield* select(state => state.save.queue[tracingType]);
if (saveQueue.length === 0) {
// Save queue is empty, wait for push event
yield* take("PUSH_SAVE_QUEUE");
yield* take("PUSH_SAVE_QUEUE_TRANSACTION");
}
yield* race({
timeout: _call(delay, PUSH_THROTTLE_TIME),
@@ -174,8 +172,6 @@ export function* sendRequestToServer(tracingType: "skeleton" | "volume"): Saga<v
const tracingStoreUrl = yield* select(state => state.tracing.tracingStore.url);
compactedSaveQueue = addVersionNumbers(compactedSaveQueue, version);

compactedSaveQueue = addRequestIds(compactedSaveQueue, getUid());

let retryCount = 0;
while (true) {
try {
@@ -230,13 +226,6 @@ export function addVersionNumbers(
return updateActionsBatches.map(batch => Object.assign({}, batch, { version: ++lastVersion }));
}

export function addRequestIds(
updateActionsBatches: Array<SaveQueueEntry>,
requestId: string,
): Array<SaveQueueEntry> {
return updateActionsBatches.map(batch => Object.assign({}, batch, { requestId }));
}

function removeUnrelevantUpdateActions(updateActions: Array<UpdateAction>) {
// This functions removes update actions that should not be sent to the server.
return updateActions.filter(ua => ua.name !== "toggleTree");
@@ -484,11 +473,7 @@ export function* saveTracingTypeAsync(tracingType: "skeleton" | "volume"): Saga<
),
);
if (items.length > 0) {
const updateActionChunks = _.chunk(items, maximumActionCountPerBatch);

for (const updateActionChunk of updateActionChunks) {
yield* put(pushSaveQueueAction(updateActionChunk, tracingType));
}
yield* put(pushSaveQueueTransaction(items, tracingType));
}
prevTracing = tracing;
prevFlycam = flycam;
@@ -0,0 +1,9 @@
// @flow

export const PUSH_THROTTLE_TIME = 30000; // 30s
export const SAVE_RETRY_WAITING_TIME = 2000;
export const MAX_SAVE_RETRY_WAITING_TIME = 300000; // 5m
export const UNDO_HISTORY_SIZE = 20;

export const maximumActionCountPerBatch = 5000;
export const maximumActionCountPerSave = 15000;
@@ -186,7 +186,7 @@ export function* watchSkeletonTracingAsync(): Saga<void> {
],
centerActiveNode,
);
yield _throttle(5000, "PUSH_SAVE_QUEUE", watchTracingConsistency);
yield _throttle(5000, "PUSH_SAVE_QUEUE_TRANSACTION", watchTracingConsistency);
yield* fork(watchFailedNodeCreations);
yield* fork(watchBranchPointDeletion);
yield* fork(watchVersionRestoreParam);