
Commit dca6181

Merge branch 'docs' into docs_lili
* docs:
  - Split cells via Min Cut (#5885)
  - Clean up backend util package (#6048)
  - Guard against empty saves (#6052)
  - Time tracking: Do not fail on empty timespans list (#6051)
  - Fix clip button changing position (#6050)
  - Include ParamFailure values in error chains (#6045)
  - Fix non-32-aligned bucket requests (#6047)
  - Don't enforce save state when saving is triggered by a timeout and reduce tracing layout analytics event count (#5999)
  - Bump cached-path-relative from 1.0.2 to 1.1.0 (#5994)
  - Volume annotation download: zip with BEST_SPEED (#6036)
  - Sensible scalebar values (#6034)
  - Faster CircleCI builds (#6040)
  - move to Google Analytics 4 (#6031)
  - Fix nightly (fix tokens, upgrade puppeteer) (#6032)
  - Add neuron reconstruction job backend and frontend part (#5922)
  - Allow uploading multi-layer volume annotations (#6028)
2 parents 622d905 + 5fd7e21 commit dca6181

File tree: 160 files changed (+3609 -2284 lines changed)


.circleci/config.yml (+6 -4)

@@ -2,7 +2,9 @@ version: 2
 jobs:
   build_test_deploy:
     machine:
-      image: ubuntu-1604:201903-01
+      image: ubuntu-2004:202111-02
+      docker_layer_caching: true
+      resource_class: large
     environment:
       USER_NAME: circleci
       USER_UID: 1001
@@ -253,7 +255,7 @@ jobs:
             curl
             -X POST
             -H "X-Auth-Token: $RELEASE_API_TOKEN"
-            https://kube.scm.io/hooks/remove/webknossos/dev/master?user=CI+%28nightly%29
+            https://kubernetix.scm.io/hooks/remove/webknossos/dev/master?user=CI+%28nightly%29
       - run:
           name: Wait 3min
           command: sleep 180
@@ -263,7 +265,7 @@ jobs:
            curl
            -X POST
            -H "X-Auth-Token: $RELEASE_API_TOKEN"
-           https://kube.scm.io/hooks/install/webknossos/dev/master?user=CI+%28nightly%29
+           https://kubernetix.scm.io/hooks/install/webknossos/dev/master?user=CI+%28nightly%29
       - run:
          name: Install dependencies and sleep at least 3min
          command: |
@@ -272,7 +274,7 @@ jobs:
            wait
       - run:
          name: Refresh datasets
-         command: curl https://master.webknossos.xyz/data/triggers/checkInboxBlocking?token=secretSampleUserToken
+         command: curl -X POST --fail https://master.webknossos.xyz/data/triggers/checkInboxBlocking?token=$WK_AUTH_TOKEN
       - run:
          name: Run screenshot-tests
          command: |

CHANGELOG.unreleased.md (+8 -1)

@@ -11,26 +11,33 @@ For upgrade instructions, please check the [migration guide](MIGRATIONS.released
 [Commits](https://github.com/scalableminds/webknossos/compare/22.02.0...HEAD)
 
 ### Added
+- Viewport scale bars are now dynamically adjusted to display sensible values. [#5418](https://github.com/scalableminds/webknossos/pull/6034)
 - Added the option to make a segment's ID active via the right-click context menu in the segments list. [#5935](https://github.com/scalableminds/webknossos/pull/6006)
 - Added a button next to the histogram which adapts the contrast and brightness to the currently visible data. [#5961](https://github.com/scalableminds/webknossos/pull/5961)
 - Running uploads can now be cancelled. [#5958](https://github.com/scalableminds/webknossos/pull/5958)
+- Added experimental min-cut feature to split a segment in a volume tracing with two seeds. [#5885](https://github.com/scalableminds/webknossos/pull/5885)
+- Annotations with multiple volume layers can now be uploaded. (Note that merging multiple annotations with multiple volume layers each is not supported.) [#6028](https://github.com/scalableminds/webknossos/pull/6028)
+- Decrease volume annotation download latency by using a different compression level. [#6036](https://github.com/scalableminds/webknossos/pull/6036)
 
 ### Changed
 - Upgraded webpack build tool to v5 and all other webpack related dependencies to their latest version. Enabled persistent caching which speeds up server restarts during development as well as production builds. [#5969](https://github.com/scalableminds/webknossos/pull/5969)
 - Improved stability when quickly volume-annotating large structures. [#6000](https://github.com/scalableminds/webknossos/pull/6000)
 - The front-end API `labelVoxels` returns a promise now which fulfills as soon as the label operation was carried out. [#5955](https://github.com/scalableminds/webknossos/pull/5955)
+- Changed that webKnossos no longer tries to reach a save state where all updates are sent to the backend to be in sync with the frontend when the save is triggered by a timeout. [#5999](https://github.com/scalableminds/webknossos/pull/5999)
 - When changing which layers are visible in an annotation, this setting is persisted in the annotation, so when you share it, viewers will see the same visibility configuration. [#5967](https://github.com/scalableminds/webknossos/pull/5967)
 - Downloading public annotations is now also allowed without being authenticated. [#6001](https://github.com/scalableminds/webknossos/pull/6001)
 - Downloaded volume annotation layers no longer produce zero-byte zipfiles but rather a valid header-only zip file with no contents. [#6022](https://github.com/scalableminds/webknossos/pull/6022)
 - Changed a number of API routes from GET to POST to avoid unwanted side effects. [#6023](https://github.com/scalableminds/webknossos/pull/6023)
 - Removed unused datastore route `checkInbox` (use `checkInboxBlocking` instead). [#6023](https://github.com/scalableminds/webknossos/pull/6023)
+- Migrated to Google Analytics 4. [#6031](https://github.com/scalableminds/webknossos/pull/6031)
 
 ### Fixed
 - Fixed volume-related bugs which could corrupt the volume data in certain scenarios. [#5955](https://github.com/scalableminds/webknossos/pull/5955)
 - Fixed the placeholder resolution computation for anisotropic layers with missing base resolutions. [#5983](https://github.com/scalableminds/webknossos/pull/5983)
 - Fixed a bug where ad-hoc meshes were computed for a mapping, although it was disabled. [#5982](https://github.com/scalableminds/webknossos/pull/5982)
 - Fixed a bug where volume annotation downloads would sometimes contain truncated zips. [#6009](https://github.com/scalableminds/webknossos/pull/6009)
-
+- Fixed a bug where downloaded multi-layer volume annotations would have the wrong data.zip filenames. [#6028](https://github.com/scalableminds/webknossos/pull/6028)
+- Fixed a bug which could cause an error message to appear when saving. [#6052](https://github.com/scalableminds/webknossos/pull/6052)
 
 ### Removed
 
MIGRATIONS.unreleased.md (+1 -0)

@@ -7,6 +7,7 @@ User-facing changes are documented in the [changelog](CHANGELOG.released.md).
 
 ## Unreleased
 [Commits](https://github.com/scalableminds/webknossos/compare/22.02.0...HEAD)
+- The config field `googleAnalytics.trackingId` needs to be changed to [GA4 measurement id](https://support.google.com/analytics/answer/10089681), if used.
 
 ### Postgres Evolutions:
 - [081-annotation-viewconfiguration.sql](conf/evolutions/081-annotation-viewconfiguration.sql)

app/controllers/AnnotationIOController.scala (+102 -44)

@@ -1,6 +1,7 @@
 package controllers
 
 import java.io.{BufferedOutputStream, File, FileOutputStream}
+import java.util.zip.Deflater
 
 import akka.actor.ActorSystem
 import akka.stream.Materializer
@@ -11,7 +12,12 @@ import com.scalableminds.util.tools.{Fox, FoxImplicits, TextUtils}
 import com.scalableminds.webknossos.datastore.SkeletonTracing.{SkeletonTracing, SkeletonTracingOpt, SkeletonTracings}
 import com.scalableminds.webknossos.datastore.VolumeTracing.{VolumeTracing, VolumeTracingOpt, VolumeTracings}
 import com.scalableminds.webknossos.datastore.helpers.ProtoGeometryImplicits
-import com.scalableminds.webknossos.datastore.models.datasource.{AbstractSegmentationLayer, SegmentationLayer}
+import com.scalableminds.webknossos.datastore.models.datasource.{
+  AbstractSegmentationLayer,
+  DataLayerLike,
+  GenericDataSource,
+  SegmentationLayer
+}
 import com.scalableminds.webknossos.tracingstore.tracings.TracingType
 import com.scalableminds.webknossos.tracingstore.tracings.volume.VolumeTracingDefaults
 import com.typesafe.scalalogging.LazyLogging
@@ -20,8 +26,8 @@ import javax.inject.Inject
 import models.analytics.{AnalyticsService, DownloadAnnotationEvent, UploadAnnotationEvent}
 import models.annotation.AnnotationState._
 import models.annotation._
-import models.annotation.nml.NmlResults.NmlParseResult
-import models.annotation.nml.{NmlResults, NmlService, NmlWriter}
+import models.annotation.nml.NmlResults.{NmlParseResult, NmlParseSuccess}
+import models.annotation.nml.{NmlResults, NmlWriter}
 import models.binary.{DataSet, DataSetDAO, DataSetService}
 import models.organization.OrganizationDAO
 import models.project.ProjectDAO
@@ -53,7 +59,7 @@ class AnnotationIOController @Inject()(
     analyticsService: AnalyticsService,
     sil: Silhouette[WkEnv],
     provider: AnnotationInformationProvider,
-    nmlService: NmlService)(implicit ec: ExecutionContext, val materializer: Materializer)
+    annotationUploadService: AnnotationUploadService)(implicit ec: ExecutionContext, val materializer: Materializer)
     extends Controller
     with FoxImplicits
     with ProtoGeometryImplicits
@@ -64,7 +70,10 @@ class AnnotationIOController @Inject()(
     value =
       """Upload NML(s) or ZIP(s) of NML(s) to create a new explorative annotation.
 Expects:
- - As file attachment: any number of NML files or ZIP files containing NMLs, optionally with at most one volume data ZIP referenced from an NML in a ZIP
+ - As file attachment:
+   - Any number of NML files or ZIP files containing NMLs, optionally with volume data ZIPs referenced from an NML in a ZIP
+   - If multiple annotations are uploaded, they are merged into one.
+     - This is not supported if any of the annotations has multiple volume layers.
 - As form parameter: createGroupForEachFile [String] should be one of "true" or "false"
   - If "true": in merged annotation, create tree group wrapping the trees of each file
   - If "false": in merged annotation, rename trees with the respective file name as prefix""",
@@ -86,42 +95,35 @@ Expects:
     val overwritingDataSetName: Option[String] =
       request.body.dataParts.get("datasetName").flatMap(_.headOption)
     val attachedFiles = request.body.files.map(f => (f.ref.path.toFile, f.filename))
-    val parsedFiles = nmlService.extractFromFiles(attachedFiles, useZipName = true, overwritingDataSetName)
-    val tracingsProcessed = nmlService.wrapOrPrefixTrees(parsedFiles.parseResults, shouldCreateGroupForEachFile)
-
-    val parseSuccesses: List[NmlParseResult] = tracingsProcessed.filter(_.succeeded)
+    val parsedFiles =
+      annotationUploadService.extractFromFiles(attachedFiles, useZipName = true, overwritingDataSetName)
+    val parsedFilesWraped =
+      annotationUploadService.wrapOrPrefixTrees(parsedFiles.parseResults, shouldCreateGroupForEachFile)
+    val parseResultsFiltered: List[NmlParseResult] = parsedFilesWraped.filter(_.succeeded)
 
-    if (parseSuccesses.isEmpty) {
+    if (parseResultsFiltered.isEmpty) {
      returnError(parsedFiles)
    } else {
-      val (skeletonTracings, volumeTracingsWithDataLocations) = extractTracings(parseSuccesses)
-      val name = nameForUploaded(parseSuccesses.map(_.fileName))
-      val description = descriptionForNMLs(parseSuccesses.map(_.description))
-
      for {
-        _ <- bool2Fox(skeletonTracings.nonEmpty || volumeTracingsWithDataLocations.nonEmpty) ?~> "nml.file.noFile"
-        dataSet <- findDataSetForUploadedAnnotations(skeletonTracings, volumeTracingsWithDataLocations.map(_._1))
+        parseSuccesses <- Fox.serialCombined(parseResultsFiltered)(r => r.toSuccessBox)
+        name = nameForUploaded(parseResultsFiltered.map(_.fileName))
+        description = descriptionForNMLs(parseResultsFiltered.map(_.description))
+        _ <- assertNonEmpty(parseSuccesses)
+        skeletonTracings = parseSuccesses.flatMap(_.skeletonTracing)
+        // Create a list of volume layers for each uploaded (non-skeleton-only) annotation.
+        // This is what determines the merging strategy for volume layers
+        volumeLayersGroupedRaw = parseSuccesses.map(_.volumeLayers).filter(_.nonEmpty)
+        dataSet <- findDataSetForUploadedAnnotations(skeletonTracings,
+                                                     volumeLayersGroupedRaw.flatten.map(_.tracing))
+        volumeLayersGrouped <- adaptVolumeTracingsToFallbackLayer(volumeLayersGroupedRaw, dataSet)
        tracingStoreClient <- tracingStoreService.clientFor(dataSet)
-        mergedVolumeTracingIdOpt <- Fox.runOptional(volumeTracingsWithDataLocations.headOption) { _ =>
-          for {
-            volumeTracingsAdapted <- Fox.serialCombined(volumeTracingsWithDataLocations)(v =>
-              adaptPropertiesToFallbackLayer(v._1, dataSet))
-            mergedIdOpt <- tracingStoreClient.mergeVolumeTracingsByContents(
-              VolumeTracings(volumeTracingsAdapted.map(v => VolumeTracingOpt(Some(v)))),
-              volumeTracingsWithDataLocations.map(t => parsedFiles.otherFiles.get(t._2).map(_.path.toFile)),
-              persistTracing = true
-            )
-          } yield mergedIdOpt
-        }
-        mergedSkeletonTracingIdOpt <- Fox.runOptional(skeletonTracings.headOption) { _ =>
-          tracingStoreClient.mergeSkeletonTracingsByContents(
-            SkeletonTracings(skeletonTracings.map(t => SkeletonTracingOpt(Some(t)))),
-            persistTracing = true)
-        }
-        annotationLayers <- AnnotationLayer.layersFromIds(mergedSkeletonTracingIdOpt, mergedVolumeTracingIdOpt)
+        mergedVolumeLayers <- mergeAndSaveVolumeLayers(volumeLayersGrouped,
+                                                       tracingStoreClient,
+                                                       parsedFiles.otherFiles)
+        mergedSkeletonLayers <- mergeAndSaveSkeletonLayers(skeletonTracings, tracingStoreClient)
        annotation <- annotationService.createFrom(request.identity,
                                                   dataSet,
-                                                  annotationLayers,
+                                                  mergedSkeletonLayers ::: mergedVolumeLayers,
                                                   AnnotationType.Explorational,
                                                   name,
                                                   description)
@@ -135,6 +137,55 @@ Expects:
     }
   }
 
+  private def mergeAndSaveVolumeLayers(volumeLayersGrouped: Seq[List[UploadedVolumeLayer]],
+                                       client: WKRemoteTracingStoreClient,
+                                       otherFiles: Map[String, TemporaryFile]): Fox[List[AnnotationLayer]] = {
+    if (volumeLayersGrouped.isEmpty) return Fox.successful(List())
+    if (volumeLayersGrouped.length > 1 && volumeLayersGrouped.exists(_.length > 1))
+      return Fox.failure("Cannot merge multiple annotations that each have multiple volume layers.")
+    if (volumeLayersGrouped.length == 1) { // Just one annotation was uploaded, keep its layers separate
+      Fox.serialCombined(volumeLayersGrouped.toList.flatten) { uploadedVolumeLayer =>
+        for {
+          savedTracingId <- client.saveVolumeTracing(uploadedVolumeLayer.tracing,
+                                                     uploadedVolumeLayer.getDataZipFrom(otherFiles))
+        } yield
+          AnnotationLayer(
+            savedTracingId,
+            AnnotationLayerType.Volume,
+            uploadedVolumeLayer.name
+          )
+      }
+    } else { // Multiple annotations with volume layers (but at most one each) was uploaded merge those volume layers into one
+      val uploadedVolumeLayersFlat = volumeLayersGrouped.toList.flatten
+      for {
+        mergedTracingId <- client.mergeVolumeTracingsByContents(
+          VolumeTracings(uploadedVolumeLayersFlat.map(v => VolumeTracingOpt(Some(v.tracing)))),
+          uploadedVolumeLayersFlat.map(v => v.getDataZipFrom(otherFiles)),
+          persistTracing = true
+        )
+      } yield
+        List(
+          AnnotationLayer(
+            mergedTracingId,
+            AnnotationLayerType.Volume,
+            None
+          ))
+    }
+  }
+
+  private def mergeAndSaveSkeletonLayers(skeletonTracings: List[SkeletonTracing],
+                                         tracingStoreClient: WKRemoteTracingStoreClient): Fox[List[AnnotationLayer]] = {
+    if (skeletonTracings.isEmpty) return Fox.successful(List())
+    for {
+      mergedTracingId <- tracingStoreClient.mergeSkeletonTracingsByContents(
+        SkeletonTracings(skeletonTracings.map(t => SkeletonTracingOpt(Some(t)))),
+        persistTracing = true)
+    } yield List(AnnotationLayer(mergedTracingId, AnnotationLayerType.Skeleton, None))
+  }
+
+  private def assertNonEmpty(parseSuccesses: List[NmlParseSuccess]) =
+    bool2Fox(parseSuccesses.exists(p => p.skeletonTracing.nonEmpty || p.volumeLayers.nonEmpty)) ?~> "nml.file.noFile"
+
   private def findDataSetForUploadedAnnotations(
       skeletonTracings: List[SkeletonTracing],
       volumeTracings: List[VolumeTracing])(implicit mp: MessagesProvider, ctx: DBAccessContext): Fox[DataSet] =
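The new `mergeAndSaveVolumeLayers` method above decides between three outcomes: keep the layers of a single uploaded annotation separate, merge the volume layers of several single-layer annotations into one, or reject uploads where several annotations each carry multiple volume layers. As a minimal sketch of just that decision logic (the `MergeStrategy` type and `decide` function are invented for illustration and are not part of the webKnossos codebase):

```scala
// Hypothetical sketch of the merging rules; names below are invented here.
sealed trait MergeStrategy
case object NoVolumeLayers extends MergeStrategy // nothing to save
case object KeepSeparate extends MergeStrategy   // one annotation: keep its layers as-is
case object MergeIntoOne extends MergeStrategy   // several single-layer annotations: merge
case object Reject extends MergeStrategy         // several multi-layer annotations: unsupported

object VolumeMergeRules {
  // layerCountsPerAnnotation: number of volume layers in each uploaded annotation
  def decide(layerCountsPerAnnotation: Seq[Int]): MergeStrategy = {
    // mirrors `.filter(_.nonEmpty)` on volumeLayersGroupedRaw
    val nonEmpty = layerCountsPerAnnotation.filter(_ > 0)
    if (nonEmpty.isEmpty) NoVolumeLayers
    else if (nonEmpty.length > 1 && nonEmpty.exists(_ > 1)) Reject
    else if (nonEmpty.length == 1) KeepSeparate
    else MergeIntoOne
  }
}
```

Note that the rejection case only triggers when more than one annotation is involved; a single annotation with many volume layers is fine, because nothing needs to be merged.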
@@ -173,14 +224,6 @@ Expects:
       Future.successful(JsonBadRequest(Messages("nml.file.noFile")))
   }
 
-  private def extractTracings(
-      parseSuccesses: List[NmlParseResult]): (List[SkeletonTracing], List[(VolumeTracing, String)]) = {
-    val tracings = parseSuccesses.flatMap(_.bothTracingOpts)
-    val skeletons = tracings.flatMap(_._1)
-    val volumes = tracings.flatMap(_._2)
-    (skeletons, volumes)
-  }
-
   private def assertAllOnSameDataSet(skeletons: List[SkeletonTracing], volumes: List[VolumeTracing]): Fox[String] =
     for {
       dataSetName <- volumes.headOption.map(_.dataSetName).orElse(skeletons.headOption.map(_.dataSetName)).toFox
@@ -197,9 +240,23 @@ Expects:
     } yield organizationNames.headOption
   }
 
-  private def adaptPropertiesToFallbackLayer(volumeTracing: VolumeTracing, dataSet: DataSet): Fox[VolumeTracing] =
+  private def adaptVolumeTracingsToFallbackLayer(volumeLayersGrouped: List[List[UploadedVolumeLayer]],
+                                                 dataSet: DataSet): Fox[List[List[UploadedVolumeLayer]]] =
     for {
       dataSource <- dataSetService.dataSourceFor(dataSet).flatMap(_.toUsable)
+      allAdapted <- Fox.serialCombined(volumeLayersGrouped) { volumeLayers =>
+        Fox.serialCombined(volumeLayers) { volumeLayer =>
+          for {
+            tracingAdapted <- adaptPropertiesToFallbackLayer(volumeLayer.tracing, dataSource)
+          } yield volumeLayer.copy(tracing = tracingAdapted)
+        }
+      }
+    } yield allAdapted
+
+  private def adaptPropertiesToFallbackLayer[T <: DataLayerLike](volumeTracing: VolumeTracing,
+                                                                 dataSource: GenericDataSource[T]): Fox[VolumeTracing] =
+    for {
+      _ <- Fox.successful(())
       fallbackLayer = dataSource.dataLayers.flatMap {
         case layer: SegmentationLayer if volumeTracing.fallbackLayer contains layer.name => Some(layer)
         case layer: AbstractSegmentationLayer if volumeTracing.fallbackLayer contains layer.name => Some(layer)
@@ -320,7 +377,8 @@ Expects:
       _ = fetchedVolumeLayers.zipWithIndex.map {
         case (volumeLayer, index) =>
           volumeLayer.volumeDataOpt.foreach { volumeData =>
-            val dataZipName = volumeLayer.volumeDataZipName(index, fetchedSkeletonLayers.length == 1)
+            val dataZipName = volumeLayer.volumeDataZipName(index, fetchedVolumeLayers.length == 1)
+            zipper.stream.setLevel(Deflater.BEST_SPEED)
             zipper.addFileFromBytes(dataZipName, volumeData)
           }
       }
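The `zipper.stream.setLevel(Deflater.BEST_SPEED)` call in the last hunk is what implements the "zip with BEST_SPEED" change (#6036): it trades a slightly larger archive for much faster compression when streaming volume data downloads. A minimal self-contained sketch of the same idea using only the standard `java.util.zip` API (the `FastZip` object and `zipBytes` helper are invented names for this example, not webKnossos code):

```scala
import java.io.ByteArrayOutputStream
import java.util.zip.{Deflater, ZipEntry, ZipOutputStream}

// Illustrative sketch: write one zip entry with the fastest compression level.
object FastZip {
  def zipBytes(entryName: String, data: Array[Byte]): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val zos = new ZipOutputStream(bos)
    zos.setLevel(Deflater.BEST_SPEED) // 1 = fastest; Deflater.BEST_COMPRESSION (9) = smallest
    zos.putNextEntry(new ZipEntry(entryName))
    zos.write(data)
    zos.closeEntry()
    zos.close()
    bos.toByteArray
  }
}
```

The resulting archive is still a perfectly valid zip; only the deflate effort changes, which is why this is a safe latency optimization for download endpoints.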

app/controllers/DataSetController.scala (+2 -2)

@@ -2,7 +2,7 @@ package controllers
 
 import com.mohiva.play.silhouette.api.Silhouette
 import com.scalableminds.util.accesscontext.{DBAccessContext, GlobalAccessContext}
-import com.scalableminds.util.geometry.Point3D
+import com.scalableminds.util.geometry.Vec3Int
 import com.scalableminds.util.mvc.Filter
 import com.scalableminds.util.tools.DefaultConverters._
 import com.scalableminds.util.tools.{Fox, JsonHelper, Math}
@@ -87,7 +87,7 @@ class DataSetController @Inject()(userService: UserService,
           Fox.successful(a)
         case _ =>
           val defaultCenterOpt = dataSet.adminViewConfiguration.flatMap(c =>
-            c.get("position").flatMap(jsValue => JsonHelper.jsResultToOpt(jsValue.validate[Point3D])))
+            c.get("position").flatMap(jsValue => JsonHelper.jsResultToOpt(jsValue.validate[Vec3Int])))
           val defaultZoomOpt = dataSet.adminViewConfiguration.flatMap(c =>
             c.get("zoom").flatMap(jsValue => JsonHelper.jsResultToOpt(jsValue.validate[Double])))
           dataSetService
