Script to merge volume tracing into on-disk segmentation #3431
Conversation
open questions:
tracing_tmpdir_path = extract_tracing_zip(args)

tracing_dataset = wkw.Dataset.open(os.path.join(tracing_tmpdir_path, '1'))
segmentation_dataset = wkw.Dataset.open(os.path.join(args.segmentation_path))
Maybe add '1' to the segmentation path here as well (and don't require the user to provide it). This prevents users from accidentally merging data from different zoom levels.
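A minimal sketch of the suggested change, assuming the mag-1 data lives in a subdirectory named '1' (as it does for the extracted tracing); the helper name is illustrative, not from the PR:

```python
import os

def mag1_path(base_path):
    # Hypothetical helper: append the magnification directory '1' so the
    # user passes the dataset root and cannot accidentally pick a
    # different zoom level than the tracing.
    return os.path.join(base_path, '1')
```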
tracing_dataset = wkw.Dataset.open(os.path.join(tracing_tmpdir_path, '1'))
segmentation_dataset = wkw.Dataset.open(os.path.join(args.segmentation_path))

assert(tracing_dataset.header.num_channels == segmentation_dataset.header.num_channels)
also assert data type?
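A hedged sketch of the extra check. The `voxel_type` field name is an assumption about the wkw header object, not confirmed by this diff:

```python
def check_headers_compatible(tracing_header, segmentation_header):
    # Refuse to merge if the two datasets disagree on channel count or
    # data type (field names assumed from the wkw Python API).
    assert tracing_header.num_channels == segmentation_header.num_channels, \
        "channel count mismatch"
    assert tracing_header.voxel_type == segmentation_header.voxel_type, \
        "voxel type mismatch"
```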
print("  Writing segmentation file back to disk...")
segmentation_dataset.write([0, 0, 0], data)
count = count + 1
maybe use enumerate (http://book.pythontips.com/en/latest/enumerate.html) instead
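The suggestion replaces the manual `count = count + 1` with `enumerate`; a minimal sketch with an illustrative bbox list:

```python
tracing_bboxes = [([0, 0, 0], [32, 32, 32]), ([32, 0, 0], [32, 32, 32])]

for count, tracing_bbox in enumerate(tracing_bboxes):
    # 'count' is maintained by enumerate, no manual increment needed
    print("  Processing bbox {} of {}".format(count + 1, len(tracing_bboxes)))
```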
topleft = list(map(lambda x: x % segmentation_file_len_voxels, tracing_bbox[0]))
shape = tracing_bbox[1]
bottomright = list(map(add, topleft, shape))
# print("  Broadcasting to 0:1 {}:{}, {}:{}, {}:{}".format(topleft[0], bottomright[0], topleft[1], bottomright[1], topleft[2], bottomright[2]))
remove?
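The arithmetic in this hunk maps a global tracing bounding box into coordinates local to one segmentation file; a self-contained sketch with assumed values (the file edge length and bbox are illustrative):

```python
from operator import add

segmentation_file_len_voxels = 1024               # assumed wkw file edge length
tracing_bbox = ([2048, 1030, 512], [10, 10, 10])  # (global top-left, shape)

# Wrap the global top-left into file-local coordinates...
topleft = [c % segmentation_file_len_voxels for c in tracing_bbox[0]]
# ...and add the shape to get the exclusive bottom-right corner.
bottomright = list(map(add, topleft, tracing_bbox[1]))
```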
else:
    extract_data_zip(args.tracing_path)

tracing_tmpdir_path = 'tmp-67X8KZUFP0'
generate randomly? this might otherwise cause problems when merging multiple tracings at the same time
ok, i see you have tmp_filename but don't use it here ;)
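A sketch of the random-name suggestion using the standard library, so concurrent merge runs cannot collide on the same temp directory (the prefix is illustrative):

```python
import tempfile

# mkdtemp creates a uniquely named directory and returns its path, so two
# merge runs started at the same time get different tmp dirs.
tracing_tmpdir_path = tempfile.mkdtemp(prefix='wkw-merge-')
```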
grouped = {}
for tracing_bbox in tracing_bboxes:
    segmentation_bbox = matching_segmentation_bbox(segmentation_bboxes, tracing_bbox)
    str(segmentation_bbox)
unused. remove?
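If the intent was to group the tracing bounding boxes by their matching segmentation file, the stray `str(segmentation_bbox)` was perhaps meant as a dict key; a hedged sketch of that pattern (the pairing itself is assumed to have happened already, this is not the PR's code):

```python
def group_by_segmentation_bbox(pairs):
    # pairs: iterable of (segmentation_bbox, tracing_bbox) tuples, as
    # produced by some earlier matching step (illustrative).
    grouped = {}
    for segmentation_bbox, tracing_bbox in pairs:
        # str() makes the bbox hashable and usable as a dict key
        grouped.setdefault(str(segmentation_bbox), []).append(tracing_bbox)
    return grouped
```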
do we delete the temp folder after we are done?
@fm3 When I test this locally I get:
The script creates a temp folder, though, that contains a
I incorporated your feedback, thanks! :)
I committed a fix for macOS. Please have a look that this does not break anything on your side.
timing is the same for me, I suppose that’s writing the compressed data?
Ok, the timing is still weird. Reading the file should be faster than writing, but not 100 times faster!? Anyhow, not important for this PR, but depending on user feedback we could/should investigate a bit more.
* master:
  - Fix rgb support (#3455)
  - Fix docker uid/gid + binaryData permissions.
  - Persist postgres db (#3428)
  - Script to merge volume tracing into on-disk segmentation (#3431)
  - Hotfix for editing TaskTypes (#3451)
  - fix keyboardjs module (#3450)
  - Fix guessed dataset boundingbox for non-zero-aligned datasets (#3437)
  - voxeliterator now checks if the passed map has elements (#3405)
  - integrate .importjs (#3436)
  - Re-write logic for selecting zoom level and support non-uniform buckets per dimension (#3398)
  - fixing selecting bug and improving style of layout dropdown (#3443)
  - refresh screenshots (#3445)
  - Reduce the free space between viewports in tracing (#3333)
  - Scala linter and formatter (#3357)
  - ignore reported datasets of non-existent organization (#3438)
  - Only provide shortcut for tree search and not for comment search (#3407)
  - Update Datastore+Tracingstore Standalone Deployment Templates (#3424)
  - In yarn refresh-schema, also invalidate Tables.scala (#3430)
  - Remove BaseDirService that watched binaryData symlinks (#3416)
  - Ensure that resolutions array is dense (#3406)
  - Fix bucket-collection related rendering bug (#3409)
Steps to test:
python3 main.py my-volumetracing.zip ../../binaryData/Connectomics_Department/ROI2017_wkw_edit_segmentation/segmentation/1/
Issues: