Flexibly resize input layer of tf.keras.Model upon loading trained model #1084

Merged
roomrys merged 12 commits into develop from liezl/input-scaling on Jan 7, 2023

Conversation

roomrys
Collaborator

@roomrys roomrys commented Dec 15, 2022

Description

Flexibly resize the input layer of the tf.keras.Model to (None, None, None, num_channels) upon loading a trained model. This allows users to run inference with a trained model on videos/images of any size.
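For reference, a minimal sketch of how such a resize can be done for a functional tf.keras.Model by editing its serialized config and rebuilding the model (the function name below is illustrative, not necessarily the helper added by this PR):

import tensorflow as tf

def resize_input_layer(keras_model: tf.keras.Model, num_channels: int) -> tf.keras.Model:
    """Rebuild a trained functional model with a fully dynamic input shape."""
    model_config = keras_model.get_config()
    # Only the serialized input layer's shape needs to change.
    model_config["layers"][0]["config"]["batch_input_shape"] = (
        None,
        None,
        None,
        num_channels,
    )
    new_model = tf.keras.Model.from_config(model_config, custom_objects={})
    # Transfer the trained weights layer by layer.
    for new_layer, old_layer in zip(new_model.layers, keras_model.layers):
        new_layer.set_weights(old_layer.get_weights())
    return new_model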

Types of changes

  • Bugfix
  • New feature
  • Refactor / Code style update (no logical changes)
  • Build / CI changes
  • Documentation Update
  • Other (explain)

Does this address any currently open issues?

Outside contributors checklist

  • Review the guidelines for contributing to this repository
  • Read and sign the CLA and add yourself to the authors list
  • Make sure you are making a pull request against the develop branch (not main). Also, start your branch off of develop
  • Add tests that prove your fix is effective or that your feature works
  • Add necessary documentation (if appropriate)

Thank you for contributing to SLEAP!

❤️

@codecov

codecov bot commented Dec 16, 2022

Codecov Report

Merging #1084 (54aff84) into develop (62a6f1c) will increase coverage by 0.01%.
The diff coverage is 96.66%.

@@             Coverage Diff             @@
##           develop    #1084      +/-   ##
===========================================
+ Coverage    69.28%   69.30%   +0.01%     
===========================================
  Files          130      130              
  Lines        21954    21978      +24     
===========================================
+ Hits         15211    15231      +20     
- Misses        6743     6747       +4     
Impacted Files              Coverage Δ
sleap/nn/data/resizing.py   63.63% <ø> (ø)
sleap/nn/inference.py       79.82% <94.44%> (+0.04%) ⬆️
sleap/nn/utils.py           52.38% <100.00%> (+16.89%) ⬆️
sleap/gui/web.py            82.89% <0.00%> (-2.64%) ⬇️


@roomrys roomrys marked this pull request as ready for review December 16, 2022 00:49
@roomrys roomrys requested a review from talmo December 16, 2022 00:49
Collaborator

@talmo talmo left a comment

LGTM and I added an integration test, but see comments.

Comment on lines 168 to 170
new_model: tf.keras.Model = keras_model.__class__.from_config(
    model_config, custom_objects={}
)  # Change custom objects if necessary

I love this function but my only concern is that this might bypass the constructor that is used for custom tf.keras.Model subclasses. For example, in our InferenceModel subclasses, sometimes we pass more than the underlying tf.keras.Model config.

To be safe, I'd just build this with tf.keras.Model.from_config instead of keras_model.__class__.from_config. This makes it explicit that this function expects a base tf.keras.Model typed object, not a subclass for which this initialization method might not be appropriate.
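A sketch of the suggested construction, using the same variables as the snippet above:

# Build from the base class so a subclass's __init__ signature is never involved.
new_model: tf.keras.Model = tf.keras.Model.from_config(
    model_config, custom_objects={}
)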

Comment on lines 173 to 175
weights = [layer.get_weights() for layer in keras_model.layers[1:]]
for layer, weight in zip(new_model.layers[1:], weights):
    layer.set_weights(weight)

Is it perhaps less general to assume that the input layer is keras_model.layers[0]?

I guess this is baked in above as well, but why do we skip layers[0]? In general it should not have weights anyway, but I'm not seeing what's the reason to skip it.
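For comparison, a version that iterates over every layer without special-casing the input could look like this (sketch; assumes the rebuilt model's layer order matches the original, and an InputLayer simply has no weights to copy):

for new_layer, old_layer in zip(new_model.layers, keras_model.layers):
    new_layer.set_weights(old_layer.get_weights())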

@@ -25,6 +25,7 @@ def find_padding_for_stride(
         A tuple of (pad_bottom, pad_right), integers with the number of pixels that the
         image would need to be padded by to meet the divisibility requirement.
     """
+    # The outer-most modulo handles edge case when image_height % max_stride == 0

Thanks for figuring this out!
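For context, the edge case in question: with a plain remainder, a dimension that is already divisible by max_stride would be padded by a full extra stride. A sketch of the double-modulo form (not necessarily the exact expression used in resizing.py):

# Pad so that (image_height + pad_bottom) % max_stride == 0, and likewise for width.
pad_bottom = (max_stride - image_height % max_stride) % max_stride
pad_right = (max_stride - image_width % max_stride) % max_stride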

-    input_layer_name = model_config["layers"][0]["name"]
-    model_config["layers"][0] = {
-        "name": f"{input_layer_name}",
-        "class_name": "InputLayer",
-        "config": {
-            "batch_input_shape": new_shape,
-            "dtype": "float32",
-            "sparse": False,
-            "name": f"{input_layer_name}",
-        },
-        "inbound_nodes": [],
-    }
-
-    model_config["layers"][1]["inbound_nodes"] = [[[f"{input_layer_name}", 0, 0, {}]]]
-    model_config["input_layers"] = [[f"{input_layer_name}", 0, 0]]
-
-    new_model: tf.keras.Model = keras_model.__class__.from_config(
+    model_config["layers"][0]["config"]["batch_input_shape"] = new_shape
+    new_model: tf.keras.Model = tf.keras.Model.from_config(
Collaborator Author

@roomrys roomrys Dec 23, 2022

Instead of recreating the entire config and specifying the inbound_nodes and input_layers, I just adjusted the batch_input_shape. Both pytest and my own test show that this works as expected (at least on the examples that were tested).

@talmo Do you see anything obviously problematic about this that I might be overlooking?
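A quick sanity check along the lines of the tests mentioned (hypothetical helper name and shapes; frame sizes should still respect the architecture's maximum stride):

resized_model = resize_input_layer(keras_model, num_channels=1)  # helper sketched earlier
# A model trained on fixed-size crops now accepts other frame sizes.
preds = resized_model(tf.zeros([1, 512, 768, 1]))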

@roomrys roomrys requested a review from talmo December 23, 2022 01:41
Collaborator

@talmo talmo left a comment

LGTM!!

@roomrys roomrys merged commit 9ae2941 into develop Jan 7, 2023
@roomrys roomrys deleted the liezl/input-scaling branch January 7, 2023 00:25
roomrys added a commit that referenced this pull request Feb 24, 2023
* GUI Training: Use hidden params from loaded config (#1053)

* Add optional unragging arg to model export (#1054)

* Fix config option to `split_by_inds` (#1060)

* Convert training, validation, and test to Labels object

* Add test for split_by_inds

* Use Labels.extract instead of Labels(List[LabeledFrames])

* Tracking: robust assignment of the best score to an instance (#1062)

* Set max instances for top down models (#1070)

* Add optional unragging arg to model export

* Add option to set max instances for multi-instance models

* Fix test

* Don't create instances during inference if no points were found (#1073)

* Don't create instances during inference if no points were found

* Add points check for all predictors

* Fix single instance predictor logic and test

* Add tests for all predictors

Co-authored-by: roomrys <[email protected]>

* Add one-line fix to VideoWriterSkvideo (#1082)

* Fix parser for sleap-export (#1085)

* Refactor commands to load project as `AppCommand`s (#1098)

* Add working Proof of Concept

* Create command class for loading project

* Split `LoadProjectFile` as a subclass of `LoadLabelsObject`

* Reroute last existing reference

* Remove debugging code

* Flexibly resize input layer of `tf.keras.Model` upon loading trained model (#1084)

* Add initial implementation (auto output stride problematic)

* Add to load_predictor test (error when auto-compute output stride)

* Use output stride from config instead of auto-computing

* Fix output-stride/padding modulo error and do not resize on export

* Fix resizing bug in multi-class predictors

* Non-functional clean-up

* Rename new input layer to original name

* Add inference integration test

* Minimize config surgery, generalize layer iteration

Co-authored-by: Talmo Pereira <[email protected]>

* Add Option to Make Trail Shade Darker/Lighter  (#1103)

* Make trails 60% darker than track color

* Add menu option for shade of trails

* Remove unexpected indent (fat-fingered)

* Create signal that updates plot instead of removing and replotting items (#1134)

* Create signal that updates plot instead of redrawing

* Remove debug code

* Non-functional self-review changes

* Fix symmetric skeletons (via table input) (#1136)

Ensure variable initialized before calling it

* Nix export of tracking results (#1068)

* [io] export tracking results to NIX file

* [io] nix added to export filter only if available

* [nixio] refactor, add scores link data as mtag

* [nixio] speeding up export by chunked writing

* [nixio] rename point score to node score

* [nixio] fix missing dimension descriptor for node scores

* [export analysis] support multiple formats also for bulk export

* [nixio] export centroid, some documentation

* [nixio] fix double dot before filename suffix

* [nixio] fix bug when not all nodes were found

* [nixio] housekeeping

* [nix] add nix analysis output format to convert

* [nix] tiny fix, catch file write error and properly close file

* [inference] main takes optional args. Can be imported to run inference from scripts

* [convert] simplify if else structure and outfile handling for analysis export

* [nix] use pathlib instead of os

* [nix] catch if there are instances with a None frame_idx ...

not sure why this occurred. The nix adaptor cannot save instances
that are not related to a frame.

* [nix] move checks to top of write function

* [nix] use absolute imports

* [nix] use black to reformat

* [commands] revert qtpy import and apply code style

* [convert] use absolute imports, apply code style

* [commands]fix imports

* [inference/nix]fix linter complaint, adjust nix types for scores

* [nix] add test case for nix export format

* [nix] extended testing, some modifications of adaptor

* [skeleton] add __eq__ to Skeleton ...

make Node.name and Node.weight instance variables instead of class
variables

* [nix] add nixio to requirements, remove unused nix_available, ...

allow for non-unique entries in node, track and skeleton. Extend node
map to store the skeleton it is part of

* [nix] make the linter happy

* [Node] force definition of a name

Co-authored-by: Liezl Maree <[email protected]>

* [nix] use getattr for getting grayscale information

Co-authored-by: Liezl Maree <[email protected]>

* [nix] fixes according to review

* [convert] break out of loop upon finding the video

Co-authored-by: Liezl Maree <[email protected]>

* [commands.py] use pathlib instead of splitting filename

Co-authored-by: Liezl Maree <[email protected]>

* [dev requirements] remove linebreak at last line

* [skeleton] revert attribute creation back to original

* [nix] break lines in class documentation

* Ensure all file references are closed

* Make the linter happy

* Add tests for ExportAnalysis and (docs for) sleap-convert

Co-authored-by: Liezl Maree <[email protected]>

* Fix body vs symmetry subgraph filtering (#1142)

Co-authored-by: Liezl Maree <[email protected]>

* Handle changing backbones in training editor GUI (#1140)

Co-authored-by: Liezl Maree <[email protected]>

* Added scaling functionality for both the instances and bounding box.  (#1133)

* Create VisibleBoundingBox class.

* Added instance scaling functionality in addition to bounding box scaling functionality.

* Update sleap/gui/widgets/video.py

Co-authored-by: Talmo Pereira <[email protected]>

* Update sleap/gui/widgets/video.py

Co-authored-by: Talmo Pereira <[email protected]>

* Update sleap/gui/widgets/video.py

Co-authored-by: Talmo Pereira <[email protected]>

* Update sleap/gui/widgets/video.py

Co-authored-by: Talmo Pereira <[email protected]>

* Update sleap/gui/widgets/video.py

Co-authored-by: Talmo Pereira <[email protected]>

* Added new testing for scaling operation and simplified VisibleBoundingBox class code.

* Added type hinting to the scaling padding and removed erroneous bounding rect initialization.

Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: Liezl Maree <[email protected]>

* Add better error message for top down (#1121)

* Add better error message for top down

* Add test for error message

* Raise different error, fix test

* Hotfix for video save #1098 (#1148)

* Add a hotfix for #1098

* WIP: Add test for detecting changes on load

* Finalize change on load test

* Remove unused imports

* Skip test if on windows since files are being used in parallel

* Add central padding to SizeMatcher (#1129)

* add center padding to size matcher

* add test for center padding

* add ensure_float option to inference layer

* reformat resizing and test_resizing

* Remove redundant operation

* Replace existing constants with fixtures

---------

Co-authored-by: Liezl Maree <[email protected]>

* Added MoveNet as an external model reference (#1141)

* add center padding to size matcher

* add test for center padding

* add ensure_float option to inference layer

* reformat resizing and test_resizing

* add MoveNet as an external model inference

* add the movenet to the from_model_paths

* add tests

* add comments to movenet predictor

* add tensorflow_hub to the requirements.txt

* modified default video path

* resolved most of the comments except expanding the predictor

* expanded Predictor.from_model_paths function to include any pre-trained models.

* add test_load_model

* added from_trained_models in class Predictor and modified test_load_model for it.

* modified test_load_model to be more generalized.

* moved pretrained model from Predictor.from_trained_model to Predictor.from_model_paths and added a test for it.

* Fix Predictor.from_model_paths and tests

* Rename load_movenet_model to make_model_movenet

* minor clean-up

* Remove redundant operation

* Replace existing constants with fixtures

* Handle loading movenet models via load_model API

* Clean-up doc strings

---------

Co-authored-by: Liezl Maree <[email protected]>

* Resumable Training (#1130)

* add resume training functionality

* add testing function for resume training functionality

* linting black

* Resumable Training 2 - CLI Options (#1131)

* add cli options for resumable training

* add test for cli resume training

* black linting for cli resumable training

* simplify resumable checkpoint CLI fn to a single CLI arg (#1132)

* simplify resumable checkpoint CLI fn to a single CLI arg

* Adam/resumable training 3 (#1150)

* correct path of labels_path for test_training

* add resume training to gui

* add train from scratch message

* Add finishing touches to resumable training PR (#1150) (#1168)

* Refactor/update 'use trained' and 'resume training' checkbox logic

* Simplify checkbox logic and reset model field when resume training

* Reset checkboxes upon changing config selection

* Handle case for updating TrainingEditor when sender is not a checkbox

* Add complete state space GUI test for checkboxes

* Finish combobox test

* Test that form is reset

* Remove straggling TODO

---------

Co-authored-by: roomrys <[email protected]>
Co-authored-by: jimzers <[email protected]>

---------

Co-authored-by: jimzers <[email protected]>
Co-authored-by: Liezl Maree <[email protected]>

* Return trainer from sleap-train and check that trainer configured correctly

* Add CLI documentation for website

---------

Co-authored-by: jimzers <[email protected]>
Co-authored-by: Liezl Maree <[email protected]>

* Small (final?) revisions and fix test

* Revert changes to fixture

---------

Co-authored-by: jimzers <[email protected]>
Co-authored-by: Liezl Maree <[email protected]>

* GenericTableModel/View improvements (#1163)

* [dataviews] GenericTableModel/View improvements ...

* GenericTableView got a new argument specifying whether the ellipsis for long
cell content should be on the right (old behavior, default) or on the left,
which is useful for long content such as the filenames in the video table.
* GenericTableView uses all the space that is available to the table.
* The model's data function returns the full cell content to be shown as
tooltip text.

* [gui/app] set the ellipsis to be on the left for long table contents

---------

Co-authored-by: Liezl Maree <[email protected]>

* Add Skeleton Templates (#1122)

* Update docs: change 'M1' to 'Apple Silicon' (#1188)

* Bump to 1.3.0a0 (#1189)

---------

Co-authored-by: sheridana <[email protected]>
Co-authored-by: getzze <[email protected]>
Co-authored-by: Talmo Pereira <[email protected]>
Co-authored-by: Jan Grewe <[email protected]>
Co-authored-by: Sean Afshar <[email protected]>
Co-authored-by: Jiaying Hsu <[email protected]>
Co-authored-by: Adam Lee <[email protected]>
Co-authored-by: jimzers <[email protected]>
Co-authored-by: Jan Grewe <[email protected]>
Co-authored-by: Aaditya Prasad <[email protected]>