
Hard-Coded Scaling is removed for ProgressiveGANs, GanTrainer and ProgressiveAE Training and Model files #209

Merged
13 commits merged into neuronets:master on Mar 11, 2022

Conversation

@Aakanksha-Rana (Member) commented Jan 24, 2022

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Summary

Checklist

  • I have added tests to cover my changes
  • I have updated documentation (if necessary)

Acknowledgment

Aakanksha-Rana and others added 8 commits January 24, 2022 17:58
Scaling of any form should be a part of the data loader (the get_dataset function).
Set "normalizer" = True for mean normalization, use MinMaxIntensityScaling for [0, 1], and use the CustomIntensityScaling augmentation for [-1, 1].
Scaling in post-processing should be done in accordance with pre-processing.
@Aakanksha-Rana (Member Author) commented Jan 24, 2022

We should use MinMaxIntensityScaling for [0, 1] or CustomIntensityScaling for [-1, 1] in the "augment" option, or set "normalizer = True" while loading the data. This also removes any need for post-processing in the generate function of ProgressiveGANs.
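
For reference, a minimal sketch of what those two scaling steps compute; the function names and signatures below are illustrative stand-ins, not the actual nobrainer implementations:

```python
import tensorflow as tf

def minmax_intensity_scaling(x):
    # Rescale a volume to [0, 1], as MinMaxIntensityScaling is described above.
    x = tf.cast(x, tf.float32)
    x_min, x_max = tf.reduce_min(x), tf.reduce_max(x)
    return (x - x_min) / (x_max - x_min)

def custom_intensity_scaling(x, out_min=-1.0, out_max=1.0):
    # Rescale to an arbitrary range, e.g. [-1, 1] to match a tanh generator.
    return minmax_intensity_scaling(x) * (out_max - out_min) + out_min
```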

@Aakanksha-Rana changed the title Hard-Coded Scaling is removed for ProgressiveGANs, BrainSiam and ProgressiveAE Training and Model files Hard-Coded Scaling is removed for ProgressiveGANs, GanTrainer and ProgressiveAE Training and Model files Jan 25, 2022
@satra (Contributor) left a comment

it's ok to remove the code instead of commenting it out; the details will remain in the git history.

@Aakanksha-Rana (Member Author)

@satra @Hoda1394 any idea why this update would fail the checks? I only removed comments.

@Hoda1394 (Contributor) commented Feb 1, 2022

It seems to be a pytest issue. Have you run pytest locally?

@Aakanksha-Rana (Member Author) commented Feb 1, 2022

It seems to be a pytest issue. Have you run pytest locally?

Yes, and it runs fine locally.

@wazeerzulfikar (Contributor) commented Feb 2, 2022

I think that with these changes (if the real images are not scaled), the generated images will be in the range [-1, 1] during discriminator training while the real images will not be. The final activation function should probably be removed then, so as not to constrain the output of the generator.

Also, the generate function will no longer convert the tanh final outputs, so that function cannot be used to visualize/save images. Has the effect of not scaling the data during training also been verified? Or, from the comments above, it sounds like normalization will be handled in a different PR?
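
A quick numeric sketch of the mismatch described above (shapes and intensity ranges here are made up for illustration):

```python
import numpy as np

# The generator's tanh head bounds fake volumes to [-1, 1].
fake = np.tanh(np.random.randn(1, 32, 32, 32, 1)).astype("float32")

# Raw MRI intensities are unbounded and non-negative, e.g. roughly [0, 4000].
real = np.random.uniform(0, 4000, size=(1, 32, 32, 32, 1)).astype("float32")

print(fake.min(), fake.max())  # ~ -1.0, 1.0
print(real.min(), real.max())  # ~ 0.0, 4000.0
# If the real images are not scaled, the discriminator can separate real from
# fake by intensity range alone, which undermines adversarial training.
```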

@satra (Contributor) commented Feb 5, 2022

@wazeerzulfikar - normalization will be part of the get_dataset step, in the sense that this tool can expect its inputs to have been normalized to a [-1, 1] range, and thus its outputs should also be in that range. Any mapping is thus external to this code. You can see an example through the change in the example notebook.

@Aakanksha-Rana - I do think this would be a good class for the new API, and it would be good to create an example. Also, does the generation part of the notebook still work with these changes?
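
In other words, the caller owns both directions of the mapping: scale into [-1, 1] before loading, and invert on the generator's outputs. A minimal sketch assuming a simple per-volume min-max mapping (these helpers are illustrative, not nobrainer API):

```python
import numpy as np

def to_model_range(volume):
    # Map raw intensities into the [-1, 1] range the model expects.
    v_min, v_max = float(volume.min()), float(volume.max())
    scaled = 2.0 * (volume - v_min) / (v_max - v_min) - 1.0
    return scaled, (v_min, v_max)

def from_model_range(output, bounds):
    # Map generator outputs in [-1, 1] back to the original intensity range.
    v_min, v_max = bounds
    return (output + 1.0) / 2.0 * (v_max - v_min) + v_min

raw = np.random.uniform(0, 4000, size=(32, 32, 32)).astype("float32")
scaled, bounds = to_model_range(raw)         # feed `scaled` to training
restored = from_model_range(scaled, bounds)  # invert for visualizing/saving
assert np.allclose(raw, restored, atol=1e-2)
```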

@satra (Contributor) commented Mar 1, 2022

@wanderine - can you try this PR's branch for testing?

@satra (Contributor) commented Mar 11, 2022

I'm going to merge this in, and we can check it in relation to the new generation API.

@satra merged commit 0d9b389 into neuronets:master on Mar 11, 2022
@wanderine

I pulled the latest Docker container from Docker Hub and ran a training using the same training script as before, but the problem is still there. Do I need to change something more? Redo the creation of TFRecords? Replace some Python file in the container?

https://github.com/wanderine/ASSIST/blob/main/3DGANsegmentation/train_multigpu.py

@Aakanksha-Rana (Member Author) commented Mar 13, 2022

https://github.com/wanderine/ASSIST/blob/main/3DGANsegmentation/train_multigpu.py
Have you tried using "normalizer=True" on line 71? It still looks like it is set to None...

@wanderine

I will try that, thanks

@wanderine

TypeError: in user code:

File "/usr/local/lib/python3.8/dist-packages/nobrainer/dataset.py", line 149, in None  *
    lambda x, y: (normalizer(x), y)

TypeError: 'bool' object is not callable

@satra (Contributor) commented Mar 13, 2022

@wanderine - are you using the enh/api branch as noted in the other issue? The Docker containers will not have all these changes.

@satra (Contributor) commented Mar 13, 2022

This generation example also includes scaling the data: https://github.com/neuronets/nobrainer/blob/enh/api/guide/api_train_generation_progressive.ipynb (but it will require installing that branch of nobrainer: pip install https://github.com/neuronets/nobrainer/archive/refs/heads/enh/api.zip)

@Aakanksha-Rana (Member Author)

TypeError: in user code:

File "/usr/local/lib/python3.8/dist-packages/nobrainer/dataset.py", line 149, in None  *
    lambda x, y: (normalizer(x), y)

TypeError: 'bool' object is not callable

Ah! By default, normalizer is set to standardize the data in get_dataset. Can you just remove the "normalizer=None" on line 71?
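
The traceback above shows why True fails: get_dataset applies the argument as a function, lambda x, y: (normalizer(x), y), so normalizer must be a callable (or simply left at its default), not a boolean. A sketch of the failing call versus two working variants; the other get_dataset arguments are elided, and the exact signature may differ from this sketch:

```python
import tensorflow as tf
# from nobrainer.dataset import get_dataset

# Fails: True is not callable, so normalizer(x) raises
# TypeError: 'bool' object is not callable.
# dataset = get_dataset(..., normalizer=True)

# Works: omit the argument entirely so the default standardization is applied.
# dataset = get_dataset(...)

# Works: pass any callable that maps a volume to the desired range.
def scale_to_unit(x):
    x = tf.cast(x, tf.float32)
    return (x - tf.reduce_min(x)) / (tf.reduce_max(x) - tf.reduce_min(x))

# dataset = get_dataset(..., normalizer=scale_to_unit)
```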

@wanderine

OK, let me know when the Docker containers have the changes, because it seems too complicated to run it otherwise.

@satra (Contributor) commented Mar 16, 2022

it seems too complicated to run it otherwise.

@wanderine - could you please let us know why it is complicated to run otherwise? Does the system you are working on not support conda or a similar environment? If this is something that can be improved with documentation, we would like to do so.

@wanderine

There are too many abstraction layers; I can barely keep track of everything. For the old code I used Anaconda, then I switched to Singularity, and now I should use Anaconda again?

@satra (Contributor) commented Mar 16, 2022

@wanderine - thanks for letting us know. I don't know what motivated the switch to Singularity. All of nobrainer can be run in a conda-forge environment (not Anaconda, as most packages are not systematically updated there). The Docker image is only provided as a convenience for those using nobrainer as a command-line tool. I'm assuming you are running a Python script for your training, so the API refactor should really help you. It will also help us know whether our abstractions are useful.

Development

Successfully merging this pull request may close these issues.

Problems creating a TFrecord dataset