Hard-coded scaling removed for ProgressiveGAN, GanTrainer, and ProgressiveAE training and model files #209
Conversation
Scaling of any form should be part of the data loader (the get_dataset function): set "normalizer" = True for mean normalization, use the MinMaxIntensityScaling augmentation for [0, 1], or the CustomIntensityScaling augmentation for [-1, 1].
Scaling in post-processing should be done in accordance with the pre-processing.
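A minimal sketch of what these three options amount to, using plain TensorFlow stand-ins rather than the real nobrainer API (the helper names below are local to this sketch, not nobrainer classes):

```python
import tensorflow as tf

def mean_norm(x):
    # "normalizer = True": zero-mean / unit-variance standardization.
    x = tf.cast(x, tf.float32)
    return (x - tf.reduce_mean(x)) / (tf.math.reduce_std(x) + 1e-8)

def minmax_01(x):
    # MinMaxIntensityScaling-style rescaling to [0, 1].
    x = tf.cast(x, tf.float32)
    lo, hi = tf.reduce_min(x), tf.reduce_max(x)
    return (x - lo) / (hi - lo + 1e-8)

def scale_neg1_1(x):
    # CustomIntensityScaling-style rescaling to [-1, 1], the range a tanh generator produces.
    return minmax_01(x) * 2.0 - 1.0

# Exactly one of these would be applied inside the data loader, e.g.:
# dataset = dataset.map(scale_neg1_1)
```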
We should use MinMaxIntensityScaling for [0, 1] or CustomIntensityScaling for [-1, 1] via the "augment" option, or set "normalizer = True" while loading the data. This also removes any need for post-processing in the generate function of ProgressiveGANs.
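To illustrate why the post-processing can go away: once the training volumes are already in [-1, 1], a generator with a tanh output produces samples in that same range, so nothing needs to be rescaled inside the model. A rough sketch (not nobrainer's actual generate method; the function and argument names are illustrative):

```python
import tensorflow as tf

def sample_volumes(generator, latent_size=256, n_samples=1):
    # Illustrative only: with a tanh output layer, the generated volumes
    # already lie in [-1, 1], matching the scaled training data, so no
    # rescaling is applied here.
    latents = tf.random.normal((n_samples, latent_size))
    return generator(latents, training=False)
```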
it's ok to remove instead of commenting. the details will remain in git history.
It seems to be a pytest issue. Have you run pytest locally?
yes and that is fine.
I think that with these changes (if the real images are not scaled), during the training of the discriminator, the generated images will be in range [-1, 1] while the real images will not be in that range. Also, the
@wazeerzulfikar - normalization will be part of the get_dataset step, in the sense that this tool can expect its inputs to have been normalized to a [-1, 1] range, and thus its outputs should also be in that range. any mapping is thus external to this code. you can see an example through the change in the example notebook. @Aakanksha-Rana - i do think this would be a good class for the new API and it would be good to create an example. also, does the generation part of the notebook still work with these changes?
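Since that mapping back is left to the caller, recovering original intensities after sampling might look like the sketch below, assuming the data were min-max scaled to [-1, 1] before training (the function and the orig_min/orig_max values are hypothetical, not part of nobrainer):

```python
import numpy as np

def to_original_range(generated, orig_min, orig_max):
    # Invert the [-1, 1] pre-processing. The same statistics used to scale
    # the training data must be reused here, mirroring "post-processing in
    # accordance with the pre-processing" from the PR description.
    x01 = (np.asarray(generated) + 1.0) / 2.0
    return x01 * (orig_max - orig_min) + orig_min

# e.g. volume = to_original_range(fake_volume, orig_min=0.0, orig_max=255.0)
```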
@wanderine - can you try the branch in this PR for testing?
i'm going to merge this in and we can check it in relation to the new generation api.
I pulled the latest Docker container from Docker Hub and ran a training using the same training script as before, but the problem is still there. Do I need to change something more, redo the creation of TFRecords? Replace some Python file in the container?
I will try that, thanks.
TypeError: in user code:
@wanderine - are you using the enh/api branch as noted in the other issue? the docker containers will not have all these changes.
this generation example also includes scaling the data: https://github.com/neuronets/nobrainer/blob/enh/api/guide/api_train_generation_progressive.ipynb (but it will require installing that branch of nobrainer -
Ah! By default the normalizer is set to standardize the data in get_dataset. Can you just remove the "normalizer=None" on line 71?
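For clarity, the distinction being pointed out, sketched with a local stand-in for the loader rather than the real nobrainer get_dataset signature: passing normalizer=None disables scaling entirely, while omitting the argument falls back to the default standardization.

```python
import tensorflow as tf

def standardize(x):
    # Default behaviour described above: zero-mean / unit-variance scaling.
    x = tf.cast(x, tf.float32)
    return (x - tf.reduce_mean(x)) / (tf.math.reduce_std(x) + 1e-8)

def load(volumes, normalizer=standardize):
    # Hypothetical stand-in for a get_dataset-style loader:
    # normalizer=None keeps raw intensities; the default standardizes them.
    ds = tf.data.Dataset.from_tensor_slices(volumes)
    return ds.map(normalizer) if normalizer is not None else ds
```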
OK, let me know when the Docker containers have the changes, because it seems too complicated to run it otherwise.
@wanderine - could you please let us know why it is complicated to run otherwise? does the system you are working on not support conda or some such environment? if this is something that can be improved with documentation, we would like to do so.
There are too many abstraction layers, I can barely keep track of everything. For the old code I used anaconda, then I switched to singularity, now I should use anaconda again?
@wanderine - thanks for letting us know. i don't know what prompted the switch to singularity. all of nobrainer can be run in a conda-forge environment (not anaconda - as most packages are not systematically updated there). the docker image is only provided as a convenience for those who are using nobrainer as a command line tool. i'm assuming you are running a python script for your training, hence the api refactor should really help you. it will also help us to know whether our abstractions are useful.
Fixes: Problems creating a TFrecord dataset #201