
ValueError while finetuning #147

Closed
Zumbalamambo opened this issue Feb 24, 2021 · 7 comments · Fixed by #149
Labels: bug / fix, help wanted

Comments

@Zumbalamambo

I'm using the following code to fine-tune the embedder:


import flash
from flash import download_data
from flash.vision import ImageClassificationData, ImageEmbedder


# 1. Load the data
datamodule = ImageClassificationData.from_folders(
    train_folder="assets/classes/train/",
    valid_folder="assets/classes/val/",
    test_folder="assets/classes/test/",
)

# 2. Build the model
embedder = ImageEmbedder(backbone="resnet18")

# 3. Create the trainer. Run once on data
trainer = flash.Trainer(max_epochs=1)

# 4. Train the model
trainer.finetune(embedder, datamodule=datamodule, strategy="freeze_unfreeze")

# 5. Test the model
trainer.test()

# 6. Save it!
trainer.save_checkpoint("image_embedder_model.pt")

Unfortunately, it throws the following error:

/home/oggie/anaconda3/envs/pose/lib/python3.8/site-packages/pl_bolts/utils/warnings.py:30: UserWarning: You want to use `wandb` which is not installed yet, install it with `pip install wandb`.
  stdout_func(
/home/oggie/anaconda3/envs/pose/lib/python3.8/site-packages/pl_bolts/utils/warnings.py:30: UserWarning: You want to use `gym` which is not installed yet, install it with `pip install gym`.
  stdout_func(
GPU available: True, used: False
TPU available: None, using: 0 TPU cores
/home/oggie/anaconda3/envs/pose/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:49: UserWarning: GPU available but not used. Set the --gpus flag when calling the script.
  warnings.warn(*args, **kwargs)
Traceback (most recent call last):
  File "/home/oggie/Workspace/Python/KapschUI/embedded_train.py", line 20, in <module>
    trainer.finetune(embedder, datamodule=datamodule, strategy="freeze_unfreeze")
  File "/home/oggie/anaconda3/envs/pose/lib/python3.8/site-packages/flash/core/trainer.py", line 90, in finetune
    return super().fit(model, train_dataloader, val_dataloaders, datamodule)
  File "/home/oggie/anaconda3/envs/pose/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 468, in fit
    self.accelerator_backend.setup(model)
  File "/home/oggie/anaconda3/envs/pose/lib/python3.8/site-packages/pytorch_lightning/accelerators/legacy/cpu_accelerator.py", line 49, in setup
    self.setup_optimizers(model)
  File "/home/oggie/anaconda3/envs/pose/lib/python3.8/site-packages/pytorch_lightning/accelerators/legacy/accelerator.py", line 140, in setup_optimizers
    optimizers, lr_schedulers, optimizer_frequencies = self.trainer.init_optimizers(model)
  File "/home/oggie/anaconda3/envs/pose/lib/python3.8/site-packages/pytorch_lightning/trainer/optimizers.py", line 30, in init_optimizers
    optim_conf = model.configure_optimizers()
  File "/home/oggie/anaconda3/envs/pose/lib/python3.8/site-packages/flash/core/model.py", line 153, in configure_optimizers
    return self.optimizer_cls(filter(lambda p: p.requires_grad, self.parameters()), lr=self.learning_rate)
  File "/home/oggie/anaconda3/envs/pose/lib/python3.8/site-packages/torch/optim/sgd.py", line 68, in __init__
    super(SGD, self).__init__(params, defaults)
  File "/home/oggie/anaconda3/envs/pose/lib/python3.8/site-packages/torch/optim/optimizer.py", line 47, in __init__
    raise ValueError("optimizer got an empty parameter list")
ValueError: optimizer got an empty parameter list

Process finished with exit code 1

@Zumbalamambo added the bug / fix and help wanted labels on Feb 24, 2021
@kaushikb11 (Contributor) commented Feb 25, 2021

Hi @Zumbalamambo, this happens because the backbone (resnet18 here) is frozen, so the optimizer receives an empty parameter list when finetuning starts. Two points to note: with the freeze_unfreeze strategy, the backbone is unfrozen at epoch 10 by default, and you can add a trainable head to your ImageEmbedder model by passing an embedding_dim:

embedder = ImageEmbedder(backbone="resnet18", embedding_dim=1024)
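If you need the backbone to unfreeze earlier than epoch 10, you can pass a finetuning callback instead of the string strategy. A minimal sketch, assuming the FreezeUnfreeze callback from flash.core.finetuning is available in your version:

from flash.core.finetuning import FreezeUnfreeze

# unfreeze the backbone after 1 epoch instead of the default 10
trainer.finetune(
    embedder,
    datamodule=datamodule,
    strategy=FreezeUnfreeze(unfreeze_epoch=1),
)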

@Zumbalamambo (Author) commented Feb 27, 2021

@kaushikb11 thank you! Is it possible to increase the input size of the ImageEmbedder?

@kaushikb11 (Contributor)

@Zumbalamambo What do you mean by the input size of the ImageEmbedder? If you mean the input size of the images, we have default preprocessing transforms in place that resize your input image data :)
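For reference, the defaults follow the usual ImageNet-style pipeline. A rough sketch in plain torchvision of what such a resize step looks like (the exact transforms flash applies may differ by version):

from torchvision import transforms

# typical ImageNet-style preprocessing: resize, crop, normalize
preprocess = transforms.Compose([
    transforms.Resize(256),      # resize the shorter side to 256
    transforms.CenterCrop(224),  # crop to the model's expected input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),  # ImageNet stats
])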

@Zumbalamambo (Author)

@kaushikb11 thank you! I have another question: how do you determine the value of embedding_dim? I used vgg16 and set an embedding_dim of 4096, but it threw the following error:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x512 and 4096x4096)

@kaushikb11 (Contributor)

Hi @Zumbalamambo, the embedding head's input size has to match the feature size the backbone actually produces (512 here, per the 4x512 in the error). I have patched a fix in PR #154. Thanks!
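If you want to check what feature size a backbone produces before picking embedding_dim, a quick sketch using plain torchvision (not part of the flash API):

import torch
from torchvision import models

# run a dummy batch through vgg16's feature extractor and inspect the shape
backbone = models.vgg16(pretrained=True).features
with torch.no_grad():
    out = backbone(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 512, 7, 7]) -> 512 feature channels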

@Zumbalamambo (Author) commented Mar 2, 2021

@kaushikb11 thank you! :) May I know how I can apply the latest fix? I have installed lightning-flash using pip.

@Borda closed this as completed in #149 on Mar 3, 2021
@kaushikb11 (Contributor)

Hi @Zumbalamambo, run the commands below :)

git clone https://github.com/PyTorchLightning/lightning-flash.git
cd lightning-flash
# install in editable mode
pip install -e .
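If you would rather stay on pip alone, installing straight from the repository should also work until the fix lands in a release (standard pip VCS syntax):

pip install git+https://github.com/PyTorchLightning/lightning-flash.git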
