Manual vs Automatic Optimization Discrepancy in VAE Models #18571

Unanswered

TTK95 asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
I've encountered an intriguing phenomenon while working with Variational Autoencoder (VAE) models trained with PyTorch Lightning: the loss curves differ significantly when I use manual optimization compared to automatic optimization.
To provide some context, I have two VAE models, and I've attached a screenshot showing the distinct loss curves for the two cases. I'd like to understand the underlying mechanics driving this difference.
Here's what I'd like to discuss:
My opt_step code snippet in the training_step function:

```python
vae_opt.zero_grad()
self.manual_backward(loss)
vae_opt.step()
```
The loss is calculated in exactly the same way in both models; a minimal sketch of how I understand the two setups is below.
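For completeness, here is a minimal, self-contained sketch of the two configurations as I understand them. The layer sizes, the MSE-only loss, and the Adam settings are placeholders rather than my actual VAE; the part I care about is the control flow around `zero_grad` / `manual_backward` / `step` versus simply returning the loss.

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class ManualVAE(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Tell Lightning not to run its own zero_grad/backward/step loop;
        # without this flag the manual calls below would conflict with the
        # automatic ones.
        self.automatic_optimization = False
        # Placeholder encoder/decoder, not the real VAE architecture.
        self.encoder = torch.nn.Linear(784, 32)
        self.decoder = torch.nn.Linear(32, 784)

    def training_step(self, batch, batch_idx):
        x, _ = batch
        x = x.view(x.size(0), -1)
        x_hat = self.decoder(self.encoder(x))
        # Placeholder loss; my real code combines reconstruction and KL terms,
        # identically in both models.
        loss = F.mse_loss(x_hat, x)

        # Manual optimization: fetch the optimizer returned by
        # configure_optimizers and step it explicitly.
        vae_opt = self.optimizers()
        vae_opt.zero_grad()
        self.manual_backward(loss)
        vae_opt.step()

        self.log("train_loss", loss, prog_bar=True)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```

The automatic-optimization variant is identical apart from leaving the flag at its default and returning the loss:

```python
class AutomaticVAE(ManualVAE):
    def __init__(self):
        super().__init__()
        # Let Lightning drive zero_grad/backward/step itself.
        self.automatic_optimization = True

    def training_step(self, batch, batch_idx):
        x, _ = batch
        x = x.view(x.size(0), -1)
        x_hat = self.decoder(self.encoder(x))
        loss = F.mse_loss(x_hat, x)
        self.log("train_loss", loss, prog_bar=True)
        # Returning the loss is all that is needed; Lightning calls
        # backward() and optimizer.step() on my behalf.
        return loss
```

What I want to rule out is a Trainer-side difference: as far as I understand, with automatic optimization the Trainer applies settings such as accumulate_grad_batches and gradient_clip_val on my behalf, whereas with manual optimization gradient accumulation and clipping have to be implemented by hand inside training_step, so either of those being active would change the effective updates even though the loss code is identical.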
Thank you in advance for your time and assistance.
Replies: 1 comment

hi, I'm wondering if you were able to solve this issue?

0 replies