Hi,
I just ran a CNN built with Keras on a big training set, and I got weird loss values at each epoch (see below):
66496/511502 [==>...........................] - ETA: 63s - loss: 8.2800
66528/511502 [==>...........................] - ETA: 63s - loss: -204433556137039776.0000
345664/511502 [===================>..........] - ETA: 23s - loss: 8.3174
345696/511502 [===================>..........] - ETA: 23s - loss: -39342531075525840.0000
214080/511502 [===========>..................] - ETA: 41s - loss: 8.3406
214112/511502 [===========>..................] - ETA: 41s - loss: -63520753730220536.0000
How is that possible? The loss suddenly becomes so big, and the value seems to overflow beyond what double-precision encoding should allow?
Is there a way to avoid it?
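
For reference, here is a minimal sketch of the kind of safeguards I am wondering about (the model, shapes, and data below are just placeholders, not my actual code): clipping gradient norms through the optimizer's clipnorm argument and aborting the run with the TerminateOnNaN callback as soon as the loss turns invalid.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense
from keras.optimizers import SGD
from keras.callbacks import TerminateOnNaN

# Dummy data standing in for the real training set (shapes are assumptions).
x_train = np.random.rand(1000, 32, 32, 3).astype("float32")
y_train = np.random.randint(0, 10, size=(1000,))

# Placeholder CNN, not the architecture from this issue.
model = Sequential([
    Conv2D(16, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    Flatten(),
    Dense(10, activation="softmax"),
])

# clipnorm caps the gradient norm so a single bad batch cannot blow up the weights.
model.compile(
    optimizer=SGD(lr=0.01, clipnorm=1.0),
    loss="sparse_categorical_crossentropy",
)

# TerminateOnNaN stops training as soon as the loss becomes NaN.
model.fit(x_train, y_train, batch_size=32, epochs=1,
          callbacks=[TerminateOnNaN()])
```

Is something along those lines the recommended way to guard against this, or is there a better approach?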
Regards,