Replies: 1 comment
Gradient descent is a fundamental optimization algorithm used in machine learning; it is based on calculus. The update rule for the weights is:

$$w_{\text{new}} = w_{\text{old}} - \eta \frac{\partial L}{\partial w}$$

Where:
- $w$ is the weight being updated
- $\eta$ is the learning rate
- $\frac{\partial L}{\partial w}$ is the gradient of the loss $L$ with respect to the weight

*(GIF: gradient-descent convergence)* This GIF visually explains the convergence process in gradient descent. You can see how the algorithm iteratively adjusts the weights to minimize the loss function.
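The update rule can be checked by hand against the numbers in the question. A minimal sketch in plain Python, assuming a learning rate of 0.01 and a gradient of -0.39 (both are illustrative assumptions; in PyTorch the gradient would come from `loss.backward()` and the step from `optimizer.step()`):

```python
def sgd_step(w, grad, lr):
    # Plain SGD update rule: w_new = w_old - lr * grad
    return w - lr * grad

# Starting weight from the video; the gradient and learning rate
# below are assumed values chosen to illustrate the arithmetic.
w_old = 0.3367
grad = -0.39   # assumed; PyTorch computes this via loss.backward()
lr = 0.01      # assumed learning rate

w_new = sgd_step(w_old, grad, lr)
print(round(w_new, 4))  # 0.3406 with these assumed numbers
```

So the new weight is fully determined by the old weight, the learning rate, and the gradient of the loss; nothing random is involved in the step itself (only the initial weight is random).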
Hi all,
I have a question about:
🗺 Chapter 1 – PyTorch Workflow
[5:49:31]
Video 44. Setting up a loss function and optimizer
In the training loop, my starting random weight is 0.3367 and, after 1 epoch, the computer changes the weight to 0.3406.
Can someone tell me whether there is some mathematics behind this, i.e. a formula/algorithm that the computer uses to get from 0.3367 to 0.3406? Or is the computer just picking a random value (0.3406) close to the starting weight (0.3367), in the right direction (so increasing it), in order to minimise the loss?
I think it's just a random number and there is no formula.
Any advice would be greatly appreciated.
Thanks,
Jonathan