Many optimization frameworks provide an optional argument, `tol` or `tolerance`, that lets one stop optimization early if the function value, gradient, or parameters do not change much between iterations. This is used in conjunction with a `max_iter` parameter that puts a hard upper bound on the number of iterations the optimizer runs for. The optimizers in the autograd package currently do not support this.
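For reference, a minimal sketch of what I mean (illustrative names only, not autograd's current API):

```python
# Minimal sketch of the requested behaviour: `max_iter` bounds the number of
# iterations, while `tol` allows an early exit when the parameters stop changing.
import autograd.numpy as np
from autograd import grad

def gradient_descent(f, x0, lr=0.1, max_iter=1000, tol=1e-6):
    g = grad(f)
    x = x0
    for i in range(max_iter):                 # hard upper bound on iterations
        x_new = x - lr * g(x)
        if np.linalg.norm(x_new - x) < tol:   # parameters barely changed: stop early
            return x_new
        x = x_new
    return x

# Example: minimize a simple quadratic; converges well before max_iter.
x_opt = gradient_descent(lambda x: np.sum((x - 3.0) ** 2), np.zeros(2))
```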
Are there plans to implement this functionality?
If not, would it be a good idea for me to go ahead and implement it, or do you feel that the added computational cost isn't worth it?
Thanks and great work on the package. Just used this for an assignment and it is really powerful!
More navigable documentation would be great, though.
Two ways this could be implemented:

- `tol` as a user-passed argument, with the actual computation and checking happening on each iteration. This is slower but provides a richer interface to the user.
- The return value of the callback function is checked as an exit criterion. Existing code is unaffected, and library users are free to implement any kind of convergence criterion themselves (see the sketch below).
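A rough sketch of the second option, assuming a callback signature like `callback(x, i, g)` (an assumption for illustration, not necessarily the library's exact current interface):

```python
# Sketch of option 2: the optimizer treats a truthy return value from `callback`
# as a stop signal, so users can plug in any convergence criterion without
# touching the library internals.
import autograd.numpy as np
from autograd import grad as agrad

def sgd(grad, x, callback=None, num_iters=200, step_size=0.1):
    for i in range(num_iters):
        g = grad(x, i)
        if callback and callback(x, i, g):    # exit early if the callback says so
            break
        x = x - step_size * g
    return x

# User-side convergence check: stop once the gradient norm falls below a tolerance.
def stop_on_small_grad(x, i, g, tol=1e-6):
    return np.linalg.norm(g) < tol

# Example usage with a simple quadratic objective.
objective = lambda x, i: np.sum((x - 1.0) ** 2)
x_opt = sgd(agrad(objective), np.zeros(3), callback=stop_on_small_grad)
```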