
RegularizedOptimization


How to cite

If you use RegularizedOptimization.jl in your work, please cite using the format given in CITATION.bib.

Synopsis

This package contains solvers for regularized optimization problems of the form

minₓ f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient and h: ℝⁿ → ℝ is lower semi-continuous and proper. The smooth term f describes the objective to minimize, while the role of the regularizer h is to select a solution with desirable properties: minimum norm, sparsity below a certain level, maximum sparsity, etc. Both f and h can be nonconvex.
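A concrete instance (given here as an illustration, not taken from this README) is the LASSO problem, where f(x) = ½‖Ax − b‖₂² is a smooth least-squares data fit and h(x) = λ‖x‖₁ promotes sparse solutions. Nonconvex choices such as h(x) = λ‖x‖₀, which counts nonzero entries, also fit the framework, since the ℓ₀ penalty is proper and lower semi-continuous.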

Installation

To install the package, press ] at the Julia REPL to enter the package manager and type

pkg> add https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl
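As a minimal usage sketch (not part of this README: the solver name R2, the options type ROSolverOptions, and the exact call signature are assumptions that may differ between versions, so check the documentation for the current API), an ℓ₁-regularized least-squares problem might be set up and solved along these lines:

# Hypothetical sketch; R2 and ROSolverOptions are assumed names, see the docs.
using ADNLPModels, ProximalOperators, RegularizedOptimization

# Smooth term f(x) = ½‖Ax − b‖₂², modeled with automatic differentiation
A, b = randn(20, 10), randn(20)
f = ADNLPModel(x -> 0.5 * sum(abs2, A * x - b), zeros(10))

# Nonsmooth regularizer h(x) = λ‖x‖₁ with λ = 0.1
h = NormL1(0.1)

# Solve with the quadratic-regularization method R2 of reference 1 below
stats = R2(f, h, ROSolverOptions())
println(stats.solution)

In this sketch, ADNLPModels.jl supplies derivatives of f automatically and ProximalOperators.jl supplies the proximal mapping of h; any smooth NLPModel for f and any supported regularizer for h could take their place.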

What is Implemented?

Please refer to the documentation.

Related Software

References

  1. A. Y. Aravkin, R. Baraldi and D. Orban, A Proximal Quasi-Newton Trust-Region Method for Nonsmooth Regularized Optimization, SIAM Journal on Optimization, 32(2), pp. 900–929, 2022. Technical report: https://arxiv.org/abs/2103.15993
  2. R. Baraldi, R. Kumar and A. Aravkin, Basis Pursuit Denoise with Nonsmooth Constraints, IEEE Transactions on Signal Processing, 67(22), pp. 5811–5823, 2019.
@article{aravkin-baraldi-orban-2022,
  author = {Aravkin, Aleksandr Y. and Baraldi, Robert and Orban, Dominique},
  title = {A Proximal Quasi-{N}ewton Trust-Region Method for Nonsmooth Regularized Optimization},
  journal = {SIAM Journal on Optimization},
  volume = {32},
  number = {2},
  pages = {900--929},
  year = {2022},
  doi = {10.1137/21M1409536},
  abstract = { We develop a trust-region method for minimizing the sum of a smooth term \(f\) and a nonsmooth term \(h\), both of which can be nonconvex. Each iteration of our method minimizes a possibly nonconvex model of \(f + h\) in a trust region. The model coincides with \(f + h\) in value and subdifferential at the center. We establish global convergence to a first-order stationary point when \(f\) satisfies a smoothness condition that holds, in particular, when it has a Lipschitz-continuous gradient, and \(h\) is proper and lower semicontinuous. The model of \(h\) is required to be proper, lower semi-continuous and prox-bounded. Under these weak assumptions, we establish a worst-case \(O(1/\epsilon^2)\) iteration complexity bound that matches the best known complexity bound of standard trust-region methods for smooth optimization. We detail a special instance, named TR-PG, in which we use a limited-memory quasi-Newton model of \(f\) and compute a step with the proximal gradient method, resulting in a practical proximal quasi-Newton method. We establish similar convergence properties and complexity bound for a quadratic regularization variant, named R2, and provide an interpretation as a proximal gradient method with adaptive step size for nonconvex problems. R2 may also be used to compute steps inside the trust-region method, resulting in an implementation named TR-R2. We describe our Julia implementations and report numerical results on inverse problems from sparse optimization and signal processing. Both TR-PG and TR-R2 exhibit promising performance and compare favorably with two linesearch proximal quasi-Newton methods based on convex models. }
}
