- Select an optimization problem and an algorithm, and set the parameters in `make_config.py`.
- Make a `config.json` file using the command `python make_config.py`.
- Run numerical experiments using the command `python main.py path/to/config.json`.
- Check the results in the `results/problem_name/problem_parameters/constraints_name/constraints_parameters/algorithm_name/algorithm_parameters` directory.
- You can compare results with a GUI using `python result_show.py`.
You can reproduce the numerical experiments in the paper using the json files in `configs/CNN` and `configs/MLPNET`.
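As a rough illustration, a config can also be written by hand. This is a minimal sketch only: the key names below are assumptions modeled on the components of the results path above, while the actual schema is whatever `make_config.py` produces.

```python
# Sketch of writing a config manually; key names are hypothetical,
# modeled on the results-path components, not the actual schema.
import json

config = {
    "problem_name": "CNN",                    # hypothetical choice
    "problem_parameters": {"criterion": "CrossEntropy"},
    "constraints_name": "unconstrained",      # hypothetical
    "constraints_parameters": {},
    "algorithm_name": "SubspaceNewton",
    "algorithm_parameters": {"reduced_dim": 100, "mode": "random",
                             "alpha": 0.3, "beta": 0.8, "eps": 1e-6},
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
# then: python main.py config.json
```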
Matrix completion (not used):

$$\min_{U,V} \|\mathcal{P}_{\Omega}(UV) - \mathcal{P}_{\Omega}(X)\|_F^2$$

data_name: only "movie", rank: the number of rows of $V$ (the rank of the factorization).
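For reference, this masked objective can be evaluated as in the following sketch (illustrative NumPy only, not the repository's implementation; `mask` encodes $\Omega$):

```python
import numpy as np

def completion_loss(U, V, X, mask):
    """Masked Frobenius loss: || P_Omega(U V) - P_Omega(X) ||_F^2.

    U: (n, rank), V: (rank, m), X: (n, m) observed matrix,
    mask: (n, m) boolean array, True on observed entries (Omega).
    """
    residual = (U @ V - X) * mask  # zero out unobserved entries
    return np.sum(residual ** 2)

# toy usage
rng = np.random.default_rng(0)
n, m, rank = 20, 15, 3
X = rng.standard_normal((n, m))
mask = rng.random((n, m)) < 0.5
U = rng.standard_normal((n, rank))
V = rng.standard_normal((rank, m))
print(completion_loss(U, V, X, mask))
```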
Linear neural network (not used): the network architecture is defined in `utils/select.py`; criterion: type of loss function (only 'CrossEntropy').
Convolutional neural network: the network architecture is defined in `utils/select.py`; criterion: type of loss function (only 'CrossEntropy').
Minimizing the softmax loss function. data_name: "Scotus" or "news20".
Minimizing the logistic loss function. data_name: "rcv1", "news20", or "random".
Minimizing a regularized objective: set problem_name = REGULARIZED + other_problem_name.
$$\{x \mid \|x\|_1 \le s_1,\ \sum_{i} |x_{i+1} - x_i| \le s_2\}$$

threshold1: the value of $s_1$ in the constraint set above.
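A minimal feasibility check for this set (illustrative only, assuming threshold1 corresponds to $s_1$ and a second threshold to $s_2$):

```python
import numpy as np

def is_feasible(x, s1, s2):
    """Check membership in {x : ||x||_1 <= s1, sum_i |x_{i+1} - x_i| <= s2}."""
    return np.abs(x).sum() <= s1 and np.abs(np.diff(x)).sum() <= s2

x = np.array([0.2, -0.1, 0.05])
print(is_feasible(x, s1=1.0, s2=1.0))  # True
```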
- True: use automatic differentiation (very fast, but a memory leak sometimes occurs with the CNN)
- DD: use directional derivatives computed with automatic differentiation (efficiency depends on the dimension, no error)
- FD: use finite differences (efficiency depends on the dimension, error exists)
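The dimension dependence of the FD and DD modes comes from assembling a full gradient out of one directional derivative per coordinate. A sketch of the FD case (DD replaces the difference quotient with an exact directional derivative from autodiff, removing the $O(h)$ truncation error):

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward finite-difference gradient: one extra f evaluation per
    coordinate, so cost grows with the dimension; error is O(h)."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

f = lambda x: 0.5 * np.dot(x, x)  # exact gradient is x
x = np.array([1.0, -2.0, 3.0])
print(fd_gradient(f, x))           # approx [1, -2, 3]
```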
lr: step size, eps: stopping criterion, linesearch: if true, use Armijo line search (see the sketch below).
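Armijo backtracking shrinks a trial step until a sufficient-decrease condition holds. A generic sketch, with alpha and beta playing the role of the line-search parameters listed for the algorithms below:

```python
import numpy as np

def armijo_step(f, x, g, d, alpha=0.3, beta=0.8, t0=1.0, max_iter=50):
    """Backtracking (Armijo) line search: shrink t by beta until
    f(x + t d) <= f(x) + alpha * t * <g, d> for a descent direction d."""
    t, fx = t0, f(x)
    for _ in range(max_iter):
        if f(x + t * d) <= fx + alpha * t * np.dot(g, d):
            break
        t *= beta
    return t
```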
SGD (subspace gradient descent [https://arxiv.org/abs/2003.02684])
lr: step size, eps: stopping criterion, reduced_dim: the size of the random matrix, mode: only "random", linesearch: if true, use Armijo line search (as sketched above).
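One plausible reading of a subspace gradient step, following the cited paper: move along the gradient projected onto the range of a random matrix of width reduced_dim. A sketch only; the exact update and scaling in the repository may differ:

```python
import numpy as np

def subspace_gd_step(grad, x, lr, reduced_dim, rng):
    """One random-subspace gradient step: x <- x - lr * P (P^T grad(x)).
    The 1/sqrt(reduced_dim) normalization is an assumption for this sketch."""
    n = x.size
    P = rng.standard_normal((n, reduced_dim)) / np.sqrt(reduced_dim)
    g = grad(x)
    return x - lr * P @ (P.T @ g)
```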
lr: step size, eps: stopping criterion, restart: if true, use function value restart.
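Function value restart resets the momentum whenever the objective increases. A generic accelerated-gradient sketch (not the repository's exact update rule):

```python
import numpy as np

def agd_with_restart(f, grad, x0, lr, eps, max_iter=10_000):
    """Nesterov-style accelerated gradient descent with function-value
    restart: if f goes up, drop the momentum and restart from x."""
    x, y, k = x0.copy(), x0.copy(), 0
    f_prev = f(x)
    for _ in range(max_iter):
        g = grad(y)
        if np.linalg.norm(g) <= eps:   # eps: stopping criterion
            break
        x_new = y - lr * g
        if f(x_new) > f_prev:          # function value restart
            y, k = x.copy(), 0
            continue
        k += 1
        y = x_new + (k - 1) / (k + 2) * (x_new - x)  # momentum step
        x, f_prev = x_new, f(x_new)
    return x
```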
alpha: parameter of line search, beta: parameter of line search, eps: stopping criterion.
LimitedMemoryBFGS [https://en.wikipedia.org/wiki/Limited-memory_BFGS]

alpha: parameter of line search, beta: parameter of line search, eps: stopping criterion, memory_size: the number of stored past curvature pairs.
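The memory_size pairs enter through the standard L-BFGS two-loop recursion; a sketch:

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """L-BFGS two-loop recursion: compute the search direction -H g from
    the last memory_size pairs s_k = x_{k+1}-x_k, y_k = g_{k+1}-g_k."""
    q = g.copy()
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):  # newest pair first
        rho = 1.0 / np.dot(y, s)
        a = rho * np.dot(s, q)
        alphas.append((a, rho, s, y))
        q -= a * y
    if s_list:  # initial Hessian scaling gamma = s'y / y'y
        s, y = s_list[-1], y_list[-1]
        q *= np.dot(s, y) / np.dot(y, y)
    for a, rho, s, y in reversed(alphas):             # oldest pair first
        b = rho * np.dot(y, q)
        q += (a - b) * s
    return -q
```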
alpha: parameter of line search, beta: parameter of line search, eps: stopping criterion.
alpha: parameter of line search, beta: parameter of line search, eps: stopping criterion, restart: if true, use function value restart.
alpha: parameter of line search, beta: parameter of line search, eps: stopping criterion.
SubspaceNewton [https://arxiv.org/abs/1905.10874]

dim: the dimension of the problem, reduced_dim: the size of the random matrix, mode: the type of random matrix (only "random"), alpha: parameter of line search, beta: parameter of line search, eps: stopping criterion.
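The idea, per the cited paper, is to solve the Newton system only inside a random subspace. A sketch (forming the full Hessian here is for illustration; in practice only the products H @ P are needed):

```python
import numpy as np

def subspace_newton_direction(g, H, reduced_dim, rng):
    """Randomized subspace Newton: draw a random P, solve the reduced
    system (P^T H P) d = -P^T g, and lift the step back as P d."""
    n = g.size
    P = rng.standard_normal((n, reduced_dim))
    d = np.linalg.solve(P.T @ H @ P, -P.T @ g)
    return P @ d
```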
LimitedMemoryNewton [https://link.springer.com/article/10.1007/s12532-022-00219-z]

reduced_dim: the size of the subspace matrix, threshold_eigenvalue: parameter for clipping eigenvalues, mode: the type of random matrix (only "LEESELECTION"), alpha: parameter of line search, beta: parameter of line search, eps: stopping criterion.
SubspaceRNM (subspace regularized Newton method [https://arxiv.org/abs/2209.04170])

reduced_dim: the size of the random matrix; please refer to the paper for the other parameters.
alpha: parameter of line search, beta: parameter of line search, eps: stopping criterion, reduced_dim: the size of the random matrix, matrix_size: the size of the subspace matrix (not random), dim: the dimension of the problem, lower_eigenvalue: lower bound for clipping eigenvalues, upper_eigenvalue: upper bound for clipping eigenvalues, mode: the type of random matrix (only "random").
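Eigenvalue clipping (threshold_eigenvalue above, and lower_eigenvalue/upper_eigenvalue here) keeps the reduced model well conditioned. A generic sketch of the operation:

```python
import numpy as np

def clip_eigenvalues(H, lower, upper):
    """Project the spectrum of a symmetric matrix into [lower, upper]:
    eigendecompose, clip the eigenvalues, and reassemble."""
    w, Q = np.linalg.eigh(H)
    w = np.clip(w, lower, upper)
    return Q @ np.diag(w) @ Q.T
```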