CONESTA solver.
Solve the MGLasso optimization problem with the CONESTA algorithm. Interface to the pylearn-parsimony Python library.
Usage
conesta(
X,
lam1,
lam2,
beta_warm = c(0),
type_ = "initial",
W_ = NULL,
mean_ = FALSE,
max_iter_ = 10000,
prec_ = 0.01
)
Arguments
- X
Data matrix of dimension n x p.
- lam1
Sparsity penalty.
- lam2
Total variation penalty.
- beta_warm
Warm initialization vector.
- type_
Character scalar. By default set to "initial", the version of the problem that does not use weights.
- W_
Weights matrix for total variation penalties.
- mean_
Logical scalar. If TRUE, the optimization function is weighted by the inverse of the sample size.
- max_iter_
Numeric scalar. Maximum number of iterations.
- prec_
Numeric scalar. Tolerance for the stopping criterion (duality gap); see the usage sketch after this argument list.
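A hedged usage sketch with non-default solver settings (the numeric values are illustrative only, not recommended defaults):
# Assuming `X` is an n x p data matrix; mean_, max_iter_ and prec_ are the
# documented solver controls above.
fit <- conesta(X, lam1 = 0.1, lam2 = 0.05, mean_ = TRUE, max_iter_ = 5000, prec_ = 0.001)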
Value
Numeric matrix of size p x p. Row k of the matrix contains the coefficients obtained from the L1-L2 penalized regression of variable k on the others.
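A minimal sketch of reading off the neighborhood of one variable from this matrix (the name `mat`, the index `k` and the zero threshold are illustrative choices, not part of the function's interface):
# `mat` is assumed to be the p x p matrix returned by conesta()
k <- 1
neighbors_k <- setdiff(which(abs(mat[k, ]) > 1e-8), k)  # non-zero coefficients in row k
neighbors_k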
Details
COntinuation with NEsterov smoothing in a Shrinkage-Thresholding Algorithm (CONESTA, Hadj-Selem et al. 2018) <doi:10.1109/TMI.2018.2829802> is an algorithm designed to solve optimization problems that include group-wise penalties. This function is an interface to the Python solver in pylearn-parsimony. The MGLasso problem is first reformulated as a problem of the form $$\arg\min_{\tilde{\beta}} \frac{1}{2} \|Y - \tilde{X} \tilde{\beta}\|_2^2 + \lambda_1 \|\tilde{\beta}\|_1 + \lambda_2 \sum_{i<j} \|\boldsymbol{A}_{ij} \tilde{\beta}\|_2$$ where the vector \(Y\) is the vectorized form of the matrix \(X\).
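As a hedged illustration of this reformulation (the precise construction is defined in the MGLasso reference; the block structure below is stated as an assumption, not quoted from it): $$Y = \operatorname{vec}(X), \qquad \tilde{\beta} = \big(\beta_1^\top, \ldots, \beta_p^\top\big)^\top,$$ with \(\tilde{X}\) a block-diagonal design matrix whose k-th block holds the predictors of the regression of variable k on the others, and each \(\boldsymbol{A}_{ij}\) a sparse matrix that selects (with signs) the entries of \(\tilde{\beta}\) entering the fusion term between the regression vectors of variables i and j.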
See also
mglasso for the MGLasso model estimation.
Examples
# \donttest{
install_conesta()
#> mglasso requires the r-reticulate virtual environment. Attempting to create...
#> Using Python: /usr/bin/python3.8
#> Creating virtual environment 'r-reticulate' ...
#> + '/usr/bin/python3.8' -m venv '/home/runner/.virtualenvs/r-reticulate'
#> Done!
#> Installing packages: 'pip', 'wheel', 'setuptools', 'numpy'
#> + '/home/runner/.virtualenvs/r-reticulate/bin/python' -m pip install --upgrade 'pip' 'wheel' 'setuptools' 'numpy'
#> Virtual environment 'r-reticulate' successfully created.
#> Using virtual environment 'r-reticulate' ...
#> + '/home/runner/.virtualenvs/r-reticulate/bin/python' -m pip install --upgrade 'scipy == 1.7.1' 'scikit-learn' 'six' 'matplotlib'
#> Installing pylearn-parsimony
#> pylearn-parsimony is installed.
n <- 30
K <- 2
p <- 4
rho <- 0.85
blocs <- list()
for (j in 1:K) {
bloc <- matrix(rho, nrow = p/K, ncol = p/K)
for(i in 1:(p/K)) { bloc[i,i] <- 1 }
blocs[[j]] <- bloc
}
mat.covariance <- Matrix::bdiag(blocs)
mat.covariance
#> 4 x 4 sparse Matrix of class "dgCMatrix"
#>
#> [1,] 1.00 0.85 . .
#> [2,] 0.85 1.00 . .
#> [3,] . . 1.00 0.85
#> [4,] . . 0.85 1.00
set.seed(11)
X <- mvtnorm::rmvnorm(n, mean = rep(0,p), sigma = as.matrix(mat.covariance))
X <- scale(X)
res <- conesta(X, 0.1, 0.1)
# }
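A hedged follow-up sketch to the example above: symmetrize the coefficient matrix `res` and list the recovered edges (the averaging rule and the threshold are illustrative conventions, not steps performed by conesta() itself).
sym <- (abs(res) + t(abs(res))) / 2                      # symmetrize |beta_kj| and |beta_jk|
edges <- which(sym > 1e-8 & upper.tri(sym), arr.ind = TRUE)
edges                                                    # variable pairs with a non-zero link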