Algorithms

The first argument of each algorithm's constructor is an SModel. This ensures the algorithm's storage buffers are the correct size.
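For example, a minimal sketch of building an SModel and handing it to an algorithm constructor (the explicit L1Penalty argument is an assumption based on SModel's variable-argument constructor, which accepts a loss and a penalty alongside the data; the sizes and MaxIter(20) are arbitrary):

```julia
using SparseRegression

# Simulated regression data: 1000 observations, 10 predictors.
x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)

# The SModel carries data, loss, and penalty; passing it to the
# algorithm constructor lets ProxGrad size its buffers to match.
s = SModel(x, y, L2DistLoss(), L1Penalty())
alg = ProxGrad(s)
learn!(s, strategy(MaxIter(20), alg))
```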

ProxGrad

SparseRegression.ProxGrad — Type
ProxGrad(model, step = 1.0)

Proximal gradient method with step size step. Works for any loss and any penalty with a prox method.

Example

x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss())
strat = strategy(MaxIter(50), ProxGrad(s))
learn!(s, strat)

Fista

SparseRegression.Fista — Type
Fista(model, step = 1.0)

Accelerated proximal gradient method. Works for any loss and any penalty with a prox method.
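Example (a sketch following the same pattern as the ProxGrad example above; the data sizes and MaxIter(50) are illustrative, not requirements):

```julia
using SparseRegression

# Simulated regression data: 1000 observations, 10 predictors.
x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)

# Fista takes the model so it can allocate its extrapolation buffers.
s = SModel(x, y, L2DistLoss())
strat = strategy(MaxIter(50), Fista(s))
learn!(s, strat)
```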


AdaptiveProxGrad

SparseRegression.AdaptiveProxGrad — Type
AdaptiveProxGrad(s, divisor = 1.5, init = 1.0)

Proximal gradient method with adaptive step sizes. AdaptiveProxGrad uses element-wise learning rates. Every time the sign of a coefficient switches, the step size for that coefficient is divided by divisor.
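Example (a sketch using the default divisor and initial step, written out explicitly; the data sizes and MaxIter(50) are illustrative):

```julia
using SparseRegression

x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss())

# Per-coefficient step sizes start at 1.0 and are divided by 1.5
# whenever the corresponding coefficient changes sign.
strat = strategy(MaxIter(50), AdaptiveProxGrad(s, 1.5, 1.0))
learn!(s, strat)
```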


GradientDescent

SparseRegression.GradientDescent — Type
GradientDescent(model, step = 1.0)

Gradient Descent. Works for any loss and any penalty.

Example

x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss())
strat = strategy(MaxIter(50), GradientDescent(s))
learn!(s, strat)

Sweep

SparseRegression.Sweep — Type
Sweep(model)

Linear/ridge regression via the sweep operator. Works for (scaled) L2DistLoss with NoPenalty or L2Penalty. The Sweep algorithm has a closed-form solution and is complete after one iteration, so it doesn't need additional learning strategies such as MaxIter, Converged, etc.

Example

x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss())
learn!(s, Sweep(s))

LinRegCholesky

SparseRegression.LinRegCholesky — Type
LinRegCholesky(model)

Linear/ridge regression via Cholesky decomposition. Works for (scaled) L2DistLoss with NoPenalty or L2Penalty. The LinRegCholesky algorithm has a closed-form solution and is complete after one iteration, so it doesn't need additional learning strategies such as MaxIter, Converged, etc.

Example

x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss())
learn!(s, LinRegCholesky(s))

LineSearch

SparseRegression.LineSearch — Type
LineSearch(algorithm)

Use a line search in the update! of algorithm to choose the step size. Currently, ProxGrad, Fista, and GradientDescent are supported.

Example

x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss())
strat = strategy(MaxIter(50), LineSearch(ProxGrad(s)))
learn!(s, strat)