Algorithms
The first argument of an Algorithm's constructor is an SModel. This ensures the algorithm's storage buffers are the correct size.
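For example (a minimal sketch using ProxGrad and the fakedata helper from the examples below), the algorithm is built from the model so its buffers match the model's coefficient vector:
using SparseRegression

x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)  # 1000 observations, 10 predictors
s = SModel(x, y, L2DistLoss())  # model holds the data and a length-10 coefficient vector
alg = ProxGrad(s)               # buffers are sized to match the coefficients of s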
ProxGrad
SparseRegression.ProxGrad — Type
ProxGrad(model, step = 1.0)
Proximal gradient method with step size step. Works for any loss and any penalty with a prox method.
Example
x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss())
strat = strategy(MaxIter(50), ProxGrad(s))
learn!(s, strat)
Fista
SparseRegression.Fista — Type
Fista(model, step = 1.0)
Accelerated proximal gradient method. Works for any loss and any penalty with a prox method.
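Example
A sketch following the ProxGrad example above; it assumes L1Penalty (re-exported from PenaltyFunctions) as an example of a penalty with a prox method, and the SModel(x, y, loss, penalty) constructor form:
using SparseRegression

x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss(), L1Penalty())  # L1Penalty assumed here as a prox-able penalty
strat = strategy(MaxIter(50), Fista(s))
learn!(s, strat)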
AdaptiveProxGrad
SparseRegression.AdaptiveProxGrad — Type
AdaptiveProxGrad(s, divisor = 1.5)
Proximal gradient method with adaptive, element-wise step sizes. Every time the sign of a coefficient switches, the step size for that coefficient is divided by divisor.
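Example
A minimal sketch following the same pattern as the other iterative algorithms; the divisor shown is the default:
using SparseRegression

x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss())
strat = strategy(MaxIter(50), AdaptiveProxGrad(s, 1.5))  # each sign change divides that coefficient's step by 1.5
learn!(s, strat)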
GradientDescent
SparseRegression.GradientDescent — Type
GradientDescent(model, step = 1.0)
Gradient descent. Works for any loss and any penalty.
Example
x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss())
strat = strategy(MaxIter(50), GradientDescent(s))
learn!(s, strat)
Sweep
SparseRegression.Sweep — Type
Sweep(model)
Linear/ridge regression via the sweep operator. Works for (scaled) L2DistLoss with NoPenalty or L2Penalty. The Sweep algorithm has a closed-form solution and is complete after one iteration, so it doesn't need additional learning strategies such as MaxIter, Converged, etc.
Example
x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss())
learn!(s, Sweep(s))
LinRegCholesky
SparseRegression.LinRegCholesky — Type
LinRegCholesky(model)
Linear/ridge regression via Cholesky decomposition. Works for (scaled) L2DistLoss with NoPenalty or L2Penalty. The LinRegCholesky algorithm has a closed-form solution and is complete after one iteration, so it doesn't need additional learning strategies such as MaxIter, Converged, etc.
Example
x, y, β = SparseRegression.fakedata(L2DistLoss(), 1000, 10)
s = SModel(x, y, L2DistLoss())
learn!(s, LinRegCholesky(s))