declearn.optimizer.regularizers.LassoRegularizer

Bases: Regularizer

L1 (Lasso) loss-regularization plug-in.

This regularizer implements the following term:

loss += alpha * l1_norm(weights)

To do so, it applies the following correction to gradients:

grads += alpha * sign(weights)
Source code in declearn/optimizer/regularizers/_base.py
class LassoRegularizer(Regularizer):
    """L1 (Lasso) loss-regularization plug-in.

    This regularizer implements the following term:

        loss += alpha * l1_norm(weights)

    To do so, it applies the following correction to gradients:

        grads += alpha * sign(weights)
    """

    name: ClassVar[str] = "lasso"

    def run(
        self,
        gradients: Vector,
        weights: Vector,
    ) -> Vector:
        # Add the subgradient of the L1 penalty to the raw gradients:
        # d/dw (alpha * |w|) = alpha * sign(w), taken as 0 at w = 0.
        correct = self.alpha * weights.sign()
        return gradients + correct
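
The gradient correction above can be sketched in plain Python, substituting lists of floats for declearn's `Vector` abstraction (an assumption for illustration; the real plug-in operates on framework-backed `Vector` instances):

```python
def sign(x: float) -> float:
    """Sign of x: 1.0 for positive, -1.0 for negative, 0.0 at zero."""
    return float((x > 0) - (x < 0))

def lasso_correct(gradients: list, weights: list, alpha: float) -> list:
    """Apply the L1 (Lasso) correction: grads + alpha * sign(weights)."""
    return [g + alpha * sign(w) for g, w in zip(gradients, weights)]

# Example: alpha = 0.01, one positive, one negative and one zero weight.
weights = [0.5, -2.0, 0.0]
gradients = [0.1, 0.1, 0.1]
corrected = lasso_correct(gradients, weights, alpha=0.01)
# sign(weights) = [1, -1, 0], so the correction is [0.01, -0.01, 0.0]
```

Note that the correction pushes each weight toward zero regardless of its magnitude, which is what gives L1 regularization its sparsity-inducing behavior.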