secure_learning.regularizers module¶
Contains penalty functions.
- class secure_learning.regularizers.BaseRegularizer[source]¶
Bases: object
Base class for regularizers.
- class secure_learning.regularizers.DifferentiableRegularizer[source]¶
Bases: ABC, BaseRegularizer
Differentiable regularizers can be included via their gradient.
- class secure_learning.regularizers.L1Regularizer(alpha)[source]¶
Bases: NonDifferentiableRegularizer
Implementation for L1 regularization: $f(w) = ||w||_1$.
- __call__(weights, eta)[source]¶
Apply the proximal function for the L1 regularizer.
This proximal function is more commonly known as soft-thresholding. It pulls every element of $w$ (the weights vector) closer to zero, in a component-wise fashion. More specifically: $$ \textrm{new\_}w_i = \left\{ \begin{array}{cl} w_i - \nu & : w_i > \nu \\ 0 & : -\nu \le w_i \le \nu \\ w_i + \nu & : w_i < -\nu \end{array} \right. $$
Here, $\nu$ is a value that depends on eta and the regularization constant $\alpha$. A plain-Python sketch of this update follows the parameter list below.
- Parameters:
  - weights (List[SecureFixedPoint]) – Weight vector.
  - eta (Union[float, SecureFixedPoint]) – Learning rate.
- Return type:
  List[SecureFixedPoint]
- Returns:
  Value of the proximal function evaluated with the provided parameters.
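For intuition, here is a minimal plain-Python sketch of the soft-thresholding rule, operating on ordinary floats rather than SecureFixedPoint values. The function name soft_threshold and passing $\nu$ directly (instead of deriving it from eta and $\alpha$) are illustrative assumptions, not part of this module.

```python
def soft_threshold(weights, nu):
    """Pull every component of `weights` toward zero by `nu`, component-wise."""
    # Illustrative sketch; nu is taken directly rather than derived from eta/alpha.
    return [
        w - nu if w > nu          # w_i > nu   ->  w_i - nu
        else w + nu if w < -nu    # w_i < -nu  ->  w_i + nu
        else 0.0                  # otherwise  ->  0
        for w in weights
    ]

# With nu = 0.5, small components collapse to exactly zero:
print(soft_threshold([1.2, 0.3, -0.8], 0.5))  # ~[0.7, 0.0, -0.3] (up to float rounding)
```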
- class secure_learning.regularizers.L2Regularizer(alpha)[source]¶
Bases: DifferentiableRegularizer
Implementation for L2 regularization: $$f(w) = \frac{\alpha}{2} \times ||w||^2_2.$$
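Because L2Regularizer derives from DifferentiableRegularizer, it enters training through its gradient, which for this penalty is $\alpha w$. The following plain-float sketch is illustrative only; the function names are not part of this module.

```python
def l2_penalty(weights, alpha):
    """f(w) = (alpha / 2) * ||w||_2^2."""
    return 0.5 * alpha * sum(w * w for w in weights)

def l2_gradient(weights, alpha):
    """Gradient of the L2 penalty: alpha * w_i per component.
    During training this is added to the gradient of the data loss."""
    return [alpha * w for w in weights]
```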
- class secure_learning.regularizers.NonDifferentiableRegularizer[source]¶
Bases: ABC, BaseRegularizer
Non-differentiable regularizers can be included via a proximal method; see the sketch at the end of this page.
- abstractmethod __call__(weights, eta)[source]¶
Apply the proximal function for this regularizer.
- Parameters:
  - weights (List[SecureFixedPoint]) – Weight vector.
  - eta (Union[float, SecureFixedPoint]) – Learning rate.
- Return type:
  List[SecureFixedPoint]
- Returns:
  Value of the proximal function evaluated with the provided parameters.
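To make the proximal mechanism concrete, the sketch below shows a single ISTA-style proximal-gradient update on plain floats: an ordinary gradient step on the differentiable part of the loss, followed by the regularizer's proximal operator. The helper names are hypothetical, and the choice $\nu = $ eta $\times \alpha$ in the example is one common convention, not necessarily the one this module uses.

```python
def proximal_gradient_step(weights, loss_gradient, prox, eta):
    """One proximal-gradient update (hypothetical helper):
    1) gradient step on the differentiable part of the loss,
    2) proximal step for the non-differentiable regularizer.
    """
    stepped = [w - eta * g for w, g in zip(weights, loss_gradient)]
    return prox(stepped, eta)

# Example with the soft_threshold sketch from above, assuming nu = eta * alpha:
alpha = 0.1
new_w = proximal_gradient_step(
    weights=[1.0, -0.2],
    loss_gradient=[0.5, 0.1],
    prox=lambda ws, eta: soft_threshold(ws, eta * alpha),
    eta=0.5,
)
print(new_w)  # approximately [0.7, -0.2]
```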