secure_learning.regularizers module

Contains penalty functions.

class secure_learning.regularizers.BaseRegularizer[source]

Bases: object

Base class for regularizers.

class secure_learning.regularizers.DifferentiableRegularizer[source]

Bases: ABC, BaseRegularizer

Differentiable regularizers can be included via their gradient.

abstract __call__(coef_)[source]

Evaluate the regularization gradient function.

Parameters:

coef_ – Coefficient vector.

Return type:

List[SecureFixedPoint]

Returns:

Value of regularization gradient function evaluated with the provided parameters.
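
To see how a differentiable regularizer fits into training, consider the following minimal plain-Python sketch of a regularized gradient-descent update. The names loss_gradient, regularizer and eta are illustrative and not part of this module's API; within the protocol, the same arithmetic is carried out on SecureFixedPoint values.

    # Minimal sketch: folding a regularization gradient into a
    # gradient-descent update. `regularizer(coef_)` stands in for
    # DifferentiableRegularizer.__call__; plain floats are used here,
    # whereas the protocol operates on SecureFixedPoint values.
    def gradient_step(coef_, loss_gradient, regularizer, eta):
        penalty_grad = regularizer(coef_)  # gradient of the penalty term
        return [
            w - eta * (g + p)
            for w, g, p in zip(coef_, loss_gradient, penalty_grad)
        ]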

class secure_learning.regularizers.L1Regularizer(alpha)[source]

Bases: NonDifferentiableRegularizer

Implementation for L1 regularization: \(f(w) = \alpha \times ||w||_1\).

__call__(coef_, eta)[source]

Apply the proximal function for the L1 regularizer.

This proximal function is more commonly known as the soft-thresholding operator. It pulls every element of \(w\) (the coefficient vector) closer to zero, in a component-wise fashion. More specifically:

\[\begin{split} \textrm{new\_}w_i = \left\{ \begin{array}{cl} w_i - \nu & : \ w_i > \nu \\ 0 & : \ -\nu \leq w_i \leq \nu \\ w_i + \nu & : \ w_i < -\nu \end{array} \right. \end{split}\]

Here, \(\nu\) is a value that depends on the learning rate eta and the regularization constant \(\alpha\).
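
The rule is easy to verify on plain floats. The sketch below assumes \(\nu = \eta \alpha\), the usual threshold in proximal gradient methods; the library derives \(\nu\) internally, and in the protocol the same arithmetic is performed on SecureFixedPoint values.

    # Plain-float sketch of component-wise soft-thresholding.
    # Assumption: nu = eta * alpha, the usual threshold in proximal
    # gradient descent; the library computes nu internally.
    def soft_threshold(coef_, eta, alpha):
        nu = eta * alpha
        new_coef = []
        for w in coef_:
            if w > nu:
                new_coef.append(w - nu)   # shrink positive entries towards zero
            elif w < -nu:
                new_coef.append(w + nu)   # shrink negative entries towards zero
            else:
                new_coef.append(0.0)      # zero out small entries
        return new_coef

    print(soft_threshold([0.9, -0.05, -0.4], eta=0.5, alpha=0.2))
    # nu = 0.1, so this prints [0.8, 0.0, -0.3]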

Parameters:
  • coef_ – Coefficient vector.

  • eta (Union[float, SecureFixedPoint]) – Learning rate.

Return type:

List[SecureFixedPoint]

Returns:

Value of proximal function evaluated with the provided parameters.

__init__(alpha)[source]

Constructor method.

Parameters:

alpha (float) – Regularization parameter.

class secure_learning.regularizers.L2Regularizer(alpha)[source]

Bases: DifferentiableRegularizer

Implementation for L2 regularization:

\[f(w) = \frac{\alpha}{2} \times ||w||^2_2\]

__call__(coef_)[source]

Apply the initialized L2 regularization.

Parameters:

coef_ – Coefficient vector to be regularized.

Return type:

List[SecureFixedPoint]

Returns:

Value of regularization gradient evaluated with the provided parameters.
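
For intuition: the gradient of \(\frac{\alpha}{2} \times ||w||^2_2\) is \(\alpha w\), so this call simply scales every coefficient by \(\alpha\). A plain-float sketch follows; the protocol performs the same computation on SecureFixedPoint values.

    # Plain-float sketch: the gradient of (alpha / 2) * ||w||_2^2 is
    # alpha * w, i.e. each coefficient is scaled by alpha.
    def l2_gradient(coef_, alpha):
        return [alpha * w for w in coef_]

    print(l2_gradient([0.5, -2.0, 1.0], alpha=0.1))
    # prints [0.05, -0.2, 0.1]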

__init__(alpha)[source]

Constructor method.

Parameters:

alpha (float) – Regularization parameter.

class secure_learning.regularizers.NonDifferentiableRegularizer[source]

Bases: ABC, BaseRegularizer

Non-differentiable regularizers can be included via a proximal method.

abstract __call__(coef_, eta)[source]

Apply the proximal function for this regularizer.

Parameters:
  • coef_ – Coefficient vector.

  • eta (Union[float, SecureFixedPoint]) – Learning rate.

Return type:

List[SecureFixedPoint]

Returns:

Value of proximal function evaluated with the provided parameters.
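
Putting the two regularizer kinds together: in proximal gradient descent, the differentiable part of the objective contributes through its gradient, after which the non-differentiable penalty is applied through its proximal map. Below is a minimal plain-Python sketch; loss_gradient and prox are illustrative stand-ins, and the protocol runs the same steps on SecureFixedPoint values.

    # Sketch of one proximal gradient-descent step. `prox` stands in for
    # NonDifferentiableRegularizer.__call__ (e.g. soft-thresholding for
    # the L1 regularizer); `loss_gradient` is the gradient of the loss.
    def proximal_step(coef_, loss_gradient, prox, eta):
        # 1. Ordinary gradient step on the differentiable (loss) part.
        stepped = [w - eta * g for w, g in zip(coef_, loss_gradient)]
        # 2. Proximal map of the non-differentiable penalty.
        return prox(stepped, eta)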