secure_learning.models.secure_logistic module

Implementation of the logistic regression model.

class secure_learning.models.secure_logistic.ClassWeightsTypes(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

Class to store whether class weights are equal or balanced.

BALANCED = 2
EQUAL = 1
class secure_learning.models.secure_logistic.ExponentiationTypes(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

Class to store whether exponentiations are approximated or calculated exactly.

APPROX = 2
EXACT = 3
NONE = 1
class secure_learning.models.secure_logistic.Logistic(solver_type=SolverTypes.GD, exponentiation=ExponentiationTypes.EXACT, penalty=PenaltyTypes.NONE, class_weights_type=ClassWeightsTypes.EQUAL, **penalty_args)[source]

Bases: Model

Solver for logistic regression. Optimizes a model with objective function

\[\frac{1}{2 n_{\textrm{samples}}} \sum_{i=1}^{n_{\textrm{samples}}}\left(-(1+y_i) \log(h_w(x_i)) - (1-y_i) \log(1-h_w(x_i))\right)\]

Here,

\[h_w(x) = \frac{1}{1 + e^{-w^T x}}\]

Labels \(y_i\) are assumed to have value \(-1\) or \(1\).

The gradient is given by:

\[g(X, y, w) = \frac{1}{2 n_{\textrm{samples}}} \sum_{i=1}^{n_{\textrm{samples}}} x_i^T \left((2 h_w(x_i) - 1) - y_i\right)\]

See secure_model.py docstrings for more information on solver types and penalties.
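As a plaintext illustration of the objective and gradient above (plain NumPy floats stand in for the library's secure fixed-point types; this is a sketch of the math, not the secure implementation):

```python
import numpy as np

def sigmoid(z):
    # h_w(x) = 1 / (1 + e^{-w^T x})
    return 1.0 / (1.0 + np.exp(-z))

def objective(X, y, w):
    # (1 / (2 n)) * sum_i [ -(1 + y_i) log h_w(x_i) - (1 - y_i) log(1 - h_w(x_i)) ]
    # with labels y_i in {-1, +1}, per the class docstring.
    n = len(y)
    h = sigmoid(X @ w)
    return float(np.sum(-(1 + y) * np.log(h) - (1 - y) * np.log(1 - h)) / (2 * n))

def gradient(X, y, w):
    # (1 / (2 n)) * sum_i x_i^T ((2 h_w(x_i) - 1) - y_i)
    n = len(y)
    h = sigmoid(X @ w)
    return X.T @ ((2 * h - 1) - y) / (2 * n)
```

Differentiating one objective term with respect to \(s = w^T x_i\) gives \(-(1+y_i)(1-h) + (1-y_i)h = (2h-1) - y_i\), which is how the gradient formula follows from the objective.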

__init__(solver_type=SolverTypes.GD, exponentiation=ExponentiationTypes.EXACT, penalty=PenaltyTypes.NONE, class_weights_type=ClassWeightsTypes.EQUAL, **penalty_args)[source]

Constructor method.

Parameters:
  • solver_type (SolverTypes) – Solver type to use (e.g. Gradient Descent aka GD)

  • exponentiation (ExponentiationTypes) – Choose whether exponentiations are approximated or exactly calculated

  • penalty (PenaltyTypes) – Choose whether using L1, L2 or no penalty

  • class_weights_type (ClassWeightsTypes) – Class weights type, either balanced or equal

  • penalty_args (float) – Necessary arguments for chosen penalty

Raises:

ValueError – Raised when exponentiation is of the wrong type.

class_weights = None
class_weights_type = 1
gradient_function(X, y, coef_, grad_per_sample)[source]

Evaluate the gradient from the given parameters.

Parameters:
  • X (List[List[SecureFixedPoint]]) – Independent variables

  • y (List[SecureFixedPoint]) – Dependent variables

  • coef_ – Current coefficient vector

  • grad_per_sample (bool) – Return a list with gradient per sample instead of aggregated (summed) gradient

Return type:

Union[List[List[SecureFixedPoint]], List[SecureFixedPoint]]

Returns:

Gradient of objective function as specified in class docstring, evaluated from the provided parameters

name = 'Logistic regression'
static predict(X, coef_, prob=0.5, **_kwargs)[source]

Predict labels for input data to the classification model. Label \(-1\) is assigned if the predicted probability is less than prob; otherwise, label \(+1\) is assigned.

Parameters:
  • X (List[List[SecureFixedPoint]]) – Input data with all features

  • coef_ – Coefficient vector of the classification model

  • prob (float) – Threshold for labelling. Defaults to \(0.5\).

  • _kwargs (None) – Not used

Return type:

List[SecureFixedPoint]

Returns:

Target labels of classification model
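A plaintext sketch of the thresholding rule described above (NumPy floats stand in for the secure types; this `predict` is an illustrative analogue, not the library's secure implementation):

```python
import numpy as np

def predict(X, coef_, prob=0.5):
    # Predicted probability per sample: sigmoid of the linear score x . w.
    p = 1.0 / (1.0 + np.exp(-(np.asarray(X) @ np.asarray(coef_))))
    # Label -1 where the probability is below `prob`, otherwise +1.
    return np.where(p < prob, -1.0, 1.0)
```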

reveal_class_weights(y)[source]

Reveals class weights.

Parameters:

y (List[SecureFixedPoint]) – Dependent variables

Return type:

Dict[int, float]

Returns:

Revealed class weights, keyed by class label
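For intuition, one common "balanced" weighting scheme (as used by scikit-learn) weights each class inversely to its frequency. Whether secure_learning uses exactly this formula is an assumption; the sketch below only illustrates the general idea behind balanced class weights:

```python
import numpy as np

def balanced_class_weights(y):
    # Hypothetical plaintext analogue: weight(c) = n_samples / (n_classes * count(c)).
    # The actual secure computation and formula may differ.
    y = np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    n = len(y)
    return {int(c): n / (len(classes) * cnt) for c, cnt in zip(classes, counts)}
```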

score(X, y, coef_)[source]

Compute the mean accuracy of the prediction.

Parameters:
  • X (List[List[SecureFixedPoint]]) – Test data.

  • y (List[SecureFixedPoint]) – True label for \(X\).

  • coef_ – Coefficient vector.

Return type:

SecureFixedPoint

Returns:

Score of the model prediction.
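Mean accuracy is the fraction of samples whose predicted label matches the true label. A plaintext sketch (NumPy floats instead of the secure fixed-point types; the default threshold of 0.5 is assumed):

```python
import numpy as np

def score(X, y, coef_):
    # Predict labels in {-1, +1} by thresholding the sigmoid at 0.5,
    # then return the fraction of matches with the true labels y.
    p = 1.0 / (1.0 + np.exp(-(np.asarray(X) @ np.asarray(coef_))))
    labels = np.where(p < 0.5, -1.0, 1.0)
    return float(np.mean(labels == np.asarray(y)))
```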