secure_learning.models.secure_model module

Abstract class for secure-learning models.

class secure_learning.models.secure_model.Model(solver_type=SolverTypes.GD, penalty=PenaltyTypes.NONE, **penalty_args)[source]

Bases: ABC

Abstract secure-learn model class.

__init__(solver_type=SolverTypes.GD, penalty=PenaltyTypes.NONE, **penalty_args)[source]

Constructor method.

Parameters:
  • solver_type (SolverTypes) – Solver type to use, e.g. gradient descent (GD).

  • penalty (PenaltyTypes) – Penalty function (NONE, L1, L2, or ELASTICNET).

  • penalty_args (float) – Coefficient(s) of the given penalty.
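
Since Model is abstract, it is always constructed through a concrete subclass. A minimal hedged sketch, assuming a concrete subclass named MyModel and a penalty coefficient passed via a keyword named alpha (both names are illustrative assumptions, not part of this module):

    from secure_learning.models.secure_model import PenaltyTypes, SolverTypes

    # MyModel: hypothetical concrete subclass of Model, for illustration only.
    model = MyModel(
        solver_type=SolverTypes.GD,  # gradient-descent solver
        penalty=PenaltyTypes.L2,     # L2 regularization
        alpha=0.1,                   # penalty coefficient (keyword name assumed)
    )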

__str__()[source]

String representation of the model.

Return type:

str

Returns:

Human-readable name of the model.

async compute_coef_mpc(X, y, tolerance=0.01, minibatch_size=None, coef_init=None, nr_maxiters=100, eta0=None, print_progress=False, secure_permutations=False)[source]

Train the model, compute and return the model coefficients.

Parameters:
  • X (List[List[SecureFixedPoint]]) – Training data.

  • y (List[SecureFixedPoint]) – Target vector.

  • tolerance (float) – Threshold for convergence.

  • minibatch_size (Optional[int]) – The size of the minibatch.

  • coef_init (Optional[List[SecureFixedPoint]]) – Initial coefficient vector to use.

  • nr_maxiters (int) – Threshold for the number of iterations.

  • eta0 (Optional[float]) – Initial learning rate.

  • print_progress (bool) – Set to True to print progress every few iterations.

  • secure_permutations (bool) – Set to True to perform matrix permutation securely.

Raises:

SecureLearnTypeError – if the training or target data does not consist of secure numbers.

Return type:

List[float]

Returns:

Coefficient vector.
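
A hedged usage sketch: it assumes the samples have already been secret-shared as SecureFixedPoint values via the MPyC runtime (mpyc.runtime.mpc) and that model is an instance of a concrete subclass; the MPyC calls mentioned below are assumptions about the surrounding setup, not part of this module:

    from mpyc.runtime import mpc

    async def train(model, X, y):
        # X: List[List[SecureFixedPoint]], y: List[SecureFixedPoint],
        # e.g. secret-shared beforehand with mpc.SecFxp(64) and mpc.input.
        coef_ = await model.compute_coef_mpc(
            X,
            y,
            tolerance=1e-2,       # convergence threshold
            nr_maxiters=200,      # cap on the number of iterations
            print_progress=True,  # log progress every few iterations
        )
        return coef_  # per the return type above, a list of floats

    # Typically executed inside the MPyC event loop, for example:
    # mpc.run(train(model, X, y))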

async cross_validate(X, y, tolerance=0.01, minibatch_size=None, coef_init=None, nr_maxiters=100, eta0=None, print_progress=False, secure_permutations=False, folds=5, random_state=None, shuffle=False)[source]

Evaluate metrics of the model predictions using cross-validation (CV).

Parameters:
  • X (List[List[SecureFixedPoint]]) – Training data.

  • y (List[SecureFixedPoint]) – Target vector for X.

  • tolerance (float) – Threshold for convergence.

  • minibatch_size (Optional[int]) – The size of the minibatch.

  • coef_init (Optional[List[SecureFixedPoint]]) – Initial coefficient vector to use.

  • nr_maxiters (int) – Threshold for the number of iterations.

  • eta0 (Optional[float]) – Initial learning rate.

  • print_progress (bool) – Set to True to print progress every few iterations.

  • secure_permutations (bool) – Set to True to perform matrix permutation securely.

  • folds (Union[int, List[Tuple[List[int], List[int]]]]) – Folding sets. If set to an integer \(k\), a KFold split (from sklearn.model_selection) with \(k\) folds is used; the default is \(k=5\). It is also possible to pass custom folds as a list of (train indexes, test indexes) tuples. For example, \([([2, 3], [0, 1, 4]), ([0, 1, 3], [2, 4]), ([0, 1, 2], [3, 4])]\) is a 3-fold split of an array of five elements: in the first fold, the elements with indexes \([2, 3]\) form the train set and those with indexes \([0, 1, 4]\) form the test set; in the second fold, indexes \([0, 1, 3]\) train and \([2, 4]\) test; in the third fold, indexes \([0, 1, 2]\) train and \([3, 4]\) test.

  • random_state (Optional[int]) – Controls the randomness of each fold. Pass a fixed value to obtain the same folds each time, for reproducibility.

  • shuffle (bool) – Whether to shuffle the data before splitting into batches.

Return type:

List[float]

Returns:

List of scores of the model prediction.
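
For instance, the custom 3-fold split described above can be passed directly as folds. A hedged sketch inside an async MPC routine, assuming X and y hold five secret-shared samples and model is a concrete subclass instance:

    # Each tuple is (train indexes, test indexes), matching the description above.
    custom_folds = [
        ([2, 3], [0, 1, 4]),
        ([0, 1, 3], [2, 4]),
        ([0, 1, 2], [3, 4]),
    ]
    scores = await model.cross_validate(X, y, folds=custom_folds)
    # `scores` contains one score per fold.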

abstract gradient_function(X, y, coef_, grad_per_sample)[source]

Evaluate the gradient function.

Parameters:
  • X (List[List[SecureFixedPoint]]) – Independent data.

  • y (List[SecureFixedPoint]) – Dependent data.

  • coef_ – Coefficient vector.

  • grad_per_sample (bool) – Return gradient per sample if True, return aggregated gradient of all data if False.

Return type:

Union[List[SecureFixedPoint], List[List[SecureFixedPoint]]]

Returns:

Value(s) of gradient evaluated with the provided parameters.
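
To illustrate the grad_per_sample contract only, a sketch of how a subclass might implement a plain least-squares gradient (this is not the gradient of any model shipped with the package):

    def gradient_function(self, X, y, coef_, grad_per_sample):
        # Residual of each sample: x_i . w - y_i (all values stay secret-shared).
        residuals = [
            sum(x_ij * w_j for x_ij, w_j in zip(x_i, coef_)) - y_i
            for x_i, y_i in zip(X, y)
        ]
        # Gradient contribution of each individual sample: x_i * r_i.
        per_sample = [
            [x_ij * r_i for x_ij in x_i] for x_i, r_i in zip(X, residuals)
        ]
        if grad_per_sample:
            return per_sample  # List[List[SecureFixedPoint]]: one vector per sample
        # Aggregated gradient: element-wise sum over all samples.
        return [sum(column) for column in zip(*per_sample)]  # List[SecureFixedPoint]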

initialize_solver(solver_type, penalty, **penalty_args)[source]

Initialize solver.

Parameters:
  • solver_type (SolverTypes) – Type of the requested solver.

  • penalty (PenaltyTypes) – Type of penalty.

  • penalty_args (float) – Coefficient(s) of the given penalty.

Return type:

None
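
A hedged call sketch; the keyword name alpha for the penalty coefficient is an assumption, so consult the concrete model for the penalty arguments it accepts:

    # Re-initialize the solver of an existing model with an L1 penalty.
    model.initialize_solver(SolverTypes.GD, PenaltyTypes.L1, alpha=0.1)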

name = ''

abstract static predict(X, coef_, **kwargs)[source]

Predict target values for input data.

Parameters:
  • X (List[List[SecureFixedPoint]]) – Input data with all features.

  • coef_ – Coefficient vector of the model.

  • kwargs (Any) – Additional keyword arguments needed for prediction.

Return type:

List[SecureFixedPoint]

Returns:

Target values.
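
A hedged sketch of predicting new secret-shared samples with previously computed coefficients, where MyModel is the hypothetical concrete subclass from the constructor sketch above and the reveal via mpc.output is an assumption about the surrounding MPyC setup:

    # Inside an async MPC routine:
    secure_predictions = MyModel.predict(X_new, coef_)  # List[SecureFixedPoint]
    predictions = await mpc.output(secure_predictions)  # reveal to the parties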

abstract score(X, y, coef_)[source]

Compute the model score.

Parameters:
  • X (List[List[SecureFixedPoint]]) – Test data.

  • y (List[SecureFixedPoint]) – True values for \(X\).

  • coef_ – Coefficient vector.

Return type:

SecureFixedPoint

Returns:

Score of the model prediction.
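
Similarly, a hedged sketch for scoring held-out data; the reveal via mpc.output is again an assumption about the surrounding MPyC setup:

    # Inside an async MPC routine:
    secure_score = model.score(X_test, y_test, coef_)  # SecureFixedPoint
    score = await mpc.output(secure_score)             # reveal the score value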

property solver: Solver

Return the solver used by the current model.

Raises:

SecureLearnUninitializedSolverError – Raised when the solver has not yet been initialized.

Returns:

Solver used by the current model.

class secure_learning.models.secure_model.PenaltyTypes(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

The possible penalty types associated with models.

ELASTICNET = 4
L1 = 2
L2 = 3
NONE = 1
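
For reference, these names correspond to the usual regularization terms added to the training objective; the exact scaling of the coefficients is determined by the concrete model and the penalty arguments passed to it: L1 adds \(\alpha \lVert w \rVert_1\), L2 adds \(\alpha \lVert w \rVert_2^2\), ELASTICNET adds \(\alpha_1 \lVert w \rVert_1 + \alpha_2 \lVert w \rVert_2^2\), and NONE adds no penalty term.
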
class secure_learning.models.secure_model.SolverTypes(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

The possible solver types associated with models.

GD = 1