secure_learning.solvers.solver module

Module that defines an abstract solver class, which serves as a base for concrete solvers such as gradient descent or SAG.

class secure_learning.solvers.solver.Solver[source]

Bases: ABC

Abstract base class for a solver, which can be subclassed to define concrete solvers such as gradient descent or SAG.

__init__()[source]

Constructor method.

Note that the relevant class variables are initialized through init_solver.

__str__()[source]

Return the solver name.

Return type:

str

Returns:

Solver name

add_gradient_penalty_function(function)[source]

Add gradient penalty function to the list of gradient penalty functions.

Parameters:

function (DifferentiableRegularizer) – Function that evaluates the gradient penalty function in a given point

Return type:

None
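A minimal sketch of registering a penalty gradient, under the assumption that a DifferentiableRegularizer behaves like a callable mapping the current coefficient vector to the gradient of its penalty term; the names alpha, l2_gradient and solver are illustrative only, and solver stands for an already constructed concrete Solver instance:

    alpha = 0.1  # illustrative regularization strength

    def l2_gradient(coef_):
        # Gradient of the ridge penalty (alpha / 2) * ||coef_||^2 is alpha * coef_.
        # Plain floats are used here only to illustrate the expected call
        # signature; the real regularizer operates on SecureFixedPoint values.
        return [alpha * c for c in coef_]

    solver.add_gradient_penalty_function(l2_gradient)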

compute_aggregated_differentiable_regularizer_penalty(coef_, nr_samples_minibatch, nr_samples_total)[source]

Compute the aggregated penalty from all gradient penalty functions, evaluated for the provided gradient. The penalty is weighted by the ratio of the number of samples that were used for computing the provided gradient to the total number of samples in the training data.

Parameters:
  • coef_ – Unpenalized objective gradient vector

  • nr_samples_minibatch (int) – Number of samples that were used for computing the given gradient

  • nr_samples_total (int) – Total number of samples in the training data

Raises:

MissingGradientFunctionError – No gradient function was initialized

Return type:

List[SecureFixedPoint]

Returns:

Penalized objective gradient vector
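One way to read the weighting described above, writing \(g\) for the provided unpenalized gradient vector and \(P_1, \ldots, P_k\) for the registered gradient penalty functions, and assuming each \(P_i\) is evaluated at the provided vector, is

\[g_{\text{penalized}} = g + \frac{n_{\text{minibatch}}}{n_{\text{total}}} \sum_{i=1}^{k} P_i(g).\]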

evaluate_gradient_function_for_minibatch(X, y, coef_, nr_samples_total, grad_per_sample=False)[source]

Evaluate the gradient function.

Parameters:
  • X (List[List[SecureFixedPoint]]) – Independent data

  • y (List[SecureFixedPoint]) – Dependent data

  • coef_ – Coefficient vector

  • nr_samples_total (int) – Total number of samples in the training data

  • grad_per_sample (bool) – Return gradient per sample if True, return aggregated gradient of all data if False

Raises:

MissingGradientFunctionError – No gradient function was initialized

Return type:

Union[List[SecureFixedPoint], List[List[SecureFixedPoint]]]

Returns:

Value(s) of gradient evaluated with the provided parameters
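A hypothetical usage sketch, assuming that solver is a fully initialized concrete solver whose gradient function has been set, and that X, y and coef_ hold SecureFixedPoint values of the appropriate shapes:

    total_size = 100  # total number of samples in the training data (illustrative)

    # Aggregated gradient over the provided (mini)batch X, y.
    grad = solver.evaluate_gradient_function_for_minibatch(
        X, y, coef_, nr_samples_total=total_size
    )

    # One gradient vector per sample instead of the aggregate.
    grads = solver.evaluate_gradient_function_for_minibatch(
        X, y, coef_, nr_samples_total=total_size, grad_per_sample=True
    )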

evaluate_proximal_function(coef_, eta)[source]

Evaluate the proximal function.

Parameters:
  • coef_ – Coefficient vector

  • eta (Union[float, SecureFixedPoint]) – Learning rate

Raises:

MissingFunctionError – No proximal function was initialized

Return type:

List[SecureFixedPoint]

Returns:

Value of proximal function evaluated with the provided parameters
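A hypothetical usage sketch, assuming that solver has a proximal function set (see has_proximal_function and set_proximal_function below) and that coef_ holds SecureFixedPoint values:

    eta = 0.1  # learning rate used as the proximal step size (illustrative)
    if solver.has_proximal_function:
        coef_ = solver.evaluate_proximal_function(coef_, eta)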

property has_proximal_function: bool

Indicate whether the solver has a proximal function initialized.

Returns:

True if the proximal function has been initialized, False otherwise

init_solver(total_size, num_features, tolerance, sectype, coef_init=None, minibatch_size=None, eta0=None)[source]

Pass configuration to the solver.

Parameters:
  • total_size (int) – Number of samples in the training data.

  • num_features (int) – Number of features in the training data.

  • tolerance (float) – Training stops when the l2 norm of the difference between two subsequent coefficient vectors is less than the provided tolerance.

  • sectype (Type[SecureFixedPoint]) – Requested type of initial coefficients vector.

  • coef_init (Optional[List[SecureFixedPoint]]) – Initial coefficients vector. If None is passed, then initialize the coefficient vector as a vector of zeros.

  • minibatch_size (Optional[int]) – Size of minibatches. Defaults to full batch if None is passed.

  • eta0 (Optional[float]) – Initial learning rate.

Return type:

None
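A configuration sketch, assuming MPyC's SecFxp is used as the secure fixed-point type and that inner_loop_calculation and preprocessing are the only abstract members a concrete subclass has to provide; the MinimalSolver subclass below is illustrative only and not part of the library:

    from mpyc.runtime import mpc

    from secure_learning.solvers.solver import Solver

    class MinimalSolver(Solver):
        name = "minimal"

        def inner_loop_calculation(self, X, y, coef_old, epoch):
            return coef_old               # no-op update, for illustration only

        def preprocessing(self, X_init, y_init):
            return X_init, y_init         # identity preprocessing

    secfxp = mpc.SecFxp(64)               # requested secure fixed-point type
    solver = MinimalSolver()
    solver.init_solver(
        total_size=100,                   # number of samples in the training data
        num_features=3,                   # number of features per sample
        tolerance=0.01,                   # stop once coefficients change by less than 0.01 in l2 norm
        sectype=secfxp,
        coef_init=None,                   # start from the all-zeros vector
        minibatch_size=10,                # size of each minibatch; full batch if None
        eta0=0.1,                         # initial learning rate
    )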

abstract inner_loop_calculation(X, y, coef_old, epoch)[source]

Perform one inner-loop iteration of the solver. Here, the inner loop refers to iterating over the data in batches within a single epoch, as opposed to the outer loop, which passes over the complete data multiple times.

Parameters:
  • X (List[List[SecureFixedPoint]]) – Independent data

  • y (List[SecureFixedPoint]) – Dependent data

  • coef_old (List[SecureFixedPoint]) – Current iterative solution

  • epoch (int) – Number of times that the outer loop has completed

Return type:

List[SecureFixedPoint]

Returns:

Updated iterative solution
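For illustration only, a plain gradient-descent style implementation of this method could look roughly as follows; it assumes that X holds the data for this iteration, that the gradient function has been set, and it uses a fixed learning rate, whereas real implementations typically also handle minibatch selection, learning-rate schedules and proximal steps:

    def inner_loop_calculation(self, X, y, coef_old, epoch):
        # Objective gradient evaluated on the provided data; nr_samples_total
        # is taken as len(X) here for simplicity.
        grad = self.evaluate_gradient_function_for_minibatch(
            X, y, coef_old, nr_samples_total=len(X)
        )
        eta = 0.1  # fixed learning rate, illustrative only
        # Plain gradient-descent update: coef_new = coef_old - eta * grad.
        return [c - eta * g for c, g in zip(coef_old, grad)]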

iterative_data_permutation(matrix)[source]

Permute a matrix that contains both the independent and dependent variables \(X\) and \(y\), respectively.

Parameters:

matrix (Sequence[Sequence[SecureFixedPoint]]) – Matrix containing both the independent and dependent variables \(X\) and \(y\)

Raises:

ValueError – Raised when the data permutator has not been set before this function is called

Return type:

List[List[SecureFixedPoint]]

Returns:

Permuted matrix

name: str = ''

property nr_inner_iters: int

Return the number of iterations that the inner loop should perform.

Raises:

SecureLearnUninitializedSolverError – Raised when the solver has not been fully initialized

Returns:

Number of iterations that the inner loop should perform

static postprocessing(coef_predict)[source]

Postprocess the predicted coefficients.

Parameters:

coef_predict (List[SecureFixedPoint]) – Predicted coefficient vector

Return type:

List[SecureFixedPoint]

Returns:

Postprocessed coefficient vector

abstract preprocessing(X_init, y_init)[source]

Preprocess obtained data.

May include centering and scaling.

Parameters:
  • X_init (List[List[SecureFixedPoint]]) – Independent data

  • y_init (List[SecureFixedPoint]) – Dependent data

Return type:

Tuple[List[List[SecureFixedPoint]], List[SecureFixedPoint]]

Returns:

Preprocessed independent and dependent data

set_gradient_function(function)[source]

Set the gradient function that is used by the solver.

Parameters:

function (GradientFunction) – Gradient function

Return type:

None
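A sketch of providing a gradient function, under the assumption that a GradientFunction is essentially a callable taking the independent data, dependent data and coefficient vector; the least-squares gradient below is written with plain Python arithmetic purely to illustrate the shape of such a callable, whereas the actual function operates on SecureFixedPoint values:

    def least_squares_gradient(X, y, coef_):
        # Assumed signature (X, y, coef_); the actual GradientFunction protocol
        # may differ. Gradient of the mean squared error 1/(2n) * ||X @ coef_ - y||^2.
        n = len(X)
        residuals = [
            sum(x_ij * c_j for x_ij, c_j in zip(x_i, coef_)) - y_i
            for x_i, y_i in zip(X, y)
        ]
        return [
            sum(r_i * x_i[j] for r_i, x_i in zip(residuals, X)) / n
            for j in range(len(coef_))
        ]

    solver.set_gradient_function(least_squares_gradient)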

set_proximal_function(func)[source]

Set the proximal function that is used by the solver.

Parameters:

func (NonDifferentiableRegularizer) – A proximal function

Return type:

None
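A sketch of providing a proximal function, assuming (in line with evaluate_proximal_function above) that a NonDifferentiableRegularizer is a callable taking the coefficient vector and the learning rate; the soft-thresholding operator of the l1 penalty is used for illustration and is written with plain Python arithmetic, whereas the actual regularizer operates on SecureFixedPoint values:

    alpha = 0.1  # illustrative l1 regularization strength

    def l1_proximal(coef_, eta):
        # Soft-thresholding: shrink each coefficient towards zero by eta * alpha
        # and clip at zero, i.e. the proximal operator of the scaled l1 penalty.
        threshold = eta * alpha
        return [
            (1.0 if c >= 0 else -1.0) * max(abs(c) - threshold, 0.0)
            for c in coef_
        ]

    solver.set_proximal_function(l1_proximal)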