Module propinfer.experiment
Classes
class Experiment(generator, label_col, model, n_targets, n_shadows, hyperparams, n_queries=1024, n_classes=2, range=None)
Object representing an experiment, based on its data generator and model pair
Args
generator : Generator - the data abstraction used for this experiment
model : Model.__class__ - a Model class that represents the model to be used
n_targets : int - the total number of target models
n_shadows : int - the total number of shadow models
hyperparams : dict or DictConfig - dictionary containing every useful hyperparameter for the Model; if a list is provided for some hyperparameter(s), a grid search is performed over all given options (except for the keyword layers)
n_queries : int - the number of queries used in the scope of grey- and black-box attacks; must be strictly greater than n_targets
n_classes : int - the number of classes considered for property inference; if 1, a regression is performed
range : tuple - the range of values accepted for regression tasks (required for regression, ignored for classification). An iterable of multiple ranges may be passed to perform multi-variable property inference regression, in which case the variable values are passed to the Generator as a list
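A minimal construction sketch under stated assumptions: MyGenerator and MyModel stand for hypothetical user-defined subclasses of propinfer's Generator and Model abstractions, and the argument values are illustrative only.

    from propinfer.experiment import Experiment
    from my_project import MyGenerator, MyModel  # hypothetical user-defined subclasses

    exp = Experiment(
        generator=MyGenerator(),   # Generator instance providing the datasets
        label_col='label',         # column of the generated data used as label (assumed name)
        model=MyModel,             # Model class itself, not an instance
        n_targets=256,             # number of target models to fit
        n_shadows=1024,            # number of shadow models to fit
        hyperparams={'epochs': 20, 'learning_rate': [1e-3, 1e-4]},  # the list triggers a grid search
        n_queries=2048,            # strictly greater than n_targets
        n_classes=2                # binary property inference
    )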
Methods
def run_blackbox(self, n_outputs=1)
Runs a black-box attack on the target models, using the results of random queries as features for a meta-classifier
Args
n_outputs : int - number of attack results to output, using multiple random subsets of the shadow models
Returns: Attack accuracy on target models for the classification task, or mean absolute error for the regression task
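A sketch of the full black-box pipeline, assuming exp is the Experiment constructed above; target and shadow models must be fitted before the attack is run.

    exp.run_targets()             # fit the target models
    exp.run_shadows()             # fit the shadow models with the same class and hyperparameters
    result = exp.run_blackbox()   # accuracy (classification) or mean absolute error (regression)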
def run_loss_test(self)
Runs a loss test attack on the target models. Works only for binary property inference against a classifier model.
Returns: Attack accuracy on target models
def run_shadows(self, model=None, hyperparams=None)
Create and fit shadow models
Args
model : Model.__class__ - a Model class that represents the model to be used; if None, the same class as the target models is used
hyperparams : dict or DictConfig - dictionary containing every useful hyperparameter for the Model; hyperparameters of shadow models are NOT grid-optimised. If None, the same hyperparameters as the target models are used.
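The sketch below trains the shadow models with a different, cheaper architecture than the targets; ShadowModel is a hypothetical Model subclass and the hyperparameters are illustrative.

    exp.run_targets()
    exp.run_shadows(model=ShadowModel,          # hypothetical lighter Model subclass
                    hyperparams={'epochs': 5})  # used as-is, never grid-optimised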
def run_targets(self)
Create and fit target models
def run_threshold_test(self, n_outputs=1)
Runs a threshold test attack on the target models. Works only for binary property inference against a classifier model.
Args
n_outputs : int - number of attack results to output, using multiple random subsets of the shadow models
Returns: Attack accuracy on target models
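Both simple baselines can be run as sketched below, assuming a binary property inference task against a classifier and an Experiment whose target and shadow models are already fitted.

    loss_acc = exp.run_loss_test()                 # single accuracy value on the targets
    thr_acc = exp.run_threshold_test(n_outputs=3)  # three results over random shadow subsets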
def run_whitebox_deepsets(self, hyperparams, n_outputs=1)
Runs a white-box attack on the target models using a DeepSets meta-classifier
Args
hyperparams : dict or DictConfig - hyperparameters for the DeepSets meta-classifier. Accepted keywords are: latent_dim (default=5); epochs (default=20); learning_rate (default=1e-4); weight_decay (default=1e-4)
n_outputs : int - number of attack results to output, using multiple random subsets of the shadow models
Returns: Attack accuracy on target models
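A sketch using the keyword hyperparameters listed above; the specific values are illustrative, not recommended settings.

    deepsets_params = {
        'latent_dim': 10,
        'epochs': 50,
        'learning_rate': 1e-4,
        'weight_decay': 1e-4
    }
    acc = exp.run_whitebox_deepsets(deepsets_params, n_outputs=1)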
def run_whitebox_sort(self, sort=True, n_outputs=1)
Runs a white-box attack on the target models, using the model parameters as features for a meta-classifier
Args
sort : bool - whether to perform node sorting (to be used for permutation-invariant DNNs)
n_outputs : int - number of attack results to output, using multiple random subsets of the shadow models
Returns: Attack accuracy on target models for the classification task, or mean absolute error for the regression task
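A sketch comparing the sorted and unsorted variants, assuming exp holds fitted target and shadow models.

    acc_sorted = exp.run_whitebox_sort(sort=True)   # node-sorted parameters, invariant to hidden-unit permutations
    acc_raw = exp.run_whitebox_sort(sort=False)     # raw parameter vector as features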