- Syft
- Job
- Job#accepted
- Job#rejected
- Job#error
- SyftModel
- Plan
- PlanTrainer
- PlanTrainer#start
- PlanTrainer#end
- PlanTrainer#stop
- PlanTrainer#epochStart
- PlanTrainer#epochEnd
- PlanTrainer#batchStart
- PlanTrainer#batchEnd
- PlanInputSpec
- PlanOutputSpec
- PlanTrainerCheckpoint
- Dataset
- DataLoader
Syft client for model-centric federated learning.
- `options` **Object** Client configuration (e.g. the PyGrid `url` and `verbose` logging, as in the example below).
const client = new Syft({url: "ws://localhost:5000", verbose: true})
const job = client.newJob({modelName: "mnist", modelVersion: "1.0.0"})
job.on('accepted', async ({model, clientConfig}) => {
  // Execute training
  const training = job.train('...', { ... })
  training.on('end', async () => {
    // Submit the difference between the original and the trained model
    const diff = await model.createSerializedDiffFromModel(training.currentModel)
    await job.report(diff)
  })
})
job.on('rejected', ({timeout}) => {
  // Retry later or stop
})
job.on('error', (err) => {
  // Handle errors
})
job.request()
Instantiates a new Job with the given options.
- `options` **Object** Job options (`modelName`, `modelVersion`; see the example above).
Returns Job
Job represents a single training cycle done by the client.
- `plans` **Object<string, Plan>** Plans dictionary.
- `protocols` **Object<string, Protocol>** [not implemented] Protocols dictionary.
- `model` **SyftModel** Model.
Registers an event listener to the Job's event observer.
Available events: `accepted`, `rejected`, `error`.
Starts the Job by executing the following actions:
- Authenticates for given FL model.
- Meters connection speed to PyGrid (if requested by PyGrid).
- Registers into training cycle on PyGrid.
- Retrieves cycle and client parameters.
- Downloads the model, plans, protocols from PyGrid.
- Fires the `accepted` event on success.
Returns Promise<void>
- **See: Job.start**
Alias for Job.start.
Returns Promise<void>
Submits the model diff to PyGrid.
- `diff` **ArrayBuffer** Serialized difference between original and trained model parameters.
Returns Promise<void>
Trains the model using the specified Plan and training parameters.
Returns a PlanTrainer object that provides a handle on the training process (see the sketch after the parameter list below).
- `trainingPlan` **string** Training Plan name.
- `parameters` **Object** Dictionary of training parameters.
  - `parameters.inputs` **[PlanInputSpec]** List of training Plan input arguments.
  - `parameters.outputs` **[PlanOutputSpec]** List of training Plan outputs.
  - `parameters.data` **tf.Tensor** Tensor containing training data.
  - `parameters.target` **tf.Tensor** Tensor containing training targets.
  - `parameters.epochs` **number?** Epochs to train (if not specified, taken from Job).
  - `parameters.batchSize` **number?** Batch size (if not specified, taken from Job).
  - `parameters.stepsPerEpoch` **number?** Max number of steps per epoch (if not specified, taken from Job).
  - `parameters.checkpoint` **PlanTrainerCheckpoint?** Checkpoint.
  - `parameters.events` **Object?** List of event listeners.
    - `parameters.events.start` **Function?** On training start listener.
    - `parameters.events.end` **Function?** On training end listener.
    - `parameters.events.epochStart` **Function?** On epoch start listener.
    - `parameters.events.epochEnd` **Function?** On epoch end listener.
    - `parameters.events.batchStart` **Function?** On batch start listener.
    - `parameters.events.batchEnd` **Function?** On batch end listener.
Returns PlanTrainer
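For orientation, a minimal sketch of a typical call inside the `accepted` handler; the Plan name 'training_plan' and the trainData/trainTarget tensors are assumptions, and the inputs/outputs arrays are built as described in the PlanInputSpec and PlanOutputSpec sections below.
const training = job.train('training_plan', { // Plan name is an assumption
  inputs,              // Array of PlanInputSpec (see PlanInputSpec below)
  outputs,             // Array of PlanOutputSpec (see PlanOutputSpec below)
  data: trainData,     // tf.Tensor with training samples (assumed variable)
  target: trainTarget, // tf.Tensor with training labels (assumed variable)
  events: {
    end: () => console.log('Training finished'),
  },
})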
`accepted` event.
Triggered when PyGrid accepts the client into the training cycle.
Type: Object
`rejected` event.
Triggered when PyGrid rejects the client.
Type: Object
- `timeout` **(number | null)** Time in seconds to retry. Empty when the FL model is not trainable anymore.
`error` event.
Triggered for a variety of error conditions.
Model parameters as stored in PyGrid.
- `params` **[tf.Tensor]** Array of Model parameters.
Returns model serialized to protobuf.
Returns Promise<ArrayBuffer>
Calculates the difference between two versions of the Model parameters and returns a serialized diff that can be submitted to PyGrid.
- `updatedModelParams` **Array<tf.Tensor>** Array of model parameters (tensors).
Returns Promise<ArrayBuffer> Protobuf-serialized diff.
Calculates the difference between two versions of the Model and returns a serialized diff that can be submitted to PyGrid.
- `model` **SyftModel** Model to compare with.
Returns Promise<ArrayBuffer> Protobuf-serialized diff.
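A short sketch of both variants; createSerializedDiffFromModel appears in the Job example above, while the name createSerializedDiff for the parameter-array variant is an assumption, since this section omits method names.
// Variant taking an array of updated parameter tensors
// (the method name createSerializedDiff is assumed).
const diffFromParams = await job.model.createSerializedDiff(updatedParams)
// Variant taking another SyftModel, e.g. the trained model held by a PlanTrainer.
const diffFromModel = await job.model.createSerializedDiffFromModel(training.currentModel)
await job.report(diffFromModel)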
A Plan stores a sequence of actions (ComputationAction) in its role. A worker is assigned plans and executes the actions they contain.
Executes the Plan and returns its output.
The order, type and number of arguments must match the arguments defined in the PySyft Plan.
Returns Promise<Array<tf.Tensor>>
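An illustrative sketch only: for a hypothetical Plan defined in PySyft as training_plan(data, target, batch_size, lr, ...model_params), a direct call could look like the snippet below. The plan name, argument list, and destructured outputs are assumptions, and the exact call signature should be checked against the Plan definition; the higher-level Job.train / PlanTrainer API is the usual way to run training.
// Hypothetical Plan taking (data, target, batch_size, lr, ...model_params)
// and returning [loss, accuracy, ...updated_params].
const [loss, accuracy, ...updatedParams] = await job.plans['training_plan'].execute(
  dataBatch,           // tf.Tensor: batch of training samples
  targetBatch,         // tf.Tensor: batch of labels
  64,                  // batch size
  0.01,                // learning rate
  ...job.model.params  // current model parameters
)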
Class that contains training loop logic.
- `originalModel` **SyftModel** Original model.
- `currentModel` **SyftModel** Trained model.
- `epoch` **number** Current epoch.
- `batchIdx` **number** Current batch.
- `stopped` **boolean** Whether training is currently stopped.
Registers an event listener to the PlanTrainer's event observer.
Available events: `start`, `end`, `stop`, `epochStart`, `epochEnd`, `batchStart`, `batchEnd`.
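For example, given the PlanTrainer returned by Job.train, listeners can be attached directly (the payload fields follow the event descriptions below):
training.on('batchEnd', ({epoch, batch, loss, metrics}) => {
  console.log(`epoch ${epoch}, batch ${batch}: loss=${loss}`, metrics)
})
training.on('end', () => {
  console.log('Training finished')
})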
Starts the training loop.
- `resume` (optional, default `false`)
Stops the training loop and returns a training checkpoint.
Returns Promise<PlanTrainerCheckpoint>
Resumes a stopped training process.
Creates a checkpoint from the current training state.
Returns PlanTrainerCheckpoint
Restores PlanTrainer state from a checkpoint.
- `checkpoint` **PlanTrainerCheckpoint**
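A sketch of pausing and resuming training, assuming the stop and resume operations described above are exposed as stop() and resume() on the PlanTrainer (this section omits the method names), and using the checkpoint parameter of Job.train to restart from a saved state:
// Pause training and capture its state.
const checkpoint = await training.stop()
// Option 1: resume the same trainer in place.
training.resume()
// Option 2: start a fresh training run from the checkpoint,
// e.g. in a later session (inputs/outputs/data/target as in the Job.train sketch above).
const resumed = job.train('training_plan', {inputs, outputs, data, target, checkpoint})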
`start` event.
Triggered on training start.
Type: Object
`end` event.
Triggered after training end.
`stop` event.
Triggered when training was stopped.
`epochStart` event.
Triggered before epoch start.
Type: Object
- `epoch` **number** Current epoch.
`epochEnd` event.
Triggered after epoch end.
- `epoch` **number** Current epoch.
`batchStart` event.
Triggered before batch start.
Type: Object
`batchEnd` event.
Triggered after batch end.
Type: Object
- `epoch` **number** Current epoch.
- `batch` **number** Current batch.
- `loss` **number?** Batch loss.
- `metrics` **Object?** Dictionary containing metrics (if any are defined in the `outputs`).
Object that describes Plan input.
Parameters known to PlanTrainer (like training data, model parameters, batch size, etc.) are mapped into Plan arguments according to this object.
- `type` **string** Input argument type.
- `name` **string?** Optional argument name. (optional, default `null`)
- `index` **number?** Optional argument index (to take from array). (optional, default `null`)
- `value` **any?** Argument value. (optional, default `null`)
- Represents training data (substituted with the PlanTrainer's `data` batch).
- Represents training targets, aka labels (substituted with the PlanTrainer's `target` batch).
- Represents batch size (substituted with the PlanTrainer's `batchSize`).
- Represents a parameter from the client config configured in the FL model; the `name` argument is required (substituted with the parameter from the PlanTrainer's `clientConfig`).
- Represents any value; the `value` argument is required.
- Represents a model parameter (substituted with `SyftModel` contents).
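An illustrative sketch covering these input types, assuming positional (type, name, index, value) constructor arguments as listed above and PlanInputSpec.TYPE_* constants whose names are assumptions (this section lists descriptions only):
const inputs = [
  new PlanInputSpec(PlanInputSpec.TYPE_DATA),                      // training data batch
  new PlanInputSpec(PlanInputSpec.TYPE_TARGET),                    // training targets batch
  new PlanInputSpec(PlanInputSpec.TYPE_BATCH_SIZE),                // batch size
  new PlanInputSpec(PlanInputSpec.TYPE_CLIENT_CONFIG_PARAM, 'lr'), // client config parameter; name required
  new PlanInputSpec(PlanInputSpec.TYPE_VALUE, null, null, 42),     // arbitrary value; value required
  new PlanInputSpec(PlanInputSpec.TYPE_MODEL_PARAM),               // model parameters
]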
Object that describes Plan output.
Values returned from the Plan (like loss, accuracy, model parameters, etc.) are mapped into the PlanTrainer's internal state according to this object.
- `type` **string** Output variable type.
- `name` **string?** Optional name. (optional, default `null`)
- `index` **number?** Optional index (to put into array). (optional, default `null`)
- Represents the loss value (maps to the PlanTrainer's `loss`).
- Represents a metric value; `name` is required (maps to the PlanTrainer's `metrics` dictionary).
- Represents a model parameter (maps to `SyftModel` parameters).
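A matching sketch for outputs; again, the PlanOutputSpec.TYPE_* constant names are assumptions:
const outputs = [
  new PlanOutputSpec(PlanOutputSpec.TYPE_LOSS),               // batch loss
  new PlanOutputSpec(PlanOutputSpec.TYPE_METRIC, 'accuracy'), // named metric; name required
  new PlanOutputSpec(PlanOutputSpec.TYPE_MODEL_PARAM),        // updated model parameters
]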
Object that stores PlanTrainer state so that training can be resumed from it.
- `parameters` **Object** Dictionary of parameters.
  - `parameters.epochs` **number** Total number of epochs.
  - `parameters.stepsPerEpoch` **number?** Max steps per epoch.
  - `parameters.batchSize` **number** Batch size.
  - `parameters.clientConfig` **Object** Client config.
  - `parameters.epoch` **number** Current epoch.
  - `parameters.batch` **number** Current batch number.
  - `parameters.currentModel` **SyftModel** Current state of the Model.
Returns the PlanTrainerCheckpoint serialized to a plain Object.
Creates a PlanTrainerCheckpoint from a plain Object.
Returns PlanTrainerCheckpoint
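A sketch of persisting a checkpoint between sessions; the toJSON/fromJSON method names, the exact fromJSON arguments, and the use of localStorage are assumptions:
// Serialize the checkpoint to a plain object and persist it.
localStorage.setItem('checkpoint', JSON.stringify(checkpoint.toJSON()))
// Later: rebuild the checkpoint (exact fromJSON arguments may differ)
// and pass it to Job.train to continue training.
const restored = PlanTrainerCheckpoint.fromJSON(JSON.parse(localStorage.getItem('checkpoint')))
const training = job.train('training_plan', {inputs, outputs, data, target, checkpoint: restored})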
Abstract class for Dataset.
The `getItem` method and the `length` getter must be defined in the child class.
class MyDataset extends Dataset {
  constructor() {
    super();
    this.data = [1, 2, 3, 4, 5].map(i => tf.tensor(i));
    this.labels = [0, 0, 1, 0, 1].map(i => tf.tensor(i));
  }
  getItem(index) {
    return [this.data[index], this.labels[index]];
  }
  get length() {
    return this.data.length;
  }
}
const ds = new MyDataset();
ds[0][0].print() // => Tensor 1
ds[0][1].print() // => Tensor 0
DataLoader controls fetching the data from the Dataset, including shuffling and batching. Implements iterable protocol to iterate over data samples.
Note: currently it only supports tf.Tensor data in the dataset, and collates batches using TFJS.
- `parameters` **Object** DataLoader parameters (the example below passes `dataset` and `batchSize`).
- `length` **Number** Data length.
const loader = new DataLoader({dataset, batchSize: 32})
console.log('number of batches: ', loader.length)
for (let batch of loader) {
// ...
}
Iterator producing data batches.
Returns any