This repository implements the GreedyFed algorithm for accelerating convergence in federated learning and compares it against baselines: UCB [5], FedAvg [1], FedProx [3], S-FedAvg [4], Power-Of-Choice [2], and centralised training, on the MNIST, FMNIST, and CIFAR-10 datasets. Results are logged and visualized using Weights & Biases (W&B).
A preprint is available on arXiv.
To run these algorithms, execute `main.py` with the desired settings (edit the file directly).
Dataset Configuration:

- dataset name (from `['fmnist', 'cifar10', 'mnist']`)
- number of clients ($N$, any positive integer)
- alpha ($\alpha$, the concentration parameter of the Dirichlet distribution used to split data across clients; not required for the Synthetic dataset; typically varied in powers of 10 from $10^{-4}$ to $10^{4}$; see the partitioning sketch below)
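As a rough illustration of how $\alpha$ shapes heterogeneity, here is a minimal sketch of a Dirichlet label split. The function name and exact partitioning logic are assumptions for illustration, not the repository's actual code:

```python
import numpy as np

def dirichlet_split(labels, num_clients, alpha, seed=0):
    """Illustrative non-IID split: for each class, draw client proportions
    from Dirichlet(alpha) and assign that class's samples accordingly.
    Small alpha -> highly skewed splits; large alpha -> near-uniform splits."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices
```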
Algorithm Configuration:

- algorithms to execute (from `['greedyfed', 'fedavg', 'fedprox', 'sfedavg', 'ucb', 'centralised', 'poc']`)
- client selection fraction $\frac{M}{N}$
- E, B, lr, momentum (epochs, batches, learning rate, SGD momentum)
- T (number of communication rounds)
- noise level (maximum client update noise in the privacy-preserving setting)
Algorithm Hyperparameters:

- S-FedAvg [4]: $\alpha = 1 - \beta$
- Power-Of-Choice [2]: decay factor $\lambda$
- FedProx [3]: weight of proximal term $\mu$
- GreedyFed: memory, the weight for the exponentially weighted average (a configuration sketch follows below)
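For concreteness, these settings might be collected near the top of `main.py` roughly as follows. Every name and value below is a hypothetical illustration, not the file's actual contents:

```python
# Hypothetical configuration block; the actual variable names in
# main.py may differ.
config = {
    "dataset": "mnist",           # one of 'mnist', 'fmnist', 'cifar10'
    "num_clients": 100,           # N
    "alpha": 1e-1,                # Dirichlet concentration parameter
    "algorithms": ["greedyfed", "fedavg", "fedprox"],
    "select_fraction": 0.1,       # M / N
    "E": 5, "B": 10,              # epochs and batches
    "lr": 0.01, "momentum": 0.9,  # SGD settings
    "T": 200,                     # communication rounds
    "noise_level": 0.0,           # max client update noise
    # algorithm-specific hyperparameters (placeholder values)
    "fedprox_mu": 0.1,
    "poc_decay": 0.9,
    "sfedavg_beta": 0.5,          # alpha = 1 - beta
    "greedyfed_memory": 0.9,      # exponentially weighted average weight
}
```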
Logging results: if `logging` is set to `True`, the runs are saved to W&B.
You can set the above parameters to a single value or implement a hyperparameter sweep over a list of values. After selecting the desired values, execute:

```
python main.py
```
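One simple way to realise such a sweep is to iterate over the Cartesian product of the lists. The `run_experiment` helper below is an assumed placeholder, not a function defined in this repository:

```python
from itertools import product

def run_experiment(**settings):
    """Placeholder for a single training run; the real entry point
    lives in main.py and is only assumed here."""
    print("running with", settings)

alphas = [1e-4, 1e-2, 1e0, 1e2, 1e4]
select_fractions = [0.1, 0.2]

for alpha, frac in product(alphas, select_fractions):
    run_experiment(dataset="mnist", alpha=alpha, select_fraction=frac)
```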
To tabulate results, we download the runs from W&B into a Pandas DataFrame and calculate the test accuracy under various settings. The code directly produces the LaTeX source for the tables used in our paper.
Set `download = True` in `plotting.py` if you wish to download results from your own runs instead of using existing logs, and set `dataset` to one of `["mnist", "fmnist", "cifar10"]`. Then execute:

```
python plotting.py
```
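The download step might look roughly like the following sketch using the public `wandb` API. The project path and the config/summary keys are placeholders, not the repository's actual values:

```python
import pandas as pd
import wandb

# Sketch of pulling runs into a DataFrame; "entity/greedyfed" and the
# keys below are placeholders.
api = wandb.Api()
runs = api.runs("entity/greedyfed")

rows = []
for run in runs:
    rows.append({
        "algorithm": run.config.get("algorithm"),
        "dataset": run.config.get("dataset"),
        "test_accuracy": run.summary.get("test_accuracy"),
    })

df = pd.DataFrame(rows)
print(df.to_latex(index=False))  # LaTeX source for a results table
```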
`server.py` implements the Server class with methods for:
- client model aggregation
- Shapley Value computation
- returning server model performance metrics (accuracy and loss)
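For intuition, FedAvg-style aggregation [1] is a data-weighted average of client parameters. Below is a minimal PyTorch sketch, assuming each client update arrives as a `state_dict`; the Server class's actual interface may differ:

```python
import torch

def aggregate(client_states, client_weights):
    """Weighted average of client model state dicts (FedAvg-style).
    client_weights are typically proportional to local dataset sizes."""
    total = sum(client_weights)
    weights = [w / total for w in client_weights]
    # initialise the running average with zeros of the right shapes
    avg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in client_states[0].items()}
    for state, w in zip(client_states, weights):
        for k, v in state.items():
            avg[k] += w * v.float()
    return avg
```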
Three different kinds of Shapley Value estimation are implemented in `server.py`:
- Truncated Monte Carlo (TMC) sampling
- GTG-Shapley (default) [7]
- True Shapley Value [6] (extremely expensive, as it evaluates the loss over all subsets of clients)
The convergence criterion for Shapley Values is implemented in `utils.py`.
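For intuition, a generic Monte Carlo Shapley estimator with a simple convergence check might look like the sketch below. The value function, tolerance, and stopping rule are illustrative assumptions, not the repository's TMC or GTG-Shapley [7] implementations:

```python
import numpy as np

def monte_carlo_shapley(value_fn, num_players, max_iters=1000, tol=1e-4):
    """Estimate Shapley values by averaging marginal contributions over
    random permutations; stop once the estimates change by less than tol."""
    shapley = np.zeros(num_players)
    rng = np.random.default_rng(0)
    for t in range(1, max_iters + 1):
        prev = shapley.copy()
        perm = rng.permutation(num_players)
        coalition, v_prev = [], value_fn([])
        for player in perm:
            coalition.append(player)
            v_curr = value_fn(coalition)
            # running average of this player's marginal contributions
            shapley[player] += (v_curr - v_prev - shapley[player]) / t
            v_prev = v_curr
        if t > 1 and np.max(np.abs(shapley - prev)) < tol:
            break
    return shapley
```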
Implements the Client class with methods for:
- client model training
- adding noise to updates
- returning client model performance metrics on client data (accuracy and loss)
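Noisy updates for the privacy-preserving setting can be sketched as follows. Scaling zero-mean Gaussian noise by the noise level is an assumption about how the noise knob behaves, not a description of the actual Client code:

```python
import torch

def add_update_noise(update, noise_level):
    """Illustrative privacy noise: perturb each tensor in a client
    update with zero-mean Gaussian noise scaled by noise_level."""
    return {k: v + noise_level * torch.randn_like(v.float())
            for k, v in update.items()}
```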
Implements all the above-mentioned Federated Learning algorithms. Every method returns `test_accuracy`, `train_accuracy`, `train_loss`, `validation_loss`, `test_loss`, and `client_selections`, along with some additional algorithm-specific metrics.
The FedProx and FedAvg losses are defined using nested functions; the returned loss functions have a slightly different signature from PyTorch's built-in criteria.
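As an illustration of the nested-function pattern, a FedProx-style loss [3] can close over the global weights and the proximal weight $\mu$. The signature below (an extra `model` argument) is a sketch, not the repository's exact interface:

```python
import torch
import torch.nn.functional as F

def make_fedprox_loss(global_params, mu):
    """Returns a loss function adding the FedProx proximal term
    (mu / 2) * ||w - w_global||^2 to the cross-entropy loss.
    Note the extra `model` argument, unlike standard PyTorch criteria."""
    def loss_fn(output, target, model):
        ce = F.cross_entropy(output, target)
        prox = sum(torch.sum((p - g.detach()) ** 2)
                   for p, g in zip(model.parameters(), global_params))
        return ce + 0.5 * mu * prox
    return loss_fn
```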
Implements methods for downloading datasets, splitting them into train-val-test sets, and distributing data across clients using power-law and Dirichlet distributions.
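For intuition, a power-law allocation of sample counts across clients might look like the following sketch; the exponent and the remainder handling are illustrative assumptions, not the repository's split:

```python
import numpy as np

def power_law_sizes(num_samples, num_clients, exponent=1.5):
    """Illustrative power-law client sizes: client k receives a share
    proportional to (k + 1)^(-exponent), so a few clients hold most data."""
    shares = np.arange(1, num_clients + 1, dtype=float) ** (-exponent)
    shares /= shares.sum()
    sizes = np.floor(shares * num_samples).astype(int)
    sizes[0] += num_samples - sizes.sum()  # assign the rounding remainder
    return sizes
```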
Constructs the server object and the desired number of client objects, with data and a model allocated to each.
Implements two different models: a multi-layer perceptron (NN) and a convolutional neural network (CNN).
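For reference, a minimal PyTorch multi-layer perceptron in the spirit of the NN model; the input and hidden sizes are placeholders, not the repository's architecture:

```python
import torch.nn as nn

class MLP(nn.Module):
    """Minimal multi-layer perceptron for 28x28 grayscale inputs;
    the hidden width is a placeholder choice."""
    def __init__(self, in_dim=28 * 28, hidden=200, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)
```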
Implements assorted utility functions, including the convergence criterion for Shapley Values mentioned above.
[1] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. Aguera y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Artificial Intelligence and Statistics, PMLR, 2017, pp. 1273-1282.
[2] Y. J. Cho, J. Wang, and G. Joshi, "Client selection in federated learning: Convergence analysis and power-of-choice selection strategies," arXiv preprint arXiv:2010.01243, 2020.
[3] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, "Federated optimization in heterogeneous networks," in Proceedings of Machine Learning and Systems, vol. 2, 2020, pp. 429-450.
[4] L. Nagalapatti and R. Narayanam, "Game of gradients: Mitigating irrelevant clients in federated learning," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 10, 2021, pp. 9046-9054.
[5] S. R. Pandey, V. P. Bui, and P. Popovski, "Goal-Oriented Communications in Federated Learning via Feedback on Risk-Averse Participation," in 2023 IEEE 34th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2023, pp. 1-6. DOI: 10.1109/PIMRC56721.2023.10293926.
[6] L. S. Shapley, "Cores of convex games," International Journal of Game Theory, vol. 1, pp. 11-26, 1971.
[7] Z. Liu, Y. Chen, H. Yu, Y. Liu, and L. Cui, "GTG-Shapley: Efficient and accurate participant contribution evaluation in federated learning," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 13, no. 4, pp. 1-21, 2022.
If you use this repo in your project or research, please cite as:

```bibtex
@software{singha23,
  author = {Pranava Singhal and Shashi Raj Pandey and Petar Popovski},
  title  = {greedyfed},
  url    = {https://github.com/pringlesinghal/GreedyFed/},
  year   = {2023}
}
```