Implementation code for the IEEE TSP paper "State-Augmented Learnable Algorithms for Resource Management in Wireless Networks."

navid-naderi/StateAugmented_RRM_GNN


State-Augmented Learnable Algorithms for Resource Management in Wireless Networks (IEEE Transactions on Signal Processing)

This repository contains the source code for learning state-augmented resource management algorithms in wireless networks via graph neural network (GNN) parameterizations. In particular, a GNN policy is trained that takes as input both the network state at each time step (e.g., the channel gains across the network) and the dual variables indicating how much each user satisfies or violates its minimum-rate requirement over time. If run for a long enough period of time and under mild assumptions, the algorithm is guaranteed to generate resource management decisions that are both feasible (i.e., satisfy the per-user minimum-rate requirements) and near-optimal (i.e., achieve network-wide performance within a constant additive gap of the optimum). Please refer to the accompanying paper for more details.
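The dual-variable dynamics described above follow standard projected dual ascent: a user's dual variable grows while its rate falls short of the minimum-rate requirement and shrinks toward zero once the requirement is met. The following is a minimal sketch of that update; the variable names (`f_min`, `eta`, etc.) are illustrative and do not reflect the repository's actual code.

```python
import numpy as np

def dual_update(lam, rates, f_min, eta):
    """Projected dual ascent step (illustrative, not the repo's API):
    increase a user's dual variable when its rate falls below the
    minimum-rate requirement f_min, and decrease it (floored at zero)
    when the requirement is exceeded."""
    return np.maximum(lam + eta * (f_min - rates), 0.0)

lam = np.zeros(4)                       # one dual variable per user
rates = np.array([0.5, 0.9, 0.7, 1.2])  # example per-user rates
lam = dual_update(lam, rates, f_min=0.75, eta=0.1)
# users below the 0.75 requirement accumulate positive dual variables
```

The GNN policy then conditions on these dual variables alongside the instantaneous network state, which is what makes the parameterization "state-augmented."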

Training, Evaluation, and Visualization of Results

To train the state-augmented power control algorithms and evaluate their performance on a set of test configurations, run the following command:

python main.py

The wireless network parameters (such as the number of transmitter-receiver pairs) and the learning hyperparameters (such as the number and sizes of the GNN hidden layers) can be adjusted in the first few lines of main.py. Upon completion of the training and evaluation process, the generated dataset, the full results, and the final model parameters are saved in separate folders named data, results, and models, respectively.

To visualize the results, the following code (in a Python environment) can be used to plot the results produced with the default set of parameters in main.py:

from plot_gen import plot_results
plot_results('./results/m_6_T_100_fmin_0.75_train_256_test_128_mode_var_density.json')
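The results filename appears to encode the experiment configuration (number of pairs `m`, episode length `T`, minimum rate `fmin`, train/test set sizes, and mode). A small helper like the one below can reconstruct such a path for other configurations; note that this naming pattern is inferred from the single example above and is an assumption, not a documented convention of the repository.

```python
def results_path(m, T, fmin, n_train, n_test, mode):
    """Build a results-file path following the naming pattern inferred
    from the example in the README (hypothetical helper)."""
    return (f'./results/m_{m}_T_{T}_fmin_{fmin}'
            f'_train_{n_train}_test_{n_test}_mode_{mode}.json')

path = results_path(6, 100, 0.75, 256, 128, 'var_density')
# reproduces the path used in the plotting example above
```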

The result should resemble the sample results figure included in the repository.

Dependencies

Citation

Please use the following BibTeX citation to cite the accompanying paper if you use this repository in your work:

@article{StateAugmented_RRM_GNN_naderializadeh_TSP2022,
  title={State-Augmented Learnable Algorithms for Resource Management in Wireless Networks},
  author={NaderiAlizadeh, Navid and Eisen, Mark and Ribeiro, Alejandro},
  journal={IEEE Transactions on Signal Processing},
  year={2022},
  publisher={IEEE}
}
