- MPI 1.0
- MPICH 3.3.2
- NEURON > 7.6 (see installation instructions here).
- CoreNeuron (see installation instructions here).
- SLURM
- This is a limitation, since it generally requires that you run this code on an HPC cluster. If you'd like to try a version that doesn't require HPC, try this repo.
- If you do check out this sub-repo, please reach out to me at [email protected] and I am happy to help with any technical issues in setup! Thank you.
- NeuroGPU
- CUDA (>= 10)
- GCC (>= 8.3.0)
- Other requirements like [BluePyOpt](https://github.com/BlueBrain/BluePyOpt) and [eFEL](https://efel.readthedocs.io/en/latest/) are listed in requirements.txt
- Create and activate the conda environment (preferred):

      conda env create -f env.yml --name benchmarking_env
      conda activate benchmarking_env
- Install the remaining Python dependencies:

      pip install -r requirements.txt

- You may encounter an issue installing mpi4py if you do not have MPI and MPICH installed. See [this link](https://stackoverflow.com/questions/28440834/error-when-installing-mpi4py).
- Then you'll need to compile NEURON/NeuroGPU on your system:

      sh compile_neuron.sh
      cd scripts/slurm_launch/
      sh neuron_batch_simple.sh
- We create text files like the one below to specify experiments; these plans can be found in `scripts/slurm_launch/plans`.
- They look like this (a hypothetical parsing sketch follows the example):
      nGens=1
      offspring=3000,3000,3000,3000,3000,3000,3000,3000,3000
      cpuTrials=80
      N=1,2,4,8,16,32,64,128,256
      n_stims=6,6,6,6,6,6,6,6,6
      n_sfs=20,20,20,20,20,20,20,20,20
      clean=False
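In a plan like this, the scalar fields (`nGens`, `cpuTrials`) apply to every run, while each comma-separated field has one entry per run in the chain (here nine runs, scaling `N` from 1 to 256). The plans are actually launched by the shell scripts below; the snippet here is only a hypothetical sketch of how such a key=value file maps to per-run settings, not code from this repo.

```python
# Hypothetical sketch: read a plan file and pair its comma-separated
# fields element-wise into one settings dict per run.
from pathlib import Path

def read_plan(path):
    """Parse key=value lines; comma-separated values become lists."""
    plan = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, value = line.split("=", 1)
        plan[key] = value.split(",") if "," in value else value
    return plan

def runs_from_plan(plan):
    """Yield one flat config per run by zipping the list-valued fields."""
    list_keys = [k for k, v in plan.items() if isinstance(v, list)]
    n_runs = len(plan[list_keys[0]]) if list_keys else 1
    for i in range(n_runs):
        yield {k: (v[i] if isinstance(v, list) else v) for k, v in plan.items()}

if __name__ == "__main__":
    # The plan file name below is illustrative, not a file shipped with the repo.
    plan = read_plan("scripts/slurm_launch/plans/example_plan.txt")
    for run in runs_from_plan(plan):
        print(run)  # e.g. {'nGens': '1', 'offspring': '3000', 'N': '8', ...}
```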
- You can launch them using `scripts/slurm_launch/meta_chain.sh`.
- Once you have run some experiments, you should have result files in the outputs folders.
- To generate plots:

      cd scripts/plotting_scripts
      sh plot_all_neuron.sh
- This will parse the output logs from the experiments and then consolidate the results in the outputs folder (a hypothetical sketch of this kind of consolidation is shown below).
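For illustration only: the repo's plotting scripts do the real parsing, and the exact log layout is not documented here. The sketch below assumes, hypothetically, that each run writes a `.log` file containing a line like `total runtime: 123.4`, and consolidates those values into a single CSV; the directory layout, log name, and line format are all assumptions.

```python
# Hypothetical consolidation sketch -- log names and formats are assumptions,
# not the repo's real output format.
import csv
import re
from pathlib import Path

RUNTIME_RE = re.compile(r"total runtime:\s*([0-9.]+)")  # assumed log line format

def consolidate(outputs_dir="outputs", summary_csv="outputs/summary.csv"):
    """Collect one runtime per run log and write them to a summary CSV."""
    rows = []
    for log in Path(outputs_dir).glob("**/*.log"):
        runtime = None
        for line in log.read_text().splitlines():
            m = RUNTIME_RE.search(line)
            if m:
                runtime = float(m.group(1))  # keep the last reported runtime
        if runtime is not None:
            rows.append({"run": log.parent.name, "runtime_s": runtime})
    with open(summary_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["run", "runtime_s"])
        writer.writeheader()
        writer.writerows(rows)
    return rows

if __name__ == "__main__":
    for row in consolidate():
        print(row)
```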
- If you want to compare between different experiments in different outputs folders, you can use `meta_plot.py`, which parses the outputs in the `Neuron_EA`, `NeuroGPU_EA`, and `CoreNeuron_EA` subdirectories.
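Again for illustration only: the sketch below is not the repo's `meta_plot.py`. Only the subdirectory names (`Neuron_EA`, `NeuroGPU_EA`, `CoreNeuron_EA`) come from the text above; the `summary.csv` file name and its columns are assumptions carried over from the consolidation sketch.

```python
# Hypothetical comparison sketch across simulator output subdirectories.
import csv
from pathlib import Path

import matplotlib.pyplot as plt

# Subdirectory names taken from the README; file name below is an assumption.
SIMULATORS = ["Neuron_EA", "NeuroGPU_EA", "CoreNeuron_EA"]

def load_runtimes(outputs_root):
    """Read the assumed per-simulator summary.csv files, if present."""
    runtimes = {}
    for sim in SIMULATORS:
        csv_path = Path(outputs_root) / sim / "summary.csv"
        if not csv_path.exists():
            continue
        with open(csv_path, newline="") as f:
            runtimes[sim] = [float(row["runtime_s"]) for row in csv.DictReader(f)]
    return runtimes

def plot_comparison(outputs_root="outputs"):
    """Plot runtimes per run for each simulator on one set of axes."""
    for sim, values in load_runtimes(outputs_root).items():
        plt.plot(range(1, len(values) + 1), values, marker="o", label=sim)
    plt.xlabel("run index")
    plt.ylabel("runtime (s)")
    plt.legend()
    plt.savefig("simulator_comparison.png")

if __name__ == "__main__":
    plot_comparison()
```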