# EA_benchmarking

(Work in progress) documentation.

## Prerequisites

### Installation

- `conda env create -f env.yml --name benchmarking_env` (preferred)
  - `conda activate benchmarking_env`
- `pip install -r requirements.txt`
- Then compile NEURON/NeuroGPU on your system:
  - `sh compile_neuron.sh`

## Run Demo Experiment

- `cd scripts/slurm_launch/`
- `sh neuron_batch_simple.sh`

## Running Experiments

- Experiments are specified by plan text files, which can be found in `scripts/slurm_launch/plans`.
- A plan looks like this:

```
nGens=1
offspring=3000,3000,3000,3000,3000,3000,3000,3000,3000
cpuTrials=80
N=1,2,4,8,16,32,64,128,256
n_stims=6,6,6,6,6,6,6,6,6
n_sfs=20,20,20,20,20,20,20,20,20
clean=False
```
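The plan format above is plain `key=value` lines, with comma-separated values giving one entry per run. As an illustration only (this helper is hypothetical and not part of the repository), such a plan could be parsed like this:

```python
# Hypothetical helper (not part of this repo): parse a plan file's
# key=value lines, splitting comma-separated per-run values and
# converting entries to int/bool where possible.
def parse_plan(text):
    def convert(s):
        if s.isdigit():
            return int(s)
        if s in ("True", "False"):
            return s == "True"
        return s

    plan = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blanks and malformed lines
        key, _, value = line.partition("=")
        entries = [convert(p.strip()) for p in value.split(",")]
        # Single-valued keys (e.g. nGens, clean) collapse to a scalar.
        plan[key.strip()] = entries if len(entries) > 1 else entries[0]
    return plan

plan = parse_plan("""\
nGens=1
N=1,2,4,8
clean=False
""")
print(plan["N"])      # [1, 2, 4, 8]
print(plan["clean"])  # False
```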

## Plotting / Parsing Experiments

- Once you have run some experiments, you should have files in the outputs folders.

- `cd scripts/plotting_scripts`
- `sh plot_all_neuron.sh`

- This parses the output logs from the experiments and consolidates the results in the outputs folder.

- To compare experiments across different outputs folders, use `meta_plot.py`, which parses the outputs in the subdirectories `Neuron_EA`, `NeuroGPU_EA`, and `CoreNeuron_EA`.
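A minimal sketch of how such a cross-framework comparison can be organized: the three subdirectory names come from this README, but the file-collection logic below is an assumption for illustration and does not reproduce what `meta_plot.py` actually does.

```python
import os

# Framework subdirectory names taken from the README; everything else
# in this sketch (file extensions, grouping) is an assumption.
FRAMEWORKS = ["Neuron_EA", "NeuroGPU_EA", "CoreNeuron_EA"]

def collect_results(outputs_dir):
    """Group result files under an outputs folder by framework."""
    results = {}
    for fw in FRAMEWORKS:
        fw_dir = os.path.join(outputs_dir, fw)
        if not os.path.isdir(fw_dir):
            results[fw] = []  # framework was not run in this experiment
            continue
        results[fw] = sorted(
            os.path.join(fw_dir, name)
            for name in os.listdir(fw_dir)
            if name.endswith((".log", ".csv"))
        )
    return results
```

With the per-framework file lists in hand, each experiment's outputs folder can be summarized side by side before plotting.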