Implementation of a GPU-parallel genetic algorithm (GA) using CUDA via Python's Numba library for a significant speedup.
The provided Python file serves as a basic template for using CUDA to parallelize a GA. It compares the time taken to run 5 generations of the GA serially on the CPU against a parallel run on the GPU, using an arbitrary (but expensive) fitness-evaluation task. On my machine (GeForce GTX 970) the observed speedup is almost 20x.