Project: parallelization via MPI

Anton Leykin edited this page Nov 20, 2021 · 3 revisions
  • Potential advisor/consultant(s): Anton Leykin
  • Goal: coarse parallelization using Message Passing Interface (MPI)
  • Current status: available! (some very basic functionality has been implemented already)
  • Macaulay2 skill level: intermediate (some C++ experience if alterations to the core are necessary)
  • Mathematical experience: not important (undergraduate+, see "other info")
  • Reason(s) to participate: develop a package that uses core routines (already developed)
  • Other info: an ideal contributor would be someone who has an M2 program that (badly!) needs supercomputing power

Project Description

MPI is a standard interface that enables distributed computing on supercomputing clusters (or any computer with multiple cores). The basic idea is to launch several M2 processes (e.g., one per node of a distributed network) and provide an easy mechanism for distributing tasks by exchanging messages between the processes.
