Project: parallelization via MPI
- Potential advisor/consultant(s): Anton Leykin
- Goal: coarse parallelization using Message Passing Interface (MPI)
- Current status: available! (some very basic functionality has already been implemented)
- Macaulay2 skill level: intermediate (some C++ experience is needed if alterations of the core become necessary)
- Mathematical experience: not important (undergraduate+, see "other info")
- Reason(s) to participate: develop a package that uses a handful of core routines (already in place)
- Other info: an ideal contributor would be someone who has an M2 program that (badly!) needs supercomputing power
MPI is a standard interface that enables distributed computing on supercomputing clusters (or on any computer with multiple cores). The basic idea is to launch several M2 processes (e.g., one per node of a distributed network) and provide an easy mechanism for distributing tasks by exchanging messages between the processes.
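To illustrate the coarse master/worker pattern described above, here is a minimal sketch in C using the standard MPI API. It is not Macaulay2 code and does not reflect the existing core functionality; the task and result payloads (plain integers) are placeholders for whatever data an M2 computation would exchange.

```c
/* Minimal master/worker sketch: rank 0 hands out task IDs,
 * the other ranks compute a stand-in result and send it back. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {                 /* need at least one worker */
        MPI_Finalize();
        return 1;
    }

    const int ntasks = 8;           /* hypothetical number of tasks */

    if (rank == 0) {
        /* master: distribute tasks round-robin, then collect results */
        for (int t = 0; t < ntasks; t++) {
            int dest = 1 + t % (size - 1);
            MPI_Send(&t, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        }
        for (int t = 0; t < ntasks; t++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("master received result %d\n", result);
        }
    } else {
        /* worker: receive each assigned task, compute, reply */
        for (int t = rank - 1; t < ntasks; t += size - 1) {
            int task, result;
            MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            result = task * task;   /* stand-in for an M2 computation */
            MPI_Send(&result, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Compiled with `mpicc` and launched with, e.g., `mpirun -np 4`, this starts four processes; the same launch mechanism would start several M2 processes, one per node or core, in the setting this project targets.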
Homepage | Projects | Packages | Documentation | Events | Google Group