laser-core
can be installed standalone with `python3 -m pip install laser-core`. However, it may be more instructive to install one of the disease packages built on `laser-core` to understand what `laser-core` provides and what is expected to be in a disease model. See laser-measles.
Documentation can be found here at the moment.
- clone the `laser-core` repository with `git clone https://github.com/InstituteforDiseaseModeling/laser-core.git`
- install `uv`
- change to the `laser-core` directory with `cd laser-core`
- create a virtual environment for development with `uv venv`
- activate the virtual environment with
  - Mac or Linux: `source .venv/bin/activate`
  - Windows: `.venv\Scripts\activate`
- install `tox` with the `tox-uv` plugin with `uv tool install tox --with tox-uv`
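Assuming `git` and `uv` are already on your path, the steps above can be condensed into a single shell session (the final `tox -e py310` line is optional and just illustrates running one environment):

```shell
# Clone the repository and set up a development environment
git clone https://github.com/InstituteforDiseaseModeling/laser-core.git
cd laser-core
uv venv                           # creates .venv/
source .venv/bin/activate         # Windows: .venv\Scripts\activate
uv tool install tox --with tox-uv # tox with the tox-uv plugin
tox -e py310                      # run tests against Python 3.10 only
```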
Now you can run tests in the `tests` directory or run the entire check+docs+test suite with `tox`. Running `tox` will run several consistency checks, build the documentation, run tests against the supported versions of Python, and create a code coverage report based on the test suite. Note that the first run of `tox` may take a few minutes (~5). Subsequent runs should be quicker depending on the speed of your machine and the test suite (~2 minutes). You can use `tox` to run tests against a single version of Python with, for example, `tox -e py310`.
- firm up team/stakeholders/advisory committee: kmmcarthy, krosenfeld, clorton, jbloedow
- enumerate necessary features for reproducing/supporting previous and in-progress modeling efforts
- enumerate necessary features for outstanding questions and issues
- "paper search" / investigate potential existing solutions
- technical considerations
  - single laptop
  - single laptop w/Nvidia GPU
  - multicore
    - single machine
    - large machine (cloud)
  - ¿beyond?
  - NumPy
  - NumPy + Numba
  - NumPy + Numba + C/C++
  - NumPy + Numba + CUDA
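The first two rungs of that ladder can be sketched with a hypothetical per-agent update (all names, state codes, and the timer semantics here are invented for illustration): the same operation written once as a vectorized NumPy expression and once as an explicit loop that Numba can compile to native, parallel code.

```python
import numpy as np

try:
    from numba import njit, prange
except ImportError:  # Numba is optional here; fall back to plain Python
    def njit(*args, **kwargs):
        if args and callable(args[0]):
            return args[0]
        return lambda f: f
    prange = range

def step_numpy(timers, states):
    """Vectorized: decrement incubation timers, mark expired agents infectious (2)."""
    active = timers > 0
    timers[active] -= 1
    states[active & (timers == 0)] = 2  # state code 2 = infectious (illustrative)

@njit(parallel=True)
def step_numba(timers, states):
    """Same update as an explicit loop; Numba compiles it to parallel native code."""
    for i in prange(timers.shape[0]):
        if timers[i] > 0:
            timers[i] -= 1
            if timers[i] == 0:
                states[i] = 2
```

The loop form is often easier to extend (per-agent branching, heterogeneous parameters) and is the natural starting point for a later C/C++ or CUDA port.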
The problem is inherently an issue of heterogeneity. Spatial decomposition is the easiest consideration, but not sufficient - a model of N "independent" but identical communities is generally not useful.
Spatial connectivity and the associated latencies in transmission address one dimension of heterogeneity: how "close" a given community is to a potential source of imported contagion, whether exogenous to the model "world", locally endogenous (e.g., an adjacent community), or endogenous but at a remove (rare transmission or a multi-stop chain of transmission).
Community size in a spatial model is also a consideration - what is the configuration and connectivity of nodes below the critical community size (CCS) to nodes at or above CCS for the given disease?
We need configurable characteristics of the individual communities which can vary, along with their interconnectedness, to capture additional heterogeneity.
What is the modeling of the individual communities? "Light-Agent" seems to limit us to an ABM, but we should consider cohorts of epidemiologically similar populations (polio >5, HIV <15, TB latents, etc.) as well as stochastic compartmental models (XLA - eXtremely Light Agents).
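A minimal sketch of the stochastic compartmental ("XLA") end of that spectrum, assuming a binomial-chain SIR step with exponential hazards (the function name, parameters, and hazard forms are illustrative choices, not laser-core API):

```python
import numpy as np

def sir_step(S, I, R, beta, gamma, rng):
    """One day of a stochastic (binomial chain) SIR community.

    Each susceptible is infected with probability 1 - exp(-beta * I / N);
    each infectious individual recovers with probability 1 - exp(-gamma).
    """
    N = S + I + R
    p_inf = 1.0 - np.exp(-beta * I / N) if N > 0 else 0.0
    p_rec = 1.0 - np.exp(-gamma)
    new_inf = rng.binomial(S, p_inf)  # stochastic draw of new infections
    new_rec = rng.binomial(I, p_rec)  # stochastic draw of recoveries
    return S - new_inf, I + new_inf - new_rec, R + new_rec

rng = np.random.default_rng(42)
S, I, R = 990, 10, 0
for _ in range(100):
    S, I, R = sir_step(S, I, R, beta=0.3, gamma=0.1, rng=rng)
```

Such a step costs the same regardless of community size, which is what makes it attractive for the many small, sub-CCS nodes in a large spatial model.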
- Are the individual communities well-mixed or should we also provide for explicit networks at the local level?
- Python
- high-performance computing:
  - native code
    - C++ (somewhat awkward interop with Python, but potentially accessible from other technologies, e.g., R)
    - Rust (PyO3 is quite nice, but requires getting up to speed on Rust 😳)
- compute requirements:
  - laptop 2010+? (might inform SIMD capabilities)
  - GPU (CUDA) enabled laptop/desktop/cloud machine
  - single-core/multi-core
  - largest scenarios?
- visualization
  - cross-platform
  - real-time
- existing file formats for input data
- existing file formats for output data (GeoTIFF? - works with ArcGIS?)
- community builder tool for a given total population and community size distribution
- network builder given a set of communities (gravity, radiation, or other algorithms in existing libraries/packages)
- independent populations w/in a community, e.g., mosquitoes or dogs along with humans
- independent or co-transmission, i.e., multiple "diseases"
- models need to be connected with real-world scenarios, not [just] hypothetical explorations
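The network builder idea above, for the gravity-model case, might look something like this sketch (the function name, default exponents, and coordinate handling are assumptions, not an existing API):

```python
import numpy as np

def gravity_network(populations, coords, k=1.0, a=1.0, b=1.0, c=2.0):
    """Gravity-model connectivity: w_ij = k * p_i^a * p_j^b / d_ij^c.

    `populations` is a length-n array; `coords` is an (n, 2) array of
    x/y positions. Default exponents are illustrative, not calibrated.
    """
    populations = np.asarray(populations, dtype=float)
    coords = np.asarray(coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))  # pairwise Euclidean distances
    np.fill_diagonal(dist, np.inf)            # no self-connection
    return k * np.outer(populations ** a, populations ** b) / dist ** c
```

The resulting matrix could feed the importation/latency machinery directly, and swapping in a radiation model would only change this one function.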
- "light" : How light is "light"?
- "agent" : Cohorts? Stochastic compartmental?
- "spatial" : How good are the individual community models? Good enough for non-spatial questions?
- dynamic properties (e.g. GPU flu simulation)
- ¿Ace/clorton-based state machines?
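One plausible reading of the state-machine question, sketched here purely as a strawman: each agent holds an integer state code, and each transition is a vectorized rule applied to the whole population per tick. All state names, timers, and transition rules below are invented for illustration.

```python
import numpy as np

SUS, EXP, INF, REC = 0, 1, 2, 3  # illustrative state codes

def tick(state, timer, rng, exposure_p=0.01, incubation=3, duration=5):
    """Advance every agent's state machine by one tick, vectorized."""
    # SUS -> EXP with some exposure probability (placeholder for a real FOI)
    newly = (state == SUS) & (rng.random(state.size) < exposure_p)
    state[newly] = EXP
    timer[newly] = incubation
    # Count down all active timers
    active = timer > 0
    timer[active] -= 1
    # EXP whose timer just expired become INF and start an infectious countdown
    to_inf = (state == EXP) & (timer == 0) & active
    state[to_inf] = INF
    timer[to_inf] = duration
    # INF whose timer just expired recover
    to_rec = (state == INF) & (timer == 0) & active
    state[to_rec] = REC
```

Because each transition is a masked array operation, the same table of rules could in principle be retargeted at Numba or CUDA kernels without changing the model description.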
Superficial simplicity isn’t the goal of design. Some things are, by nature, complex. In such cases, you should aim for clarity rather than “simplicity.” Users will be better served if you strive to make complex systems more understandable and learnable than simply simple.
The code in this repository was developed by IDM and other collaborators to support our joint research on flexible agent-based modeling. We've made it publicly available under the MIT License to provide others with a better understanding of our research and an opportunity to build upon it for their own work. We make no representations that the code works as intended or that we will provide support, address issues that are found, or accept pull requests. You are welcome to create your own fork and modify the code to suit your own modeling needs as permitted under the MIT License.