An open-source toolkit for large-scale genomic analyses
Explore the docs » · Issues · Mailing list · Slack
Glow is an open-source toolkit to enable bioinformatics at biobank-scale and beyond.
The toolkit includes building blocks to perform common analyses right away:
- Load VCF, BGEN, and Plink files into distributed DataFrames
- Perform quality control and data manipulation with built-in functions
- Perform variant normalization and liftOver
- Perform genome-wide association studies
- Integrate with Spark ML libraries for population stratification
- Parallelize command line tools to scale existing workflows
Glow makes genomic data work with Spark, the leading engine for large structured datasets. It fits natively into the ecosystem of tools that have enabled thousands of organizations to scale their workflows. Glow bridges the gap between bioinformatics and the Spark ecosystem.
Glow works with datasets in common file formats like VCF, BGEN, and Plink as well as high-performance big data standards. You can write queries using the native Spark SQL APIs in Python, SQL, R, Java, and Scala. The same APIs allow you to bring your genomic data together with other datasets such as electronic health records, real world evidence, and medical images. Glow makes it easy to parallelize existing tools and libraries implemented as command line tools or Pandas functions.
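For example, with PySpark, the glow.py package, and the Glow Spark artifact on the classpath, reading a VCF and running built-in quality-control functions is ordinary DataFrame code. The sketch below is illustrative only: the file paths, the annotation table, and its join keys are hypothetical, while `glow.register`, the `vcf` data source, and the `call_summary_stats` and `expand_struct` functions come from Glow.

```python
# Minimal sketch of the Python API (paths and the annotation table are hypothetical).
import glow
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark = glow.register(spark)  # register Glow's functions and data sources

# Load genotype data; "bgen" or "plink" can be used in place of "vcf"
genotypes = spark.read.format("vcf").load("/data/sample.vcf")

# Built-in QC: expand per-variant call summary statistics into columns
qc = genotypes.selectExpr(
    "contigName",
    "start",
    "referenceAllele",
    "expand_struct(call_summary_stats(genotypes))",
)

# Genomic data joins with other datasets like any Spark DataFrame
annotations = spark.read.parquet("/data/variant_annotations.parquet")  # hypothetical table
qc.join(annotations, ["contigName", "start"], "left").show()
```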
This project is built using sbt and Java 8.
To build and run Glow, you must install conda and activate the environment in `python/environment.yml`.
conda env create -f python/environment.yml
conda activate glow
When the environment file changes, you must update the environment:
conda env update -f python/environment.yml
Start an sbt shell using the `sbt` command.
Note: The SBT projects below are built on Spark 3.2.1 and Scala 2.12.8 by default. To change the Spark and Scala versions, set the environment variables `SPARK_VERSION` and `SCALA_VERSION`.
To compile the main code:
compile
To run all Scala tests:
core/test
To test a specific suite:
core/testOnly *VCFDataSourceSuite
To run all Python tests:
python/test
These tests will run with the same Spark classpath as the Scala tests.
To test a specific Python test file:
python/pytest python/test_render_template.py
When using the `pytest` key, all arguments are passed directly to the pytest runner.
To run documentation tests:
docs/test
To run the Scala, Python and documentation tests:
test
To run Scala tests against the staged Maven artifact with the current stable version:
stagedRelease/test
To test your changes on a Databricks cluster, build and install Python and Scala artifacts.
To build an uber jar (Glow + dependencies) with your changes:
sbt core/assembly
The uber jar will be at a path like `glow/core/target/${scala_version}/${artifact-name}-assembly-${version}-SNAPSHOT.jar`.
To build a wheel with the Python code:
- Activate the Glow dev conda environment (`conda activate glow`)
- `cd` into the `python` directory
- Run `python setup.py bdist_wheel`
The wheel file will be at a path like `python/dist/glow.py-${version}-py3-none-any.whl`.
You can then install these libraries on a Databricks cluster.
If you use IntelliJ, you'll want to:
- Download library and SBT sources; use SBT shell for imports and build from IntelliJ
- Set up scalafmt on save
To run Python unit tests from inside IntelliJ, you must:
- Open the "Terminal" tab in IntelliJ
- Activate the glow conda environment (`conda activate glow`)
- Start an sbt shell from inside the terminal (`sbt`)
The "sbt shell" tab in IntelliJ will NOT work since it does not use the glow conda environment.
To run `test` or `testOnly` in remote debug mode with IntelliJ IDEA, set the remote debug configuration in IntelliJ to 'Attach to remote JVM' mode with a specific port number (the default port 5005 is used here), then modify the definition of `options` in the `groupByHash` function in `build.sbt` to:
// A single withRunJVMOptions call keeps both the memory and debug-agent flags
val options = ForkOptions().withRunJVMOptions(Vector("-Xmx1024m", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"))