FITS Data Source for Apache Spark

Latest news

  • [01/2018] Launch: project starts!
  • [03/2018] Release: version 0.3.0
  • [04/2018] Paper: arXiv
  • [05/2018] Release: version 0.4.0
  • [06/2018] New location: spark-fits is an official project of AstroLab!
  • [07/2018] Release: versions 0.5.0 and 0.6.0
  • [10/2018] Release: versions 0.7.0 and 0.7.1
  • [12/2018] Release: version 0.7.2
  • [03/2019] Release: version 0.7.3
  • [05/2019] Release: versions 0.8.0, 0.8.1, 0.8.2
  • [06/2019] Release: version 0.8.3
  • [05/2020] Release: version 0.8.4
  • [07/2020] Release: version 0.9.0
  • [04/2021] Release: version 1.0.0

spark-fits

This library provides two tools to manipulate FITS data with Apache Spark:

  • A Spark connector for FITS files.
  • A Scala library to manipulate FITS files.

The user interface is designed to match the other built-in Spark data sources (CSV, JSON, Avro, Parquet, etc.). Note that spark-fits follows the Apache Spark Data Source V1 API (a migration to V2 is planned). See our website for more information. To include spark-fits in your job:

# Scala 2.11
spark-submit --packages "com.github.astrolabsoftware:spark-fits_2.11:1.0.0" <...>

# Scala 2.12
spark-submit --packages "com.github.astrolabsoftware:spark-fits_2.12:1.0.0" <...>

Alternatively, you can link against this library in your program at the following coordinates in your build.sbt:

// Scala 2.11
libraryDependencies += "com.github.astrolabsoftware" % "spark-fits_2.11" % "1.0.0"

// Scala 2.12
libraryDependencies += "com.github.astrolabsoftware" % "spark-fits_2.12" % "1.0.0"
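
For reference, here is a minimal Scala application sketch using the connector through the standard data source interface. The application name and file path are placeholders; the "fits" short format name and the mandatory "hdu" option are those exposed by spark-fits.

import org.apache.spark.sql.SparkSession

// Minimal sketch: read one HDU of a FITS file as a DataFrame.
object ReadFits extends App {
  // Standard Spark session; cluster configuration is left to spark-submit.
  val spark = SparkSession.builder()
    .appName("ReadFits")
    .getOrCreate()

  // Load the first extension (HDU 1) of a FITS file as a DataFrame.
  // The "hdu" option selects which HDU to read.
  val df = spark.read
    .format("fits")
    .option("hdu", 1)
    .load("hdfs://path/to/file.fits")

  df.show(5)
  spark.stop()
}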

Currently available (a short usage sketch follows this list):

  • Read FITS files and organize the HDU data into DataFrames.
  • Automatically distribute bintable rows over machines.
  • Automatically distribute image rows over machines.
  • Automatically infer the DataFrame schema from the HDU header.
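
As a sketch of what this looks like in practice on a bintable HDU: the column names below ("RA", "Dec") are illustrative, since the actual names and types are inferred from the HDU header.

// Read a bintable HDU; the path is a placeholder.
val df = spark.read.format("fits").option("hdu", 1).load("hdfs://path/to/catalog.fits")

// The DataFrame schema is inferred automatically from the HDU header.
df.printSchema()

// Rows are distributed over the cluster, so standard Spark operations apply.
df.select("RA", "Dec").filter("RA > 100.0").count()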

Header Challenge!

The headers tested so far are fairly simple and not very exotic. Over time, we plan to add many new features based on more complex examples (see here). If you use spark-fits and encounter errors while reading a header, let us know (via issues or a PR) so that we can fix the problem as soon as possible!

TODO list

  • Define custom Hadoop InputFile.
  • Migrate to Spark Data Source V2.

Support