Gaia is an open source automation platform which makes it easy and fun to build powerful pipelines in any programming language. Based on HashiCorp's go-plugin and gRPC, Gaia is efficient, fast, lightweight, and developer friendly. Gaia is currently alpha! Do not use it for mission-critical jobs yet!
Develop powerful pipelines with the help of SDKs and simply check your code into a git repository. Gaia automatically clones your code repository, compiles your code to a binary, and executes it on-demand. All results are streamed back and formatted as user-friendly graphical output.
Check out gaia-pipeline.io to learn more.
Automation Engineer, DevOps, SRE, Cloud Engineer, Platform Engineer - they all have one thing in common: the majority of tech people are not motivated to take up this work, and they are hard to recruit.
One of the main reasons for this is the abstraction and poor execution of many automation tools. They come with their own configuration specification (usually YAML) or limit the user to one specific programming language. Testing is nearly impossible because most automation tools lack the ability to mock services and subsystems. Even tiny tasks, such as parsing a JSON file, can be really painful because they depend on external, outdated libraries that are not included in the standard framework.
We believe it's time to remove all those abstractions and get back to our roots. Are you tired of writing endless lines of YAML? Are you sick of spending days forced to write in a language that does not suit you and is no fun at all? Do you enjoy programming in a language you like? Then Gaia is for you.
Gaia is based on HashiCorp's go-plugin. It's a plugin system that uses gRPC to communicate over HTTP/2. Initially, HashiCorp developed this tool for Packer but now it's heavily used by Terraform, Nomad, and Vault too.
Plugins, which we call pipelines, are applications that can be written in any programming language, as long as gRPC is supported. All functions, which we call jobs, are exposed to Gaia and can form a dependency graph that describes the order of execution.
Pipelines can be compiled locally or by Gaia's build system. Gaia clones the git repository and automatically builds the included pipeline. If a change is pushed (git push), Gaia automatically rebuilds the pipeline for you*.
After a pipeline has been started, all log output is streamed back to Gaia and displayed in a detailed overview, together with the final result status.
Gaia uses BoltDB for storage, which makes installation very easy: no external database is currently required.
* This requires polling or webhook to be activated.
Installing Gaia is simple and usually takes only a few minutes.
The following command starts Gaia as a daemon process and mounts all data into the current folder. Afterwards, Gaia will be available on the host system on port 8080. Use the default user admin with password admin for the initial login. It is recommended to change the password afterwards.
docker run -d -p 8080:8080 -v $PWD:/data gaiapipeline/gaia:latest
This uses the image with the latest tag, which includes all required libraries and compilers for all supported languages. If you prefer a smaller image suited to your preferred language, have a look at the available Docker image tags.
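For example, a slimmer image that only ships the toolchain for a single language can be selected via its tag. The tag below is only an illustration; the actual tag names are listed on Docker Hub:
docker run -d -p 8080:8080 -v $PWD:/data gaiapipeline/gaia:latest-golang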
It is possible to install Gaia directly on the host system. This can be achieved by downloading the binary from the releases page.
Gaia will automatically detect the folder of the binary and will place all data next to it. You can change the data directory with the startup parameter -home-path if you want.
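For example, assuming a Linux binary named gaia-linux-amd64 downloaded from the releases page (the exact file name depends on your platform and the release), Gaia could be started with a custom data directory like this:
./gaia-linux-amd64 -home-path=/var/lib/gaia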
If you don't have an ingress controller pod yet, make sure that kube-dns or coredns is enabled and run this command to set it up:
make kube-ingress
To init helm:
helm init
To deploy gaia:
make deploy-kube
package main
import (
"log"
sdk "github.com/gaia-pipeline/gosdk"
)
// This is one job. Add more if you want.
func DoSomethingAwesome(args sdk.Arguments) error {
log.Println("This output will be streamed back to gaia and will be displayed in the pipeline logs.")
// An error occurred? Return it back so gaia knows that this job failed.
return nil
}
func main() {
jobs := sdk.Jobs{
sdk.Job{
Handler: DoSomethingAwesome,
Title: "DoSomethingAwesome",
Description: "This job does something awesome.",
},
}
// Serve
if err := sdk.Serve(jobs); err != nil {
panic(err)
}
}
from gaiasdk import sdk
import logging
def MyAwesomeJob(args):
logging.info("This output will be streamed back to gaia and will be displayed in the pipeline logs.")
# Just raise an exception to tell Gaia if a job failed.
# raise Exception("Oh no, this job failed!")
def main():
logging.basicConfig(level=logging.INFO)
myjob = sdk.Job("MyAwesomeJob", "Do something awesome", MyAwesomeJob)
sdk.serve([myjob])
package io.gaiapipeline;
import io.gaiapipeline.javasdk.*;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.logging.Logger;
public class Pipeline
{
private static final Logger LOGGER = Logger.getLogger(Pipeline.class.getName());
private static Handler MyAwesomeJob = (gaiaArgs) -> {
LOGGER.info("This output will be streamed back to gaia and will be displayed in the pipeline logs.");
// Just raise an exception to tell Gaia if a job failed.
// throw new IllegalArgumentException("Oh no, this job failed!");
};
public static void main( String[] args )
{
PipelineJob myjob = new PipelineJob();
myjob.setTitle("MyAwesomeJob");
myjob.setDescription("Do something awesome.");
myjob.setHandler(MyAwesomeJob);
Javasdk sdk = new Javasdk();
try {
sdk.Serve(new ArrayList<>(Arrays.asList(myjob)));
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
#include "cppsdk/sdk.h"
#include <list>
#include <iostream>
void DoSomethingAwesome(std::list<gaia::argument> args) throw(std::string) {
std::cerr << "This output will be streamed back to gaia and will be displayed in the pipeline logs." << std::endl;
// An error occurred? Return it back so gaia knows that this job failed.
// throw "Uhh something badly happened!"
}
int main() {
std::list<gaia::job> jobs;
gaia::job awesomejob;
awesomejob.handler = &DoSomethingAwesome;
awesomejob.title = "DoSomethingAwesome";
awesomejob.description = "This job does something awesome.";
jobs.push_back(awesomejob);
try {
gaia::Serve(jobs);
} catch (std::string e) {
std::cerr << "Error: " << e << std::endl;
}
}
require 'rubysdk'
class Main
AwesomeJob = lambda do |args|
STDERR.puts "This output will be streamed back to gaia and will be displayed in the pipeline logs."
# An error occurred? Raise an exception and gaia will fail the pipeline.
# raise "Oh gosh! Something went wrong!"
end
def self.main
awesomejob = Interface::Job.new(title: "Awesome Job",
handler: AwesomeJob,
desc: "This job does something awesome.")
begin
RubySDK.Serve([awesomejob])
rescue => e
puts "Error occured: #{e}"
exit(false)
end
end
end
const nodesdk = require('@gaia-pipeline/nodesdk');
function DoSomethingAwesome(args) {
console.error('This output will be streamed back to gaia and will be displayed in the pipeline logs.');
// An error occurred? Throw it back so gaia knows that this job failed.
// throw new Error('My error message');
}
// Serve
try {
nodesdk.Serve([{
handler: DoSomethingAwesome,
title: 'DoSomethingAwesome',
description: 'This job does something awesome.'
}]);
} catch (err) {
console.error(err);
}
Pipelines are defined by jobs, and a function usually represents a single job. You can define as many jobs in your pipeline as you want.
Every job function accepts arguments. These arguments can be requested by the pipeline itself, and their values are passed in from the UI.
Some pipeline jobs need a specific order of execution. DependsOn allows you to declare dependencies for every job.
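As a rough sketch of how this looks with the Go SDK, the example below declares an argument and a job dependency. Note that the Args and DependsOn fields, the sdk.Argument struct, and the sdk.TextFieldInp constant reflect our understanding of the Go SDK and may differ between SDK versions.
package main

import (
	"log"

	sdk "github.com/gaia-pipeline/gosdk"
)

// CreateUser reads the "username" argument that was requested from the UI.
func CreateUser(args sdk.Arguments) error {
	for _, arg := range args {
		if arg.Key == "username" {
			log.Printf("creating user %s", arg.Value)
		}
	}
	// Return an error if the job should be marked as failed.
	return nil
}

// NotifyTeam runs only after CreateUser has finished (see DependsOn below).
func NotifyTeam(args sdk.Arguments) error {
	log.Println("user created, notifying the team")
	return nil
}

func main() {
	jobs := sdk.Jobs{
		sdk.Job{
			Handler:     CreateUser,
			Title:       "CreateUser",
			Description: "Creates a user.",
			// Assumed field name: Args declares the arguments this job
			// requests; their values are filled in from the UI.
			Args: sdk.Arguments{
				sdk.Argument{
					Description: "Username to create:",
					Key:         "username",
					Type:        sdk.TextFieldInp,
				},
			},
		},
		sdk.Job{
			Handler:     NotifyTeam,
			Title:       "NotifyTeam",
			Description: "Notifies the team about the new user.",
			// DependsOn declares the dependency graph: this job is executed
			// after the job titled "CreateUser" has completed successfully.
			DependsOn: []string{"CreateUser"},
		},
	}

	if err := sdk.Serve(jobs); err != nil {
		panic(err)
	}
}
When the pipeline is started from the UI, Gaia asks for the requested argument values and passes them to the handler, and NotifyTeam is only executed after CreateUser has finished.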
You can find real examples and more information on how to develop a pipeline in the docs.
See the Documentation located here: security-docs.
Please find the docs at https://docs.gaia-pipeline.io. We also have a tutorials section over there with examples and real use-case scenarios. For example, Kubernetes deployment with vault integration.
Nearly every tool designed for automation, continuous integration (CI), and continuous deployment (CD), such as Spinnaker, Jenkins, GitLab CI/CD, Travis CI, CircleCI, Codeship, Bamboo, and many more, has introduced its own configuration format. Some of them don't even support configuration/automation as code. This works well for simple tasks like running a go install or mvn clean install, but in the real world there is more to do.
Gaia is the first platform that does not limit the user and provides full support for almost all common programming languages, without losing the features offered by today's CI/CD tools.
A pipeline is a real application with at least one function (we call it a Job). Every programming language can be used as long as gRPC is supported. We offer SDKs to support the development.
A job is a function, usually globally exposed to Gaia. Depending on the dependency graph, Gaia executes these functions in a specific order.
The SDK implements the Gaia plugin gRPC interface and offers helper functions, for example serving the gRPC server. This lets you focus on the real problem instead of the boring plumbing.
We currently fully support Go, Java, Python, C++, Ruby, and Node.js.
We are working hard to support as many programming languages as possible, but our resources are limited and we are not experts in every programming language. If you are willing to contribute, feel free to open an issue and start working.
Gaia is currently available as an alpha version. We strongly recommend not using it for mission-critical jobs or in production yet. Things will change in the future and essential features may break.
One of the main issues currently is the lack of unit and integration tests. This is on our to-do list and we are working on it with high priority.
We plan to support more programming languages over the next few months. It is up to the community which languages will be supported next.
Gaia can only evolve and become a great product with the help of contributors. If you would like to contribute, please have a look at our issues section. We do our best to mark issues suitable for new contributors with the label good first issue.
If you think you found a good first issue, please consider this list as a short guide:
- If the issue is clear and you have no questions, please leave a short comment saying that you have started working on it. The issue will usually be reserved for you for two weeks.
- If something is not clear or you are unsure what to do, please leave a comment so we can add a more detailed description.
- Make sure your development environment is configured and set up. You need Go installed on your machine as well as Node.js for the frontend. Clone this repository and run the make command inside the cloned folder; this starts the backend. To start the frontend, open a new terminal window, change into the frontend folder, and run npm install followed by npm run serve. This should automatically open a new browser window. The commands are summarized after this list.
- Before you start your work, you should fork this repository and push changes to your fork. Afterwards, send a merge request back to upstream.
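As a quick reference, the setup steps described above boil down to the following commands (run the frontend commands in a second terminal, from the root of the cloned repository):
make
cd frontend
npm install
npm run serve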
If you have any questions feel free to contact us on slack.