This repository contains examples of Lithops applications across different domains.
We have gathered and characterized a comprehensive collection of serverless pipelines implemented with Lithops (or related serverless projects, such as Crucial). You can find the complete listing and their code references here.
AWS is the simplest cloud provider on which to test the applications. We provide a set of publicly available datasets that can easily be imported into your own buckets. You only need the AWS CLI installed and configured.
Once the AWS CLI is set up, execute import_datasets_aws.sh to import the test inputs into your AWS S3 bucket or a local directory. The source datasets use "Requester Pays" billing, so the client is billed for the data downloaded.
```shell
chmod +x import_datasets_aws.sh
./import_datasets_aws.sh MY_BUCKET

# Examples:
# ./import_datasets_aws.sh s3://bucket-name        -> import to an AWS S3 bucket.
# ./import_datasets_aws.sh /home/user/lithops-data -> import to a local directory.
```
You need to configure Lithops with your own AWS credentials. Follow the configuration guide for the aws_lambda and aws_s3 backends.
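As a reference, a Lithops configuration using the AWS Lambda compute backend and the AWS S3 storage backend typically looks like the sketch below, placed in `~/.lithops/config`. All values here are placeholders; consult the official configuration guide for the exact keys supported by your Lithops version.

```yaml
lithops:
    backend: aws_lambda
    storage: aws_s3

aws:
    access_key_id: <YOUR_ACCESS_KEY_ID>        # placeholder credentials
    secret_access_key: <YOUR_SECRET_ACCESS_KEY>
    region: us-east-1                          # pick your region

aws_lambda:
    execution_role: arn:aws:iam::<ACCOUNT_ID>:role/<LAMBDA_ROLE>  # IAM role assumed by Lambda

aws_s3:
    storage_bucket: <BUCKET_NAME>              # bucket Lithops uses for runtime data
```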
We provide a public AMI to run Lithops data analytics pipelines out of the box on AWS. Please refer to the demo.