4 changes: 3 additions & 1 deletion CHANGELOG.md
@@ -6,7 +6,9 @@
-

### Changed
-
- [AWS Lambda] Eliminate the need for access and secret keys in the configuration
- [AWS Batch] Eliminate the need for access and secret keys in the configuration
- [AWS S3] Eliminate the need for access and secret keys in the configuration

### Fixed
- [AWS Lambda] Fixed runtime deletion with "lithops runtime delete"
29 changes: 12 additions & 17 deletions docs/index.rst
@@ -1,12 +1,6 @@
What is Lithops?
****************

.. image:: source/images/lithops_logo_readme.png
:alt: Lithops
:align: center

|

**Lithops is a Python multi-cloud serverless computing framework. It allows you to run unmodified local Python code at massive scale on the main serverless computing platforms.**

Lithops delivers the user’s code into the cloud without requiring knowledge of how it is deployed and run.
@@ -28,6 +22,18 @@ analytics, to name a few.
Lithops abstracts away the underlying cloud-specific APIs for accessing storage and provides an intuitive, easy-to-use interface to process high volumes of data.


Use any Cloud
*************
**Lithops provides an extensible backend architecture that is designed to work with different compute and storage services available from cloud providers and on-premises backends.**

In this sense, you can code your application in Python and run it unmodified wherever your data is located: IBM Cloud, AWS, Azure, Google Cloud, Alibaba Aliyun, and more.

.. image:: source/images/multicloud.jpg
:alt: Available backends
:align: center

|
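
As a hedged illustration of this flexibility (the ``backend`` and ``storage`` parameters follow the public ``FunctionExecutor`` API; the backend identifiers below are examples), the same user code can be pointed at another cloud just by selecting a different pair of backends:

.. code-block:: python

    import lithops

    def double(x):
        return x * 2

    # Only the backend selection changes; the user code stays the same.
    # 'aws_lambda' / 'aws_s3' are illustrative backend identifiers.
    fexec = lithops.FunctionExecutor(backend='aws_lambda', storage='aws_s3')
    futures = fexec.map(double, [1, 2, 3, 4])
    print(fexec.get_result(futures))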

Quick Start
***********

@@ -50,17 +56,6 @@ You're ready to execute a simple example!
fut = fexec.call_async(hello, 'World')
print(fut.result())
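
The surrounding lines of this snippet are elided by the diff view; a minimal, self-contained sketch of the example, assuming the default configured backend, might look like:

.. code-block:: python

    import lithops

    def hello(name):
        return f'Hello {name}!'

    fexec = lithops.FunctionExecutor()
    fut = fexec.call_async(hello, 'World')
    print(fut.result())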

Use any Cloud
*************
**Lithops provides an extensible backend architecture that is designed to work with different compute and storage services available from cloud providers and on-premises backends.**

In this sense, you can code your application in Python and run it unmodified wherever your data is located: IBM Cloud, AWS, Azure, Google Cloud, Alibaba Aliyun, and more.

.. image:: source/images/multicloud.jpg
:alt: Available backends
:align: center

|

Additional resources
********************
1 change: 0 additions & 1 deletion docs/source/compute_backends.rst
@@ -19,7 +19,6 @@ Compute Backends
compute_config/oracle_functions.md
compute_config/aliyun_functions.md
compute_config/openwhisk.md
compute_config/ibm_cf.md

**Serverless (CaaS) Backends:**

4 changes: 2 additions & 2 deletions docs/source/compute_config/aws_batch.md
@@ -51,8 +51,8 @@ aws_batch:
|Group|Key|Default|Mandatory|Additional info|
|---|---|---|---|---|
|aws | region | |yes | AWS region name. For example `us-east-1` |
|aws | access_key_id | |yes | Account access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | secret_access_key | |yes | Account secret access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | access_key_id | |no | Account access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | secret_access_key | |no | Account secret access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | session_token | |no | Session token for temporary AWS credentials |
|aws | account_id | |no | If present, this value is used as the account ID instead of retrieving it via AWS STS. The account ID is used to format full image names for container runtimes. |
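
Since `access_key_id` and `secret_access_key` are now optional, a configuration can rely entirely on boto3's default credential chain. A minimal sketch passed as a Python dict (the mandatory `aws_batch` keys, such as the execution role, are omitted here for brevity and would still be required):

```python
import lithops

# Assumption: credentials are resolved by boto3's default chain
# (environment variables, ~/.aws/credentials, or an attached IAM role).
config = {
    'lithops': {'backend': 'aws_batch', 'storage': 'aws_s3'},
    'aws': {'region': 'us-east-1'}  # no access_key_id / secret_access_key
}

fexec = lithops.FunctionExecutor(config=config)
```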

4 changes: 2 additions & 2 deletions docs/source/compute_config/aws_lambda.md
@@ -59,8 +59,8 @@ aws_lambda:
|Group|Key|Default|Mandatory|Additional info|
|---|---|---|---|---|
|aws | region | |yes | AWS Region. For example `us-east-1` |
|aws | access_key_id | |yes | Account access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | secret_access_key | |yes | Account secret access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | access_key_id | |no | Account access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | secret_access_key | |no | Account secret access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | session_token | |no | Session token for temporary AWS credentials |
|aws | account_id | |no | If present, this value is used as the account ID instead of retrieving it via AWS STS. The account ID is used to format full image names for container runtimes. |
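
When temporary credentials are used (for example, from an assumed role or an SSO session), `session_token` is supplied alongside the key pair. A hedged sketch with placeholder values:

```python
import lithops

config = {
    'lithops': {'backend': 'aws_lambda', 'storage': 'aws_s3'},
    'aws': {
        'region': 'us-east-1',
        'access_key_id': 'ASIA...',     # placeholder temporary access key
        'secret_access_key': '...',     # placeholder secret
        'session_token': '...'          # required with temporary credentials
    }
}

fexec = lithops.FunctionExecutor(config=config)
```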

2 changes: 1 addition & 1 deletion docs/source/compute_config/azure_vms.md
@@ -1,4 +1,4 @@
# Azure Virtual Machines (Beta)
# Azure Virtual Machines

The Azure Virtual Machines client of Lithops can provide a truly serverless user experience on top of Azure VMs, where Lithops dynamically creates new Virtual Machines (VMs) at runtime and scales Lithops jobs against them. Alternatively, Lithops can start and stop existing VM instances.

2 changes: 1 addition & 1 deletion docs/source/compute_config/oracle_functions.md
@@ -1,4 +1,4 @@
# Oracle Functions (beta)
# Oracle Functions

Lithops with *Oracle Functions* as serverless compute backend.

4 changes: 2 additions & 2 deletions docs/source/storage_config/aws_s3.md
@@ -37,8 +37,8 @@ Lithops with AWS S3 as storage backend.
|Group|Key|Default|Mandatory|Additional info|
|---|---|---|---|---|
|aws | region | |yes | AWS Region. For example `us-east-1` |
|aws | access_key_id | |yes | Account access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | secret_access_key | |yes | Account secret access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | access_key_id | |no | Account access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | secret_access_key | |no | Account secret access key to AWS services. To find them, navigate to *My Security Credentials* and click *Create Access Key* if you don't already have one. |
|aws | session_token | |no | Session token for temporary AWS credentials |
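
The same applies on the storage side: with the keys omitted, the S3 client falls back to boto3's default credential chain. A hedged sketch using the Lithops `Storage` API (the bucket name is illustrative):

```python
from lithops import Storage

# Assumption: the region is taken from the 'aws' section of the Lithops
# config and credentials from the boto3 default chain.
storage = Storage(backend='aws_s3')
storage.put_object(bucket='my-lithops-bucket', key='hello.txt', body=b'Hello!')
print(storage.get_object(bucket='my-lithops-bucket', key='hello.txt'))
```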

### Summary of configuration keys for AWS S3:
38 changes: 24 additions & 14 deletions lithops/serverless/backends/aws_batch/aws_batch.py
@@ -45,34 +45,44 @@ def __init__(self, aws_batch_config, internal_storage):
self.name = 'aws_batch'
self.type = utils.BackendType.BATCH.value
self.aws_batch_config = aws_batch_config

self.user_key = aws_batch_config['access_key_id'][-4:]
self.package = f'lithops_v{__version__.replace(".", "-")}_{self.user_key}'
self.region_name = aws_batch_config['region']
self.region = aws_batch_config['region']
self.namespace = aws_batch_config.get('namespace')

self._env_type = self.aws_batch_config['env_type']
self._queue_name = f'{self.package}_{self._env_type.replace("_", "-")}_queue'
self._compute_env_name = f'{self.package}_{self._env_type.replace("_", "-")}_env'

logger.debug('Creating Boto3 AWS Session and Batch Client')
self.aws_session = boto3.Session(aws_access_key_id=aws_batch_config['access_key_id'],
aws_secret_access_key=aws_batch_config['secret_access_key'],
aws_session_token=aws_batch_config.get('session_token'),
region_name=self.region_name)
self.batch_client = self.aws_session.client('batch', region_name=self.region_name)
self.aws_session = boto3.Session(
aws_access_key_id=aws_batch_config.get('access_key_id'),
aws_secret_access_key=aws_batch_config.get('secret_access_key'),
aws_session_token=aws_batch_config.get('session_token'),
region_name=self.region
)
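# When these keys are missing, .get() returns None and boto3 falls back to its
# default credential chain (environment variables, shared credentials file,
# or an attached IAM role).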
self.batch_client = self.aws_session.client('batch', region_name=self.region)

self.internal_storage = internal_storage

if 'account_id' in self.aws_batch_config:
self.account_id = self.aws_batch_config['account_id']
else:
sts_client = self.aws_session.client('sts', region_name=self.region_name)
sts_client = self.aws_session.client('sts', region_name=self.region)
self.account_id = sts_client.get_caller_identity()["Account"]

self.ecr_client = self.aws_session.client('ecr', region_name=self.region_name)
sts_client = self.aws_session.client('sts', region_name=self.region)
caller_id = sts_client.get_caller_identity()

if ":" in caller_id["UserId"]: # SSO user
self.user_key = caller_id["UserId"].split(":")[1]
else: # IAM user
self.user_key = caller_id["UserId"][-4:].lower()
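# Illustrative, not part of this change: GetCallerIdentity returns a UserId such
# as 'AIDA...' for IAM users or '<role-id>:<session-name>' for assumed-role/SSO
# sessions, hence the split on ':' above.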

self.ecr_client = self.aws_session.client('ecr', region_name=self.region)
package = f'lithops_v{__version__.replace(".", "")}_{self.user_key}'
self.package = f"{package}_{self.namespace}" if self.namespace else package

msg = COMPUTE_CLI_MSG.format('AWS Batch')
logger.info("{} - Region: {}".format(msg, self.region_name))
logger.info(f"{msg} - Region: {self.region}")

def _get_default_runtime_image_name(self):
python_version = utils.CURRENT_PY_VERSION.replace('.', '')
@@ -81,7 +91,7 @@ def _get_default_runtime_image_name(self):

def _get_full_image_name(self, runtime_name):
full_image_name = runtime_name if ':' in runtime_name else f'{runtime_name}:latest'
registry = f'{self.account_id}.dkr.ecr.{self.region_name}.amazonaws.com'
registry = f'{self.account_id}.dkr.ecr.{self.region}.amazonaws.com'
full_image_name = '/'.join([registry, self.package.replace('-', '.'), full_image_name]).lower()
repo_name = full_image_name.split('/', 1)[1:].pop().split(':')[0]
return full_image_name, registry, repo_name
@@ -585,7 +595,7 @@ def invoke(self, runtime_name, runtime_memory, payload):

def get_runtime_key(self, runtime_name, runtime_memory, version=__version__):
jobdef_name = self._format_jobdef_name(runtime_name, runtime_memory, version)
runtime_key = os.path.join(self.name, version, self.region_name, jobdef_name)
runtime_key = os.path.join(self.name, version, self.region, jobdef_name)
return runtime_key

def get_runtime_info(self):
3 changes: 0 additions & 3 deletions lithops/serverless/backends/aws_batch/config.py
@@ -76,9 +76,6 @@ def load_config(config_data):
if 'aws' not in config_data:
raise Exception("'aws' section is mandatory in the configuration")

if not {'access_key_id', 'secret_access_key'}.issubset(set(config_data['aws'])):
raise Exception("'access_key_id' and 'secret_access_key' are mandatory under the 'aws' section of the configuration")

if not config_data['aws_batch']:
raise Exception("'aws_batch' section is mandatory in the configuration")

30 changes: 15 additions & 15 deletions lithops/serverless/backends/aws_lambda/aws_lambda.py
@@ -57,53 +57,53 @@ def __init__(self, lambda_config, internal_storage):
self.lambda_config = lambda_config
self.internal_storage = internal_storage
self.user_agent = lambda_config['user_agent']
self.region_name = lambda_config['region']
self.region = lambda_config['region']
self.role_arn = lambda_config['execution_role']
self.namespace = lambda_config.get('namespace')

logger.debug('Creating Boto3 AWS Session and Lambda Client')

self.aws_session = boto3.Session(
aws_access_key_id=lambda_config['access_key_id'],
aws_secret_access_key=lambda_config['secret_access_key'],
aws_access_key_id=lambda_config.get('access_key_id'),
aws_secret_access_key=lambda_config.get('secret_access_key'),
aws_session_token=lambda_config.get('session_token'),
region_name=self.region_name
region_name=self.region
)

self.lambda_client = self.aws_session.client(
'lambda', region_name=self.region_name,
'lambda', region_name=self.region,
config=botocore.client.Config(
user_agent_extra=self.user_agent
)
)

self.credentials = self.aws_session.get_credentials()
self.session = URLLib3Session()
self.host = f'lambda.{self.region_name}.amazonaws.com'
self.host = f'lambda.{self.region}.amazonaws.com'

if 'account_id' in self.lambda_config:
self.account_id = self.lambda_config['account_id']
else:
sts_client = self.aws_session.client('sts', region_name=self.region_name)
sts_client = self.aws_session.client('sts', region_name=self.region)
self.account_id = sts_client.get_caller_identity()["Account"]

sts_client = self.aws_session.client('sts', region_name=self.region_name)
sts_client = self.aws_session.client('sts', region_name=self.region)
caller_id = sts_client.get_caller_identity()

if ":" in caller_id["UserId"]: # SSO user
self.user_key = caller_id["UserId"].split(":")[1]
else: # IAM user
self.user_key = caller_id["UserId"][-4:].lower()

self.ecr_client = self.aws_session.client('ecr', region_name=self.region_name)
self.ecr_client = self.aws_session.client('ecr', region_name=self.region)
package = f'lithops_v{__version__.replace(".", "")}_{self.user_key}'
self.package = f"{package}_{self.namespace}" if self.namespace else package
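# e.g. version '3.1.0' and user_key 'ab12' give 'lithops_v310_ab12', plus
# '_<namespace>' when a namespace is configured (values illustrative).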

msg = COMPUTE_CLI_MSG.format('AWS Lambda')
if self.namespace:
logger.info(f"{msg} - Region: {self.region_name} - Namespace: {self.namespace}")
logger.info(f"{msg} - Region: {self.region} - Namespace: {self.namespace}")
else:
logger.info(f"{msg} - Region: {self.region_name}")
logger.info(f"{msg} - Region: {self.region}")

def _format_function_name(self, runtime_name, runtime_memory, version=__version__):
name = f'{runtime_name}-{runtime_memory}-{version}'
@@ -357,7 +357,7 @@ def build_runtime(self, runtime_name, runtime_file, extra_args=[]):
finally:
os.remove(LITHOPS_FUNCTION_ZIP)

registry = f'{self.account_id}.dkr.ecr.{self.region_name}.amazonaws.com'
registry = f'{self.account_id}.dkr.ecr.{self.region}.amazonaws.com'

res = self.ecr_client.get_authorization_token()
if res['ResponseMetadata']['HTTPStatusCode'] != 200:
@@ -474,7 +474,7 @@ def _deploy_container_runtime(self, runtime_name, memory, timeout):
except botocore.exceptions.ClientError:
raise Exception(f'Runtime "{runtime_name}" is not deployed to ECR')

registry = f'{self.account_id}.dkr.ecr.{self.region_name}.amazonaws.com'
registry = f'{self.account_id}.dkr.ecr.{self.region}.amazonaws.com'
image_uri = f'{registry}/{repo_name}@{image_digest}'

env_vars = {t['name']: t['value'] for t in self.lambda_config['env_vars']}
@@ -628,7 +628,7 @@ def invoke(self, runtime_name, runtime_memory, payload):
headers = {'Host': self.host, 'X-Amz-Invocation-Type': 'Event', 'User-Agent': self.user_agent}
url = f'https://{self.host}/2015-03-31/functions/{function_name}/invocations'
request = AWSRequest(method="POST", url=url, data=json.dumps(payload, default=str), headers=headers)
SigV4Auth(self.credentials, "lambda", self.region_name).add_auth(request)
SigV4Auth(self.credentials, "lambda", self.region).add_auth(request)

invoked = False
while not invoked:
@@ -674,7 +674,7 @@ def get_runtime_key(self, runtime_name, runtime_memory, version=__version__):
in order to know which runtimes are installed and which not.
"""
action_name = self._format_function_name(runtime_name, runtime_memory, version)
runtime_key = os.path.join(self.name, version, self.region_name, action_name)
runtime_key = os.path.join(self.name, version, self.region, action_name)

return runtime_key

3 changes: 0 additions & 3 deletions lithops/serverless/backends/aws_lambda/config.py
@@ -70,9 +70,6 @@ def load_config(config_data):
if 'aws' not in config_data:
raise Exception("'aws' section is mandatory in the configuration")

if not {'access_key_id', 'secret_access_key'}.issubset(set(config_data['aws'])):
raise Exception("'access_key_id' and 'secret_access_key' are mandatory under the 'aws' section of the configuration")

if not config_data['aws_lambda']:
raise Exception("'aws_lambda' section is mandatory in the configuration")

9 changes: 9 additions & 0 deletions lithops/storage/backends/aliyun_oss/aliyun_oss.py
@@ -59,6 +59,15 @@ def _connect_bucket(self, bucket_name):
def get_client(self):
return self

def generate_bucket_name(self):
"""
Generates a unique bucket name
"""
key = self.config['access_key_id']
self.config['storage_bucket'] = f'lithops-{self.region}-{key[:6].lower()}'
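# Illustrative only: region 'cn-hangzhou' and an access key starting with
# 'LTAI4F' would yield 'lithops-cn-hangzhou-ltai4f'.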

return self.config['storage_bucket']

def create_bucket(self, bucket_name):
"""
Create a bucket if it doesn't exist
7 changes: 0 additions & 7 deletions lithops/storage/backends/aliyun_oss/config.py
@@ -15,7 +15,6 @@
#

import copy
import hashlib


CONNECTION_POOL_SIZE = 300
@@ -48,9 +47,3 @@ def load_config(config_data=None):
region = config_data['aliyun_oss']['region']
config_data['aliyun_oss']['public_endpoint'] = PUBLIC_ENDPOINT.format(region)
config_data['aliyun_oss']['internal_endpoint'] = INTERNAL_ENDPOINT.format(region)

if 'storage_bucket' not in config_data['aliyun_oss']:
ossc = config_data['aliyun_oss']
key = ossc['access_key_id']
endpoint = hashlib.sha1(ossc['public_endpoint'].encode()).hexdigest()[:6]
config_data['aliyun_oss']['storage_bucket'] = f'lithops-{endpoint}-{key[:6].lower()}'