Nick Buker
Nordata is a small collection of utility functions for accessing AWS S3 and AWS Redshift. It was written by a data scientist on the Nordstrom Analytics Team. The goal of Nordata is to be a simple, robust package that eases the data workflow. It is not intended to handle every possible need (for example, credential management is largely left to the user), but it is designed to streamline common tasks.
Redshift:
- Importing nordata Redshift functions
- Reading a SQL script into Python as a string
- Executing a SQL query that does not return data
- Executing a SQL query that returns data
- Executing a SQL query that returns data for pandas
- Creating a connection object (experienced users)
S3:
- Importing S3 functions
- Downloading a single file from S3
- Downloading with a profile name
- Downloading a list of files from S3
- Downloading files matching a pattern from S3
- Downloading all files in a directory from S3
- Uploading a single file to S3
- Uploading with a profile name
- Uploading a list of files to S3
- Uploading files matching a pattern to S3
- Uploading all files in a directory to S3
- Deleting a single file in S3
- Deleting with a profile name
- Deleting a list of files in S3
- Deleting files matching a pattern in S3
- Deleting all files in a directory in S3
- Creating a bucket object (experienced users)
Boto3 (experienced users):
Transferring data between Redshift and S3:
Nordata can be installed via pip. As always, use of a project-level virtual environment is recommended.
Nordata requires Python >= 3.6.
$ pip install nordata
Nordata is designed to ingest your Redshift credentials as an environment variable in the format below. This method allows the user freedom to handle credentials in a number of ways. As always, best practices are advised. Your credentials should never be placed in the code of your project, such as in a Dockerfile or .env file. Instead, you may wish to place them in your .bash_profile locally or take advantage of a key management service such as the one offered by AWS.
'host=my_hostname dbname=my_dbname user=my_user password=my_password port=1234'
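As a minimal sketch of making that string visible to Python during local development (assuming the environment variable name REDSHIFT_CREDS used in the examples below), you can export it from your shell profile or, for a throwaway session, set it in-process:
import os

# Illustrative only: prefer exporting REDSHIFT_CREDS from .bash_profile or a
# key management service rather than hard-coding credentials in source code.
os.environ['REDSHIFT_CREDS'] = 'host=my_hostname dbname=my_dbname user=my_user password=my_password port=1234'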
If the user is running locally, their home directory should contain a .aws/ directory with a credentials file. The credentials file should look similar to the example below, where the profile name is in brackets. Note that the specific values and region may vary. If the user is running on an EC2 instance, permission to access S3 is handled by the IAM role for the instance.
[default]
aws_access_key_id=MYAWSACCESSKEY
aws_secret_access_key=MYAWSSECRETACCESS
aws_session_token="long_string_of_random_characters=="
aws_security_token="another_string_of_random_characters=="
region=us-west-2
Note the profile name in brackets. If the profile name differs in your credentials file, you will likely need to pass this profile name to the S3 functions as an argument.
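As a quick, illustrative sanity check (plain boto3, not part of Nordata), you can confirm that the profile in your credentials file resolves; 'my-profile-name' is a placeholder:
import boto3

# A missing or misspelled profile surfaces as a botocore ProfileNotFound error.
session = boto3.Session(profile_name='my-profile-name')
print(session.get_credentials() is not None)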
Importing nordata Redshift functions:
from nordata import read_sql, redshift_execute_sql, redshift_get_conn
Reading a SQL script into Python as a string:
sql = read_sql(sql_filename='../sql/my_script.sql')
Executing a SQL query that does not return data:
redshift_execute_sql(
    sql=sql,
    env_var='REDSHIFT_CREDS',
    return_data=False,
    return_dict=False)
Executing a SQL query that returns data as a list of tuples and column names as a list of strings:
data, columns = redshift_execute_sql(
    sql=sql,
    env_var='REDSHIFT_CREDS',
    return_data=True,
    return_dict=False)
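Since data is a list of tuples and columns is a list of strings, a common follow-up (ordinary pandas, not part of Nordata) is to build a DataFrame from them directly:
import pandas as pd

# data: list of row tuples; columns: list of column-name strings
df = pd.DataFrame(data, columns=columns)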
Executing a SQL query that returns data as a dict for easy ingestion into a pandas DataFrame:
import pandas as pd
df = pd.DataFrame(**redshift_execute_sql(
    sql=sql,
    env_var='REDSHIFT_CREDS',
    return_data=True,
    return_dict=True))
Creating a connection object that can be manipulated directly by experienced users:
conn = redshift_get_conn(env_var='REDSHIFT_CREDS')
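A brief sketch of using the connection manually, assuming it behaves like a standard Python DB-API connection (e.g., psycopg2); remember to close it when finished:
# Assumes a DB-API style connection; illustrative, not Nordata-specific API.
cursor = conn.cursor()
cursor.execute('select count(*) from my_schema.my_table;')
rows = cursor.fetchall()
cursor.close()
conn.close()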
Importing S3 functions:
from nordata import s3_download, s3_upload, s3_delete, create_session, s3_get_bucket
Downloading a single file from S3:
s3_download(
    bucket='my_bucket',
    s3_filepath='tmp/my_file.csv',
    local_filepath='../data/my_file.csv')
Downloading with a profile name:
s3_download(
    bucket='my_bucket',
    profile_name='my-profile-name',
    s3_filepath='tmp/my_file.csv',
    local_filepath='../data/my_file.csv')
Downloading a list of files from S3 (will not download contents of subdirectories):
s3_download(
    bucket='my_bucket',
    s3_filepath=['tmp/my_file1.csv', 'tmp/my_file2.csv', 'img.png'],
    local_filepath=['../data/my_file1.csv', '../data/my_file2.csv', '../img.png'])
Downloading files matching a pattern from S3 (will not download contents of subdirectories):
s3_download(
    bucket='my_bucket',
    s3_filepath='tmp/*.csv',
    local_filepath='../data/')
Downloading all files in a directory from S3 (will not download contents of subdirectories):
s3_download(
    bucket='my_bucket',
    s3_filepath='tmp/*',
    local_filepath='../data/')
Uploading a single file to S3:
s3_upload(
    bucket='my_bucket',
    local_filepath='../data/my_file.csv',
    s3_filepath='tmp/my_file.csv')
Uploading with a profile name:
s3_upload(
    bucket='my_bucket',
    profile_name='my-profile-name',
    local_filepath='../data/my_file.csv',
    s3_filepath='tmp/my_file.csv')
Uploading a list of files to S3 (will not upload contents of subdirectories):
s3_upload(
    bucket='my_bucket',
    local_filepath=['../data/my_file1.csv', '../data/my_file2.csv', '../img.png'],
    s3_filepath=['tmp/my_file1.csv', 'tmp/my_file2.csv', 'img.png'])
Uploading files matching a pattern to S3 (will not upload contents of subdirectories):
s3_upload(
    bucket='my_bucket',
    local_filepath='../data/*.csv',
    s3_filepath='tmp/')
Uploading all files in a directory to S3 (will not upload contents of subdirectories):
s3_upload(
    bucket='my_bucket',
    local_filepath='../data/*',
    s3_filepath='tmp/')
Deleting a single file in S3:
resp = s3_delete(bucket='my_bucket', s3_filepath='tmp/my_file.csv')
Deleting with a profile name:
resp = s3_delete(
    bucket='my_bucket',
    profile_name='my-profile-name',
    s3_filepath='tmp/my_file.csv')
Deleting a list of files in S3:
resp = s3_delete(
    bucket='my_bucket',
    s3_filepath=['tmp/my_file1.csv', 'tmp/my_file2.csv', 'img.png'])
Deleting files matching a pattern in S3:
resp = s3_delete(bucket='my_bucket', s3_filepath='tmp/*.csv')
Deleting all files in a directory in S3:
resp = s3_delete(bucket='my_bucket', s3_filepath='tmp/*')
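The delete examples capture a response in resp. If that response mirrors the underlying boto3 delete_objects reply (an assumption, since the return type is not documented here), the deleted keys could be inspected like this:
# Illustrative only: assumes a boto3-style delete_objects response dict.
for obj in resp.get('Deleted', []):
    print(obj['Key'])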
Creating a bucket object that can be manipulated directly by experienced users:
bucket = s3_get_bucket(
    bucket='my_bucket',
    profile_name='default',
    region_name='us-west-2')
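If the returned bucket is a boto3 Bucket resource (an assumption based on the function name), it can be used directly, for example to list objects under a prefix:
# Illustrative boto3 Bucket resource usage; adjust the prefix to your layout.
for obj in bucket.objects.filter(Prefix='tmp/'):
    print(obj.key, obj.size)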
Importing boto3 functions:
from nordata import boto_get_creds, boto_create_session
Retrieving boto3 credentials as a string for use in COPY and UNLOAD SQL statements:
creds = boto_get_creds(
    profile_name='default',
    region_name='us-west-2',
    session=None)
Creating a boto3 session object that can be manipulated directly by experienced users:
session = boto_create_session(profile_name='default', region_name='us-west-2')
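As an illustrative sketch (standard boto3, not Nordata-specific), the session can then create clients or resources directly:
# Standard boto3 usage of a session object.
s3_client = session.client('s3')
response = s3_client.list_objects_v2(Bucket='my_bucket', Prefix='tmp/')
for obj in response.get('Contents', []):
    print(obj['Key'])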
Transferring data from Redshift to S3 using an UNLOAD statement (see Redshift UNLOAD documentation for more information):
from nordata import boto_get_creds, redshift_execute_sql
creds = boto_get_creds(
    profile_name='default',
    region_name='us-west-2',
    session=None)
sql = f'''
    unload (
        'select
            col1
            ,col2
        from
            my_schema.my_table'
    )
    to
        's3://mybucket/unload/my_table/'
    credentials
        '{creds}'
    parallel off header gzip allowoverwrite;
'''
redshift_execute_sql(
    sql=sql,
    env_var='REDSHIFT_CREDS',
    return_data=False,
    return_dict=False)
Transferring data from S3 to Redshift using a COPY statement (see Redshift COPY documentation for more information):
from nordata import boto_get_creds, redshift_execute_sql
creds = boto_get_creds(
    profile_name='default',
    region_name='us-west-2',
    session=None)
sql = f'''
    copy
        my_schema.my_table
    from
        's3://mybucket/unload/my_table/'
    credentials
        '{creds}'
    ignoreheader 1 gzip;
'''
redshift_execute_sql(
    sql=sql,
    env_var='REDSHIFT_CREDS',
    return_data=False,
    return_dict=False)
For those interested in contributing to Nordata or forking and editing the project, pytest is the testing framework used. To run the tests, create a virtual environment, install the contents of dev-requirements.txt, and run the following command from the root directory of the project. The testing scripts can be found in the test/ directory.
$ pytest