Native AWS backend for the audit log #1755

Closed

kontsevoy opened this issue Mar 8, 2018 · 11 comments

@kontsevoy
Contributor

Proposal

  • Let's use DynamoDB for the audit log
  • S3 for session replay

Config

teleport:
  storage:
    type: dynamodb
    region: eu-west-1
    table_name: tablename
    audit_table_name: tablename.audit # if missing - use filesystem
    audit_sessions_uri: s3://<bucket>  # if missing - use filesystem
    access_key: <key>
    secret_key: <key>
klizhentas added this to the 2.5.1 "Portland" milestone Mar 8, 2018
@klizhentas
Contributor

@kontsevoy it turned out to be a big changeset, I'm thinking of moving to 2.6.0 instead

klizhentas modified the milestones: 2.5.1 "Portland", 2.6.0 "Austin" Mar 14, 2018
@klizhentas
Contributor

klizhentas commented Mar 14, 2018

Here are working combinations for documentation:

Upload from nodes and proxies directly to NFS

# Single-node Teleport cluster called "one" (runs all 3 roles: proxy, auth and node)
teleport:
  storage:
      audit_sessions_uri: file:///tmp

Upload records to S3 and events to DynamoDB

# Single-node Teleport cluster called "one" (runs all 3 roles: proxy, auth and node)
teleport:
  storage:
      type: dynamodb
      table_name: test_grv8
      region: us-west-1
      audit_table_name: test_grv8_events
      audit_sessions_uri: s3://testgrv8records

NOT SUPPORTED

This configuration won't be accepted, as we require an external uploader when using external DynamoDB event storage (it just simplifies our internal design):

# Single-node Teleport cluster called "one" (runs all 3 roles: proxy, auth and node)
teleport:
  storage:
      type: dynamodb
      table_name: test_grv8
      region: us-west-1
      audit_table_name: test_grv8_events
      # missing audit_sessions_uri

@kontsevoy
Contributor Author

@klizhentas Question: can I do this?

teleport:
  storage:
      audit_sessions_uri: s3://s3.gravitational.io/ssh-sessions

i.e., the filesystem is used for the audit log and secrets, and S3 is only used for the sessions.

@klizhentas
Contributor

yes

@klizhentas
Contributor

Also, in DynamoDB, events are stored with a default TTL of 1 year.
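
For reference, a minimal aws-sdk-go sketch of how an event item could be written with such a TTL. The table name reuses the example above, but the attribute names (SessionID, EventType, Expires) are hypothetical, not Teleport's actual schema, and TTL would also have to be enabled on the table against the chosen attribute:

package main

import (
	"log"
	"strconv"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-1")))
	svc := dynamodb.New(sess)

	// DynamoDB TTL expects a Unix timestamp (in seconds) stored in a
	// Number attribute; a 1-year TTL means "now + 1 year".
	expires := time.Now().AddDate(1, 0, 0).Unix()

	_, err := svc.PutItem(&dynamodb.PutItemInput{
		TableName: aws.String("test_grv8_events"),
		Item: map[string]*dynamodb.AttributeValue{
			"SessionID": {S: aws.String("hypothetical-session-id")}, // hypothetical key schema
			"EventType": {S: aws.String("session.start")},
			"Expires":   {N: aws.String(strconv.FormatInt(expires, 10))},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}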

klizhentas added a commit that referenced this issue Mar 15, 2018
Updates #1755

Design
------

This commit adds support for pluggable events and
sessions recordings and adds several plugins.

If external session recording storage
is used, nodes or proxies (depending on configuration)
store the session recordings locally and
then upload them in the background.

Non-print session events are always sent to the
remote auth server as usual.

If remote event storage is used, auth
servers download recordings from it during playback.

DynamoDB event backend
----------------------

A transient DynamoDB backend is added for event
storage. Events are stored with a default TTL of 1 year.

External Lambda functions should be used
to forward events from DynamoDB.

The audit_table_name parameter in the storage section
turns on the DynamoDB backend.

The table will be auto-created.
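
As a rough illustration of the Lambda forwarding pattern mentioned above (not code from this changeset): a function subscribed to the table's DynamoDB Stream, using the aws-lambda-go runtime, with a placeholder print standing in for a real downstream sink.

package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler receives batches of records from the table's DynamoDB Stream;
// forwarding to a real sink (e.g. a log pipeline) is left as a placeholder.
func handler(ctx context.Context, e events.DynamoDBEvent) error {
	for _, record := range e.Records {
		fmt.Printf("forwarding %s event: %v\n", record.EventName, record.Change.NewImage)
	}
	return nil
}

func main() {
	lambda.Start(handler)
}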

S3 sessions backend
-------------------

If audit_sessions_uri is set to s3://bucket-name,
the node or proxy (depending on recording mode)
will start uploading the recorded sessions
to the bucket.

If the bucket does not exist, Teleport will
attempt to create it with versioning and encryption
turned on by default.

Teleport will turn on bucket-side encryption for the tarballs
using an aws:kms key.
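
For context, a minimal aws-sdk-go sketch of an upload requesting that same bucket-side aws:kms encryption; the bucket and object key reuse names from the examples above and are illustrative only:

package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-1")))
	svc := s3.New(sess)

	f, err := os.Open("session.tar") // hypothetical recording tarball
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Ask S3 to encrypt the object at rest with the account's KMS key.
	_, err = svc.PutObject(&s3.PutObjectInput{
		Bucket:               aws.String("testgrv8records"),
		Key:                  aws.String("records/session.tar"),
		Body:                 f,
		ServerSideEncryption: aws.String(s3.ServerSideEncryptionAwsKms),
	})
	if err != nil {
		log.Fatal(err)
	}
}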

File sessions backend
---------------------

If audit_sessions_uri is set to file:///folder,
Teleport will start writing tarballs to this folder instead
of sending records to the auth server.

This is helpful for plugin writers, who can use FUSE-
or NFS-mounted storage to handle the data.

Working dynamic configuration.
@klizhentas
Contributor

Turning this into a documentation ticket, as the feature has landed in 2.6.0-alpha.0.

@sds

sds commented May 31, 2018

Hey @klizhentas, really excited about this feature!

Does this allow you to encrypt the recorded sessions before sending them to S3? I'm not referring to built-in S3 encryption; we'd want to encrypt locally before the data is sent to AWS servers.

@klizhentas
Contributor

We don't support client-side encryption. What kind of encryption do you have in mind?

@sds

sds commented May 31, 2018

The basic goal is to not have AWS control the keys used to encrypt/decrypt the sessions. So I'm specifically looking for a solution supporting Option 2.
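
To make the request concrete (this is not a supported Teleport feature, just a sketch of the idea): encrypt the recording locally with AES-256-GCM under a key that never leaves your environment, and upload only the ciphertext.

package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"io"
	"log"
	"os"
)

// encryptLocally seals plaintext with AES-256-GCM; only the resulting
// ciphertext would ever be uploaded to S3, so AWS never sees the key.
func encryptLocally(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // 32-byte key -> AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the key holder can decrypt later.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32) // in practice, load from a locally managed keystore
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		log.Fatal(err)
	}
	recording, err := os.ReadFile("session.tar") // hypothetical tarball
	if err != nil {
		log.Fatal(err)
	}
	sealed, err := encryptLocally(key, recording)
	if err != nil {
		log.Fatal(err)
	}
	_ = sealed // upload these bytes instead of the plaintext tarball
}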

@kontsevoy
Contributor Author

@klizhentas I propose we close this one (because it's implemented)

@klizhentas
Contributor

@kontsevoy this is a documentation issue; if you have done everything wrt documentation, sure, close it!
