Native AWS backend for the audit log #1755
@kontsevoy it turned out to be a big changeset; I'm thinking of moving it to 2.6.0 instead.
Here are the working combinations for documentation.

Upload from nodes and proxies to NFS directly:

```yaml
# Single-node Teleport cluster called "one" (runs all 3 roles: proxy, auth and node)
teleport:
  storage:
    audit_sessions_uri: file:///tmp
```

Upload records to S3 and events to DynamoDB:

```yaml
# Single-node Teleport cluster called "one" (runs all 3 roles: proxy, auth and node)
teleport:
  storage:
    type: dynamodb
    table_name: test_grv8
    region: us-west-1
    audit_table_name: test_grv8_events
    audit_sessions_uri: s3://testgrv8records
```

NOT SUPPORTED: this configuration won't be accepted, because we require an external uploader when using external DynamoDB event storage (this just simplifies our internal design):

```yaml
# Single-node Teleport cluster called "one" (runs all 3 roles: proxy, auth and node)
teleport:
  storage:
    type: dynamodb
    table_name: test_grv8
    region: us-west-1
    audit_table_name: test_grv8_events
    # missing audit_sessions_uri
```
@klizhentas Question: can I do this? I.e., the filesystem is used for the audit log and secrets, and S3 is only used for the sessions.
Yes.
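For reference, a minimal sketch of that split, pieced together from the working combinations above (the bucket name is illustrative, and omitting `audit_table_name` so that events stay in the local audit log is an assumption, not something stated in the thread):

```yaml
# Sketch: keep cluster state and audit events on the local filesystem,
# send only recorded sessions to S3 (bucket name is made up for the example)
teleport:
  storage:
    # no audit_table_name here, so the local file-based audit log is used for events
    audit_sessions_uri: s3://example-session-recordings
```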
Also, in DynamoDB, events are stored with a default TTL of 1 year.
Updates #1755

Design
------
This commit adds support for pluggable backends for events and session recordings, and adds several plugins.

If external session recording storage is used, nodes or proxies (depending on configuration) store the session recordings locally and then upload them in the background. Non-print session events are always sent to the remote auth server as usual. If remote event storage is used, auth servers download recordings from it during playback.

DynamoDB event backend
----------------------
A transient DynamoDB backend is added for event storage. Events are stored with a default TTL of 1 year. External Lambda functions should be used to forward events out of DynamoDB. The audit_table_name parameter in the storage section turns on the DynamoDB backend. The table will be created automatically.

S3 sessions backend
-------------------
If audit_sessions_uri is set to s3://bucket-name, the node or proxy (depending on the recording mode) will start uploading the recorded sessions to that bucket. If the bucket does not exist, Teleport will attempt to create it with versioning and encryption turned on by default. Teleport will turn on bucket-side encryption for the tarballs using an aws:kms key.

File sessions backend
---------------------
If audit_sessions_uri is set to file:///folder, Teleport will start writing tarballs to this folder instead of sending the recordings to the auth server. This is helpful for plugin writers, who can use FUSE- or NFS-mounted storage to handle the data.

Working dynamic configuration.
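To make the file sessions backend concrete, here is a sketch of a config that points session uploads at an NFS-mounted folder, the use case the commit message mentions for plugin writers; the mount path is hypothetical:

```yaml
# Sketch: assumes an NFS share is already mounted at /mnt/nfs/sessions (hypothetical path)
teleport:
  storage:
    # session tarballs are written here instead of being sent to the auth server
    audit_sessions_uri: file:///mnt/nfs/sessions
```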
Turning this into a documentation ticket, as the feature has landed in 2.6.0-alpha.0.
Hey @klizhentas, really excited about this feature! Does this allow you to encrypt the recorded sessions before sending them to S3? I'm not referring to built-in S3 encryption; I'd want to encrypt locally before the data is sent to AWS servers.
We don't support client-side encryption. What kind of encryption do you have in mind?
The basic goal is to not have AWS control the keys used to encrypt/decrypt the sessions. So I'm specifically looking for a solution supporting Option 2.
@klizhentas I propose we close this one (because it's implemented).
@kontsevoy this is a documentation issue; if you have done everything w.r.t. the documentation, sure, close it!