Logging Scenario - Send Local Logs to S3
Scenario: you want to save gateway/relay logs to an S3 bucket. This guide presents a simple method for shipping all gateway/relay logs to S3 on a regular schedule.
As with all gateway/relay logs, the logs stored on the gateway/relay will not include Admin UI activities, which can be retrieved with the `sdm audit activities` command. The following script therefore includes an additional step that runs that command and exports those logs concurrently.
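To preview what that extra step will capture, you can run the same command by hand. A minimal sketch, assuming a 15-minute window and an admin token created as described in the setup steps below:

```bash
# token with only the Audit/Activities permission (created in step 2 below)
export SDM_ADMIN_TOKEN=<token>
# preview Admin UI activity records from the last 15 minutes
sdm audit activities --from "$(date --date='15 minutes ago' +'%Y-%m-%d %H:%M:%S')" \
                     --to "$(date +'%Y-%m-%d %H:%M:%S')"
```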
Setting up the export
1. Enable relay logging in the Admin UI under Settings / Log Encryption & Storage. Ensure logging is set to STDOUT.
2. Create an admin token with only the Audit/Activities permission. Save this token to add to the script below.
3. Generate an AWS access key and AWS secret access key from the AWS console. (A sketch of a minimally scoped policy for this key appears after this list.)
4. Ensure the gateway or relay has the `aws-cli` tools installed.
5. Save the following script as `s3export.sh`. This script exports in 15-minute intervals; if you prefer to export more or less frequently, change the FROMTIME and TOTIME variables.

    ```bash
    #!/bin/bash
    # day, hour, minute timestamp
    TIMESTAMP=`date +'%Y%m%d%H%M'`
    # to prevent overlapping records, do 16 min ago to 1 min ago
    FROMTIME=`date --date="16 minutes ago" +'%Y-%m-%d %H:%M:%S'`
    TOTIME=`date --date="1 minute ago" +'%Y-%m-%d %H:%M:%S'`
    # this token needs only audit/activities permission
    export SDM_ADMIN_TOKEN=[token]
    S3NAME=strongdm-log-$TIMESTAMP.gz
    S3ACTIVITIESNAME=strongdm-activities-$TIMESTAMP.gz
    S3PATH=s3://bucket/path/to/logs # no trailing slash
    export AWS_ACCESS_KEY_ID=[token]
    export AWS_SECRET_ACCESS_KEY=[token]
    # ensure AWS environment variables are in place
    # stream the gateway/relay logs to S3
    journalctl -q -o cat --since "$FROMTIME" --until "$TOTIME" -u sdm-proxy | \
        gzip | aws s3 cp - $S3PATH/$S3NAME
    # stream the Admin UI activity logs to S3
    sdm audit activities --from "$FROMTIME" --to "$TOTIME" | \
        gzip | aws s3 cp - $S3PATH/$S3ACTIVITIESNAME
    ```
6. Add the following line to `/etc/crontab`. If you changed the export interval above, change the cron interval here to match.

    ```
    0,15,30,45 * * * * root /home/ubuntu/s3export.sh
    ```

7. Verify that files are being generated every 15 minutes in your S3 bucket; a quick way to check is shown after this list.
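The AWS key from step 3 only needs to write objects under the log path. A minimal scoped-policy sketch, assuming the placeholder bucket/path from the script and a dedicated IAM user named `strongdm-log-writer` (both names are assumptions for illustration):

```bash
# write-only policy for the log prefix; bucket/path and user name are placeholders
cat > strongdm-log-writer-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket/path/to/logs/*"
    }
  ]
}
EOF
aws iam put-user-policy \
  --user-name strongdm-log-writer \
  --policy-name strongdm-log-writer \
  --policy-document file://strongdm-log-writer-policy.json
```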
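To verify step 7 end to end, run the script once by hand and list the bucket path. A sketch, assuming the script location from the crontab line above:

```bash
chmod +x /home/ubuntu/s3export.sh
sudo /home/ubuntu/s3export.sh
# both a strongdm-log-* and a strongdm-activities-* object should appear
aws s3 ls s3://bucket/path/to/logs/
```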
SSH session extraction
To extract SSH sessions from these logs:

1. Determine the ID of the session you want to view. This is most easily done by running `sdm audit ssh` with the relevant `--from` and `--to` flags.

    ```
    $ sdm audit ssh --from "2018-03-20" --to "2018-03-22"
    Time,Server ID,Server Name,User ID,User Name,Duration (ms),Capture ID,Hash
    2018-03-21 20:51:16.098221 +0000 UTC,1334,prod-312-test,1016,Joe Admin,8572,4516ae2e-5d55-4559-a08c-8a0f514b579c,afb368770931a2aae89e6a8801b40eac44569d93
    2018-03-21 20:53:01.4391 +0000 UTC,1334,prod-312-test,1016,Joe Admin,7515,fbd50897-1359-4b55-a103-68e4dafa494b,aa4aa0646469757df9f0b92fb5ca39a9c1bfd38d
    2018-03-22 21:57:10.920914 +0000 UTC,1334,prod-312-test,1016,Joe Admin,10440,aa8dab30-685d-4180-a86b-bb1794d23756,aa4aa0646469757df9f0b92fb5ca39a9c1bfd38d
    2018-03-22 23:16:40.170815 +0000 UTC,1334,prod-312-test,1016,Joe Admin,5433,7a8735cf-05c8-4840-89ae-42c6ad750136,883b03873229301e58fb6c9ccf1a3f584953d13c
    2018-03-22 23:21:49.987304 +0000 UTC,1334,prod-312-test,1016,Joe Admin,4529,2324e5d7-398b-47cd-ace6-78b33f813e3f,883b03873229301e58fb6c9ccf1a3f584953d13c
    ```

2. Copy the logs from the relevant timeframe back down from S3 (a sketch of this with `aws s3 cp` follows this list). Please note that an SSH session may span several logs, so pay attention to the duration of the session as revealed in step 1.
3. Unzip the logs and compile them into a single file.

    ```
    $ cat log1 log2 log3 > combined-logs
    ```

4. Run `sdm ssh split <logfile>` to extract all SSH sessions from this log. The extracted files are named after the session ID, at which point you can view the relevant session file (in typescript format).

    ```
    $ sdm ssh split combined-logs
    5783cb5e-e1c8-44ba-b8ee-4bc4d8c28c7d.ssh
    9d880e13-f608-4fe0-b1e7-deeb35bb9f2c.ssh
    ```
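Steps 2 and 3 can look like the following. A sketch, assuming the S3PATH placeholder from the export script and the 2018-03-21 sessions from the example above:

```bash
# pull down all relay logs exported on 2018-03-21
aws s3 cp s3://bucket/path/to/logs/ . --recursive \
    --exclude "*" --include "strongdm-log-20180321*"
# unzip the logs and compile them into a single file
gunzip strongdm-log-20180321*.gz
cat strongdm-log-20180321* > combined-logs
```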
SSH session extraction prior to S3 delivery
An alternate method is to extract SSH sessions before shipping the logs to S3. This method has known limitations: if SSH sessions span log copy/delivery intervals, the recordings may be duplicated or incomplete. The approach is as follows:
- Save extracted log files locally before sending to S3
- Extract SSH sessions from saved files
- Increase cron interval to decrease likelihood of incomplete SSH sessions
The following variant of the export script implements this:

```bash
#!/bin/bash
# day, hour, minute timestamp
TIMESTAMP=`date +'%Y%m%d%H%M'`
# to prevent overlapping records, do 61 min ago to 1 min ago
FROMTIME=`date --date="61 minutes ago" +'%Y-%m-%d %H:%M:%S'`
TOTIME=`date --date="1 minute ago" +'%Y-%m-%d %H:%M:%S'`
SSHDIR=/path/to/save/ssh/sessions
TEMPDIR=/tmp
# this token needs only audit/activities permission
export SDM_ADMIN_TOKEN=<token>
S3NAME=strongdm-log-$TIMESTAMP.gz
S3ACTIVITIESNAME=strongdm-activities-$TIMESTAMP.gz
S3PATH=s3://bucket/path/to/logs # no trailing slash
export AWS_ACCESS_KEY_ID=<token>
export AWS_SECRET_ACCESS_KEY=<token>
# ensure AWS environment variables are in place
# save the gateway/relay logs locally first
journalctl -q -o cat --since "$FROMTIME" --until "$TOTIME" -u sdm-proxy > $TEMPDIR/sdmaudit.log
# extract SSH sessions into SSHDIR before shipping
cd $SSHDIR; sdm ssh split $TEMPDIR/sdmaudit.log
# -c writes the archive to stdout so it can stream to S3
# (plain gzip would compress the file in place and emit nothing)
gzip -c $TEMPDIR/sdmaudit.log | aws s3 cp - $S3PATH/$S3NAME
sdm audit activities --from "$FROMTIME" --to "$TOTIME" | \
    gzip | aws s3 cp - $S3PATH/$S3ACTIVITIESNAME
```
Configure this script to run every hour in cron.
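For example, assuming the variant is saved as `/home/ubuntu/s3export-hourly.sh` (a hypothetical name; pick whatever matches your setup), the `/etc/crontab` entry to run it at the top of every hour, matching the 61-minute lookback, would be:

```
0 * * * * root /home/ubuntu/s3export-hourly.sh
```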