Logging Scenario - Send Local Logs to S3

Scenario: You want to save gateway/relay logs to an Amazon S3 bucket. This guide presents a simple method to send all gateway/relay logs to S3.

Logs stored on the gateway/relay do not include Admin UI activities; those are available through the sdm audit activities command instead. The script below therefore includes an additional step that runs this command, so the activity logs are exported at the same time.
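
You can also run this command manually to spot-check activity records; the script below uses the same --from and --to flags (dates here are illustrative):

    $ sdm audit activities --from "2018-03-20" --to "2018-03-22"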

Setting up the export

  1. Enable relay logging in the Admin UI under Settings / Log Encryption & Storage. Ensure logging is set to STDOUT.

  2. Create an admin token with only the Audit/Activities permission. Save this token to add to the script below.

  3. Generate an AWS access key ID and AWS secret access key from the AWS console.

  4. Ensure the gateway or relay has the aws-cli tools installed.
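
     For example, on an Ubuntu host (matching the /home/ubuntu path used below), one common way to install and verify the tools:

    $ sudo apt-get install awscli
    $ aws --version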

  5. Save the following script as s3export.sh. This script exports logs in 15-minute intervals; if you prefer to export more or less frequently, change the FROMTIME and TOTIME variables accordingly.

    #!/bin/bash
    # day, hour, minute timestamp
    TIMESTAMP=$(date +'%Y%m%d%H%M')
    # to prevent overlapping records, cover 16 minutes ago to 1 minute ago
    FROMTIME=$(date --date="16 minutes ago" +'%Y-%m-%d %H:%M:%S')
    TOTIME=$(date --date="1 minute ago" +'%Y-%m-%d %H:%M:%S')
    # this token needs only the Audit/Activities permission
    export SDM_ADMIN_TOKEN=[token]
    S3NAME=strongdm-log-$TIMESTAMP.gz
    S3ACTIVITIESNAME=strongdm-activities-$TIMESTAMP.gz
    S3PATH=s3://bucket/path/to/logs # no trailing slash
    # these AWS environment variables must be in place for aws s3 cp to authenticate
    export AWS_ACCESS_KEY_ID=[token]
    export AWS_SECRET_ACCESS_KEY=[token]
    # ship the journal entries for the sdm-proxy unit, compressed, to S3
    journalctl -q -o cat --since "$FROMTIME" --until "$TOTIME" -u sdm-proxy | \
      gzip | aws s3 cp - "$S3PATH/$S3NAME"
    # export Admin UI activities for the same window
    sdm audit activities --from "$FROMTIME" --to "$TOTIME" | \
      gzip | aws s3 cp - "$S3PATH/$S3ACTIVITIESNAME"
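
     Because cron runs the script directly, also make sure it is executable (the path shown matches the crontab entry in the next step):

    $ chmod +x /home/ubuntu/s3export.sh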
  6. Add the following line to /etc/crontab. If you changed the export interval above, change the cron interval here to match.

    0,15,30,45 * * * * root /home/ubuntu/s3export.sh
  7. Verify that files are being generated every 15 minutes in your S3 bucket.
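
     For example, using the placeholder bucket path from the script:

    $ aws s3 ls s3://bucket/path/to/logs/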

SSH session extraction

  1. To extract SSH sessions from these logs, first determine the ID of the session you want to view. The easiest way is to run sdm audit ssh with the relevant --from and --to flags.

    $ sdm audit ssh --from "2018-03-20" --to "2018-03-22"
    Time,Server ID,Server Name,User ID,User Name,Duration (ms),Capture ID,Hash
    2018-03-21 20:51:16.098221 +0000 UTC,1334,prod-312-test,1016,Joe Admin,8572,4516ae2e-5d55-4559-a08c-8a0f514b579c,afb368770931a2aae89e6a8801b40eac44569d93
    2018-03-21 20:53:01.4391 +0000 UTC,1334,prod-312-test,1016,Joe Admin,7515,fbd50897-1359-4b55-a103-68e4dafa494b,aa4aa0646469757df9f0b92fb5ca39a9c1bfd38d
    2018-03-22 21:57:10.920914 +0000 UTC,1334,prod-312-test,1016,Joe Admin,10440,aa8dab30-685d-4180-a86b-bb1794d23756,aa4aa0646469757df9f0b92fb5ca39a9c1bfd38d
    2018-03-22 23:16:40.170815 +0000 UTC,1334,prod-312-test,1016,Joe Admin,5433,7a8735cf-05c8-4840-89ae-42c6ad750136,883b03873229301e58fb6c9ccf1a3f584953d13c
    2018-03-22 23:21:49.987304 +0000 UTC,1334,prod-312-test,1016,Joe Admin,4529,2324e5d7-398b-47cd-ace6-78b33f813e3f,883b03873229301e58fb6c9ccf1a3f584953d13c
  2. Next, copy the logs for the relevant timeframe back down from S3. Note that a single SSH session may span several log files, so pay attention to the session duration shown in step 1.
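
     For example (the file name is illustrative; actual names follow the strongdm-log-$TIMESTAMP.gz pattern set in the script):

    $ aws s3 cp s3://bucket/path/to/logs/strongdm-log-201803212100.gz .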

  3. Unzip the logs and compile them into a single file.

    $ gunzip log1.gz log2.gz log3.gz
    $ cat log1 log2 log3 > combined-logs
  4. Run sdm ssh split <logfile> to extract all SSH sessions from the log. Each session is written to a file named after its capture ID, in typescript format, at which point you can view the relevant session file.

    $ sdm ssh split combined-logs
    5783cb5e-e1c8-44ba-b8ee-4bc4d8c28c7d.ssh
    9d880e13-f608-4fe0-b1e7-deeb35bb9f2c.ssh
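
Each .ssh file is a typescript-format terminal recording, so the raw session can be reviewed with standard tools, for example:

    $ less -R 9d880e13-f608-4fe0-b1e7-deeb35bb9f2c.ssh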

SSH session extraction prior to S3 delivery

An alternate method exists to extract SSH sessions prior to shipping the logs to S3.

This method has known limitations: if SSH sessions span log copy/delivery intervals, there may be duplicated or incomplete SSH session recordings.

This variant modifies the above script in the following manner:
  • Save extracted log files locally before sending to S3
  • Extract SSH sessions from saved files
  • Increase cron interval to decrease likelihood of incomplete SSH sessions
#!/bin/bash
# day, hour, minute timestamp
TIMESTAMP=$(date +'%Y%m%d%H%M')
# to prevent overlapping records, cover 61 minutes ago to 1 minute ago
FROMTIME=$(date --date="61 minutes ago" +'%Y-%m-%d %H:%M:%S')
TOTIME=$(date --date="1 minute ago" +'%Y-%m-%d %H:%M:%S')
SSHDIR=/path/to/save/ssh/sessions
TEMPDIR=/tmp
# this token needs only the Audit/Activities permission
export SDM_ADMIN_TOKEN=<token>
S3NAME=strongdm-log-$TIMESTAMP.gz
S3ACTIVITIESNAME=strongdm-activities-$TIMESTAMP.gz
S3PATH=s3://bucket/path/to/logs # no trailing slash
# these AWS environment variables must be in place for aws s3 cp to authenticate
export AWS_ACCESS_KEY_ID=<token>
export AWS_SECRET_ACCESS_KEY=<token>
# save the log extract locally so SSH sessions can be split out before upload
journalctl -q -o cat --since "$FROMTIME" --until "$TOTIME" -u sdm-proxy > "$TEMPDIR/sdmaudit.log"
cd "$SSHDIR" && sdm ssh split "$TEMPDIR/sdmaudit.log"
# gzip -c writes the compressed stream to stdout so it can be piped to S3
# (plain "gzip file" compresses in place and sends nothing down the pipe)
gzip -c "$TEMPDIR/sdmaudit.log" | aws s3 cp - "$S3PATH/$S3NAME"
sdm audit activities --from "$FROMTIME" --to "$TOTIME" | \
  gzip | aws s3 cp - "$S3PATH/$S3ACTIVITIESNAME"

Configure this script to run every hour in cron.
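
For example, assuming the script is saved at the same path used earlier, the hourly /etc/crontab entry would be:

    0 * * * * root /home/ubuntu/s3export.sh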
