This document explains how to set up a regular export of captured SSH sessions for your organization using the
sdm audit ssh CLI command. Instructions are included for local export and for exporting to either AWS S3 cloud storage or Google Cloud Platform (GCP) cloud storage. For more command information, see the CLI Command Reference.
If you need help exporting your captured SSH sessions to a different cloud storage provider, please contact strongDM Support.
Create a new Linux system user with restricted permissions to run the audit. In this example, we use sdm. Download and install the Linux SDM client on that host. You do not need to log in to the SDM client; the admin token serves as authentication.
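For reference, a restricted system user can be created as follows (a minimal sketch; useradd options and the nologin shell path vary by distribution):
sudo useradd --system --create-home --shell /usr/sbin/nologin sdm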
Create an Admin Token
To create an admin token, sign in to the strongDM Admin UI and go to Audit > API & Admin Tokens. From there, create an admin token with only the rights you require, which in this case is the Audit > SSH Captures permission.
After you click Create, a dialog displays the admin token. Copy the token and save it for later use in /etc/sdm-admin.token.
This file must be owned by the sdm user created earlier:
chown sdm:sdm /etc/sdm-admin.token
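If you create the file as root, one way to populate and protect it is shown below (a sketch; storing the token as an export line so that scripts can source it is an assumption, not a format required by the CLI):
echo 'export SDM_ADMIN_TOKEN=<insert admin token here>' | sudo tee /etc/sdm-admin.token > /dev/null
sudo chmod 600 /etc/sdm-admin.token   # assumption: the token only needs to be readable by the sdm user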
For more details on creating admin tokens, see Create Admin Tokens.
Export to a JSON File
Set up a script to run a periodic SSH export. In the following example script, captured SSH sessions are written to a JSON file every five minutes.
#!/bin/bash
export SDM_ADMIN_TOKEN=<insert admin token here>
START=$(date -d "5 minutes ago" '+%Y-%m-%dT%H:%M:00') # start of audit slice, defaulting to 5 minutes ago
FN=$(date '+%Y%m%d%H%M') # timestamp string to append to the output filename
END=$(date '+%Y-%m-%dT%H:%M:00') # end of audit slice, defaulting to now, at the top of the minute
TARGET=/var/log/sdm # location where JSON files will be written
/opt/strongdm/bin/sdm audit ssh --from "$START" --to "$END" -j > "$TARGET/ssh.$FN.json"
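Before the first run, make sure the output directory exists and the script is executable (paths match the example above; adjust them to your environment):
sudo mkdir -p /var/log/sdm && sudo chown sdm:sdm /var/log/sdm
chmod +x /path/to/script.sh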
Add a crontab entry
Although most Linux systems provide directories for scripts that run daily, weekly, and so on, this script is configured to run every five minutes, so place it directly in the crontab for a user or for the system.
Add this line to the crontab of your choice, modifying the interval to match what you set in the script:
*/5 * * * * /path/to/script.sh
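For example, to edit the crontab of the sdm user created earlier:
sudo crontab -u sdm -e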
Export to Cloud Storage
If you configured logging to a cloud environment, use the following methods to extract SSH captures before or after log export.
SSH session extraction prior to export
Set up and run a periodic export in order to extract SSH sessions prior to shipping the logs to your cloud storage. The SSH captures are compressed and exported every hour.
This method has known limitations: if SSH sessions span log copy/delivery intervals, there may be duplicated or incomplete SSH session recordings.
#!/bin/bash
# day, hour, minute timestamp
TIMESTAMP=`date +'%Y%m%d%H%M'`
# to prevent overlapping records, cover 61 minutes ago to 1 minute ago
FROMTIME=`date --date="61 minutes ago" +'%Y-%m-%d %H:%M:%S'`
TOTIME=`date --date="1 minute ago" +'%Y-%m-%d %H:%M:%S'`
SSHDIR=/path/to/save/ssh/sessions
TEMPDIR=/tmp
# this token needs only the Audit > SSH Captures permission
export SDM_ADMIN_TOKEN=<token>
CLOUD_LOG_NAME=strongdm-log-$TIMESTAMP.gz
CLOUD_SSH_NAME=strongdm-ssh-$TIMESTAMP.gz
# change the <scheme> to match your cloud provider (for example, s3 or gs); note there is no trailing slash at the end of the path
CLOUD_PATH=<scheme>://bucket/path/to/logs
export CLOUD_ACCESS_KEY_ID=<key>
export CLOUD_SECRET_ACCESS_KEY=<key>
# ensure your environment variables are in place and gzip the data into either S3 (aws s3) or GCP (gsutil); this example uses S3
journalctl -q -o cat --since "$FROMTIME" --until "$TOTIME" -u sdm-proxy > $TEMPDIR/sdmaudit.log
cd $SSHDIR; sdm ssh split $TEMPDIR/sdmaudit.log
gzip -c $TEMPDIR/sdmaudit.log | aws s3 cp - $CLOUD_PATH/$CLOUD_LOG_NAME
sdm audit ssh --from "$FROMTIME" --to "$TOTIME" | \
gzip | aws s3 cp - $CLOUD_PATH/$CLOUD_SSH_NAME
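If your logs ship to GCP rather than S3, the two upload commands can stream through gsutil instead (a sketch, assuming gsutil is installed and authenticated, with CLOUD_PATH set to a gs:// path):
gzip -c $TEMPDIR/sdmaudit.log | gsutil cp - $CLOUD_PATH/$CLOUD_LOG_NAME
sdm audit ssh --from "$FROMTIME" --to "$TOTIME" | gzip | gsutil cp - $CLOUD_PATH/$CLOUD_SSH_NAME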
Configure this script to run every hour in cron.
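For example, a crontab entry that runs the script at the top of every hour (the script path is a placeholder):
0 * * * * /path/to/hourly-export.sh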
SSH session extraction after export
To extract SSH sessions from exported logs, first determine the ID of the session you want to view. Do this by running sdm audit ssh with the relevant --from and --to flags, as in the following example.
$ sdm audit ssh --from "2018-03-20" --to "2018-03-22"
Time,Server ID,Server Name,User ID,User Name,Duration (ms),Capture ID,Hash
2018-03-21 20:51:16.098221 +0000 UTC,1334,prod-312-test,1016,Joe Admin,8572,4516ae2e-5d55-4559-a08c-8a0f514b579c,afb368770931a2aae89e6a8801b40eac44569d93
2018-03-21 20:53:01.4391 +0000 UTC,1334,prod-312-test,1016,Joe Admin,7515,fbd50897-1359-4b55-a103-68e4dafa494b,aa4aa0646469757df9f0b92fb5ca39a9c1bfd38d
2018-03-22 21:57:10.920914 +0000 UTC,1334,prod-312-test,1016,Joe Admin,10440,aa8dab30-685d-4180-a86b-bb1794d23756,aa4aa0646469757df9f0b92fb5ca39a9c1bfd38d
2018-03-22 23:16:40.170815 +0000 UTC,1334,prod-312-test,1016,Joe Admin,5433,7a8735cf-05c8-4840-89ae-42c6ad750136,883b03873229301e58fb6c9ccf1a3f584953d13c
2018-03-22 23:21:49.987304 +0000 UTC,1334,prod-312-test,1016,Joe Admin,4529,2324e5d7-398b-47cd-ace6-78b33f813e3f,883b03873229301e58fb6c9ccf1a3f584953d13c
Next, copy the logs from the relevant timeframe back down from your cloud storage. Note that an SSH session may span several logs, so pay attention to the duration of the session shown in the previous step.
Unzip the logs and combine them into a single file:
cat log1 log2 log3 > combined-logs
Run sdm ssh split <logfile> to extract all SSH sessions from this log. The extracted files are named after the session ID. At this point, you can view the relevant session file (in JSON format).
$ sdm ssh split combined-logs
5783cb5e-e1c8-44ba-b8ee-4bc4d8c28c7d.ssh
9d880e13-f608-4fe0-b1e7-deeb35bb9f2c.ssh
If you have questions about this process or run into trouble, please contact strongDM Support.