
strongDM Client Container

There are two primary methods for adding the strongDM client container to your containerized deployment: run the SDM container as a service so that it is always available, or deploy the SDM container on an as-needed basis.

Authentication

A service account token needs to be added to the container as an environment variable. This token acts as your container's login credentials and allows you to restrict access at any time from the Admin UI. If you have not already set up a service token, please use the instructions at Service Accounts.
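For example, the token can be exported into the shell environment and then passed to the container with Docker's -e flag, as the examples below do. The token value here is a placeholder:

$ export SERVICE_TOKEN="paste-your-service-account-token-here"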

For the service account to work effectively, Port Overrides and Auto-Connect should both be enabled for your organization. Enabling these settings ensures a consistent login procedure for your container during runtime. As these changes affect your entire organization, please review our documentation, and contact strongDM support before making any changes if you have any questions.


Native SDM container

The SDM client container is a lightweight Ubuntu 19.04-based Docker image with the SDM binary pre-installed. This image can be obtained from quay.io by running the following Docker command.

$ docker pull quay.io/sdmrepo/client

Persistent

For this example the persistent container maps a container port (13307) to the same port on the host machine.

$ docker run -d -e SDM_SERVICE_TOKEN=$SERVICE_TOKEN -p 13307:13307 quay.io/sdmrepo/client

Validate that the container connected successfully by running sdm status inside the container.

$ docker exec container-name sdm status
DATASOURCE NAME                          STATUS       PORT     TYPE
zd918-ssms                               connected    11521    mssql
strongDM-datasource1-sfo2-main_sdm_db    connected    13306    mysql
strongDM-datasource1-sfo2-world          connected    13307    mysql

SERVER                                   STATUS       PORT     TYPE
i-09284a37e194e4a9d                      connected    14645    ssh
i-094451c7ae299e46f                      connected    38982    ssh
strongDM-client1-sfo2                    connected    43264    ssh
strongDM-database1-sfo2                  connected    43577    ssh
strongDM-gateway2-sfo2                   connected    30572    ssh

By running sdm status we can see that the published port, in this case 13307, refers to the datasource strongDM-datasource1-sfo2-world.

The service token will automatically connect to all available datasources and servers. To avoid unnecessary connections, limit the service token's access.

At this point the container should be operational and ready to accept connections on the exposed port.
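As an optional sanity check before pointing a client at the container, you can confirm that the published port is listening on the host. This sketch assumes netcat (nc) is installed on the host machine:

$ nc -z 127.0.0.1 13307 && echo "port 13307 is accepting connections"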

DB connections

Use your normal DB client to connect to the host port that is mapped to the container. The following output is trimmed for readability.

$ docker ps
CONTAINER ID    IMAGE                     PORTS
551cc9c06734    quay.io/sdmrepo/client    127.0.0.1:13307->13307/tcp
$ docker exec 551cc9c06734 sdm status
DATASOURCE NAME                    STATUS       PORT     TYPE
strongDM-datasource1-sfo2-world    connected    13307    mysql
$ mysql -h 127.0.0.1 -P 13307

Check the strongDM connection matrix if you're unsure of the proper connection settings for your database client.

SSH connections

Similar to the DB connection, if the SSH connection port is exposed to the host machine, any SSH attempts to that port will be routed through the SDM binary in the container. The following output is trimmed for readability.

$ docker ps
CONTAINER ID    IMAGE                     PORTS
551cc9c06734    quay.io/sdmrepo/client    127.0.0.1:30572->30572/tcp
$ docker exec 551cc9c06734 sdm status
SERVER                    STATUS       PORT     TYPE
strongDM-gateway2-sfo2    connected    30572    ssh
$ ssh localhost -p 30572
[stronguser@strongdm-gateway2-sfo2 ~]$ # we are now connected via strongDM

Running as a service

To simplify the deployment process we recommend deploying the SDM container as a service. Below is a basic strongDM service file to be used as an example. This file can be added to your systemd folder structure, such as /lib/systemd/system/strongdm.service.

[Unit]
Description=StrongDM
Wants=network-online.target
After=network-online.target
Requires=docker.service
[Service]
User=username
Group=groupname
Type=simple
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull quay.io/sdmrepo/client:latest
ExecStart=/usr/bin/docker run \
  --name=%n \
  --rm \
  -p DATASOURCE_PORT:DATASOURCE_PORT \
  -e SDM_SERVICE_TOKEN=YOUR_SERVICE_TOKEN \
  quay.io/sdmrepo/client:latest
ExecStop=/usr/bin/docker kill %n
[Install]
WantedBy=multi-user.target

With the service file in place, the service can be managed with the normal systemctl commands, or the equivalent in your distro.

$ sudo systemctl start strongdm

To enable the service so that it is started automatically when the system boots up, use the enable command.

$ sudo systemctl enable strongdm
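To confirm that the service and the container inside it came up as expected, the usual systemd tooling applies. Because the example unit uses --name=%n, the container name resolves to the full unit name, strongdm.service:

$ sudo systemctl status strongdm
$ sudo journalctl -u strongdm --no-pager -n 50
$ docker exec strongdm.service sdm status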

Per-job deployment

The SDM container lifecycle can also be automated to run on demand. When taking this approach, be mindful of loading times, as they may vary depending on the environment. The following examples show how to add availability validations to a Bash script.

Starting the container

In the example below, the Docker binary is invoked to start the SDM client container. Running the SDM container starts the SDM binary automatically, but not instantly. Using the until command is one way to check that the SDM binary is available; if it is not, the script sleeps for 1 second and tries again until successful.

# Start strongDM client container
/usr/bin/docker run -d \
  --name=strongdm \
  --rm \
  -p 15432:15432 \
  -e SDM_SERVICE_TOKEN=service_account_token \
  quay.io/sdmrepo/client:latest
# Wait for sdm binary to be available
until docker exec -it strongdm sdm status &>/dev/null; do
  sleep 1
done

Waiting for datasources to connect

This same until logic can be added to check whether a datasource is ready. In the following example, the psql -l invocation lists the available databases, which confirms that the connection is ready without running any of your actual queries. (The datasource in this example is a PostgreSQL instance published on port 15432, matching the container started above.) Once it returns successfully, you can run normal database operations.

# Wait for datasource connection to be ready
until psql -h 127.0.0.1 -p 15432 -l &>/dev/null; do
  sleep 1
done
# Execute database operation
psql -h 127.0.0.1 -p 15432 << EOF >> /var/log/etl.log
SELECT first_name,
last_name
FROM users u
WHERE u.created_at > current_date - '1 day'::interval;
EOF

Waiting for the SSH server

Similarly, any SSH connection may have a slight delay between the SDM binary being ready and the connection status becoming available. (This example assumes the relevant SSH port, 43577 here, is also published to the host when the container is started.)

# Wait for server to be ready
until ssh localhost -p 43577 exit &>/dev/null; do
  sleep 1
done
# Execute ssh commands
ssh localhost -p 43577 << EOF
uptime
cat /etc/os-release
exit
EOF

This approach only works if the known_hosts file already contains an entry for this connection. You can add the entry by connecting manually once before running the script.
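Alternatively, the host key can be collected non-interactively with ssh-keyscan before the loop runs. This is a sketch that assumes the SDM container is already running and the port (43577 in this example) is published to the host:

# Add the host key for the forwarded SSH port to known_hosts
ssh-keyscan -p 43577 localhost >> ~/.ssh/known_hosts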

Terminating the strongDM container

To avoid overlapping containers, perform docker kill on the SDM container before ending the script.

# Terminate with name specified during creation
docker kill strongdm

Putting it all together

#!/usr/bin/env bash
# Start strongDM client container
/usr/bin/docker run -d \
  --name=strongdm \
  --rm \
  -p 15432:15432 \
  -e SDM_SERVICE_TOKEN=service_account_token \
  quay.io/sdmrepo/client:latest
# Wait for sdm binary to be available
until docker exec -it strongdm sdm status &>/dev/null; do
  sleep 1
done
# Wait for datasource connection to be ready
until psql -l -h 127.0.0.1 -p 15432 &>/dev/null; do
  sleep 1
done
# Execute database operation
psql -h 127.0.0.1 -p 15432 <<EOF >> /var/log/etl.log
SELECT first_name,
last_name
FROM users u
WHERE u.created_at > current_date - '1 day'::interval;
EOF
# Terminate container
docker kill strongdm
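If the job needs to run on a schedule, the assembled script can be invoked from cron like any other batch task. The path and schedule below are placeholders:

# Example crontab entry: run the ETL job every night at 02:00
0 2 * * * /usr/local/bin/strongdm-etl.sh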

Avoiding loops

To make your script a bit more robust, you can limit the number of connection attempts to prevent infinite loops. To do so, replace the relevant section in the script above with the following loop:

# Wait for datasource connection to be ready (give up after 60 attempts)
for i in {1..60}; do
  if psql -l -h 127.0.0.1 -p 15432 &>/dev/null; then
    break
  else
    sleep 1
  fi
done
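If you would rather fail the job outright when the datasource never becomes ready, one option (a sketch, not part of the original example) is to exit with a non-zero status once the attempts are exhausted:

# Wait up to 60 seconds for the datasource, then give up
ready=false
for i in {1..60}; do
  if psql -l -h 127.0.0.1 -p 15432 &>/dev/null; then
    ready=true
    break
  fi
  sleep 1
done
if [ "$ready" != "true" ]; then
  echo "datasource never became ready" >&2
  docker kill strongdm
  exit 1
fi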

If you have any questions about the steps listed above, or suggestions on how it can be improved, please reach out to support@strongdm.com.
