Advanced Deployment Considerations

The CloudFormation template can optionally be customized beyond the default settings.

The 5 fields discussed in Steps to Deploy are the minimum fields that must be set to deploy the product successfully. For many deployments, the default settings will work well. That said, the other options in the CloudFormation Template may be worth configuring for your environment.

For details on configuring additional settings within your CloudFormation Template, review the sections below on this page. Feel free to SKIP this article if you just want to start with the defaults. However, please consider two exceptions:

  1. By default, the stack will deploy without the use of a Load Balancer. There are times when this will not work for your use case. Two such scenarios are: you want to use your own DNS and SSL certificate, or you want to deploy in a private subnet behind a NAT. These are both great reasons to leverage a load balancer.

    If you would like to include a load balancer at the time of deployment you can do so in your CloudFormation Template. You can also add a load balancer to an existing deployment without one. And you can remove the load balancer after the fact as well. Simply run a Stack Update and add or remove the load balancer. This must be done manually through the CloudFormation Console. Specific steps to setting up a load balancer in your CloudFormation Template can be viewed below.

  2. Resources can only be renamed to match your particular naming scheme at the time of deployment. Changing these values later will result in errors. The default values are fine to leave, but if you need to rename values such as your quarantine bucket, do so now, before you've deployed.

Console and Agent Sizing - vCPU and Memory

The default values provided for both the Console and the Agents will meet the needs of the vast majority of customers. If you do make changes initially: you could increase the Console settings if you plan to run large sets of Scheduled Scanning or regular assessment reports, or you could reduce the Agents to 1 vCPU and 3 GB memory for similar performance at about half the cost. All of our performance testing has been done with 2 vCPU / 4 GB, but we saw so little impact on CPU overhead and memory that reducing both of those should work. Additional testing with different configurations will come in the near future.

If you do decide to make changes for either task, you must specify a valid vCPU:memory ratio: the memory must be in the range of 2x to 8x the vCPU value.

  • vCPU = 0.5: 1 GB ≤ memory ≤ 4 GB

  • vCPU = 1: 3 GB ≤ memory ≤ 8 GB

  • vCPU = 2: 4 GB ≤ memory ≤ 16 GB

  • vCPU = 3: 6 GB ≤ memory ≤ 24 GB

  • vCPU = 4: 8 GB ≤ memory ≤ 32 GB

The minimum memory requirement for agents is now 3 GB. We no longer offer a 2 GB memory choice because we saw unstable scanning results at that value.
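The ratio rule above can be checked before you submit the template. Below is a minimal sketch (a hypothetical helper, not part of the product) that takes a vCPU count and a memory value in GB:

```shell
# Hypothetical helper: exits 0 when the vCPU:memory combination satisfies
# the 2x-8x rule described above, with the 3 GB agent floor applied from
# 1 vCPU and up.
check_ratio() {
  awk -v v="$1" -v m="$2" 'BEGIN {
    min = 2 * v; max = 8 * v
    if (v >= 1 && min < 3) min = 3   # agents: 3 GB minimum
    exit !(m >= min && m <= max)
  }'
}

check_ratio 1 3  && echo "1 vCPU / 3 GB: OK"
check_ratio 2 4  && echo "2 vCPU / 4 GB: OK"
check_ratio 1 16 || echo "1 vCPU / 16 GB: invalid"
```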

Agent vCPU and memory recommendations have changed. We recommend running the scanning agents with 1 vCPU and 3 GB memory. The defaults have been 2 vCPU and 4 GB memory, but we are finding there is no advantage to those settings at the moment. Switching to 1 and 3 is a worthwhile cost savings with no impact on performance. In the Performance Throughput Table, the ClamAV numbers were produced with 2 vCPU and 4 GB memory, while the Sophos numbers were run with 1 vCPU and 3 GB memory. In subsequent testing, we saw no noticeable reduction in performance with ClamAV either.

If the memory specified for the Console or Agent is not within the proper range, stack creation will fail with a validation error.

These values can be changed after the fact. Go to the Console Settings page to modify the console specs. Go to the Agent Settings page to modify the agent specs.

Access to KMS Encryption Keys

The default value is Yes, which is our preference because it simplifies scanning for you. To scan objects in buckets with a KMS key assigned, the Scanning Agent Role requires access to the key. This setting gives the scanner role access to all KMS keys, but scoped for use only through Amazon S3. That way, when new buckets come online with new encryption keys, or you change the keys on existing buckets, the scanning agent is automatically ready to go without a hitch.

Alternatively, you can select No and assign keys to the scanner role manually, one at a time. Check this troubleshooting writeup for more information. This value can also be changed post-deployment by following the troubleshooting link.
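If you select No, the kind of key-by-key grant involved can be sketched as an inline IAM policy on the scanning agent role. Everything below (role name, policy name, account, region, key ID) is a placeholder for illustration, not the product's actual naming:

```shell
# Placeholder inline policy allowing a scanner role to use one KMS key,
# scoped so the key may only be used through Amazon S3 (kms:ViaService).
POLICY='{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
    "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
    "Condition": {"StringEquals": {"kms:ViaService": "s3.us-east-1.amazonaws.com"}}
  }]
}'

# Check the document is valid JSON before applying it:
echo "$POLICY" | python3 -m json.tool >/dev/null && echo "policy JSON is valid"

# Attaching it requires AWS credentials (names here are placeholders):
# aws iam put-role-policy --role-name ScanningAgentRole \
#   --policy-name AllowScanKmsKey --policy-document "$POLICY"
```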

Auto Assign Public IPs - Console and Scanning Agents

The default value is Enabled, which assigns public IPs to the Console and Scanning Agent tasks. For scenarios where you will not leverage the public IP, or do not want an IP assigned at all, you can switch one or both values to Disabled and we will not assign a public IP.

Agent Auto-Scaling Configuration

The default selections will set you up with a scanning agent running 24x7x365. Depending on your workflow and your time-to-verdict requirements, you may not need a scanning agent running all the time. Changing the Only Run Scanning Agents When Files are in Queue option to Yes will shut the scanning agents down completely when there is no work to be done. We call this feature Smart Scan. This is great for cost efficiency, but it does slow down how quickly a file is scanned, as it takes 60-90 seconds to spin up a scanning agent. Decide what your requirements are and then fill this section out.

When you switch Only Run ... to Yes, you must change the Minimum Number of Agents Per Region to 0. Maximum Number of Running Agents Per Region can be set to any value: leave it high, or reduce it to some smaller number so you know exactly how many agents can spin up at any given time.

Number of Messages in Queue to Trigger Scaling determines when the first or next scanning agent spins up. In Smart Scan mode, it determines when the first agent spins up. Most customers use Smart Scan with a value of 1, but some prefer to let the work build up a bit before spinning up a scanning agent. Choose a number that makes sense for your organization.
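The interplay of these settings can be sketched as the scaling decision they drive. This is illustrative pseudologic only, not the product's implementation, and it assumes the Smart Scan configuration with a minimum of 0 agents:

```shell
# Illustrative only: given the queue depth, the number of running agents,
# the trigger threshold, and the per-region maximum, decide the scaling
# action the settings above imply (minimum agents assumed to be 0).
decide_agents() {
  queued=$1; running=$2; trigger=$3; max=$4
  if [ "$queued" -ge "$trigger" ] && [ "$running" -lt "$max" ]; then
    echo "scale-up"
  elif [ "$queued" -eq 0 ] && [ "$running" -gt 0 ]; then
    echo "scale-down"
  else
    echo "hold"
  fi
}

decide_agents 5 0 1 4   # prints "scale-up": work queued, no agent running
decide_agents 0 2 1 4   # prints "scale-down": queue drained
```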

Alternatively, you can create Scheduled Scans that allow you to only spin up scanning agents on a schedule basis to scan your objects (all or new).

Note: all of these settings can be changed after the fact from our Management Console's Agent Settings page.

Optional Load Balancer Configuration

By default, the stack will deploy without the use of a Load Balancer. Instead, if you deploy using public subnets, we register your console IP with a subdomain tied to a domain in a Route53 hosted zone inside one of our Cloud Storage Security AWS accounts. If you deploy using private subnets, you will need to use the private IP generated by the ECS task that hosts the console, as mentioned here.

There are times when this will not work for your use case. Two such scenarios are:

  1. You want to use your own DNS and SSL cert

  2. You want to deploy using private subnets behind a NAT but you also want a persistent URL that you can use to access the management console, instead of the private IP address of the console service task which will change anytime the console service reboots.

These are both great reasons to leverage a load balancer.

Required Fields

There are 5 fields you must provide values for in your CFT in order to stand up a load balancer:

Use a Load Balancer

  • Use a Load Balancer for the Console - change this to Yes

You'll need to specify that you want to use a load balancer. If this option is set to No the load balancer will not be deployed.

SSL Certificate

  • SSL Certificate ARN - specify the ARN to a certificate accessible in-region

While load balancers in AWS do not require SSL certificates in order to be stood up, we are security-centric and require an SSL certificate as part of the LB setup process. We usually recommend a wildcard SSL certificate, since most of our customers access their console through a subdomain. This SSL certificate needs to either be requested through AWS Certificate Manager (ACM) OR created with whatever third-party certificate authority you use and then imported into ACM.

If you are unfamiliar with ACM or importing certificates into ACM, this section of the AWS ACM User Guide discusses importing certificates.

If you are exporting the certificate from a third-party it may not be in the PEM format AWS requires for importing certificates. If that's the case, AWS offers a blog article that discusses converting the certificate file to the PEM format using OpenSSL. You can read that article here. While this article discusses converting from PFX to PEM format, other common formats for SSL certificates do exist and you'll be able to find resources online for converting the format using OpenSSL.
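As an example of that conversion, the OpenSSL commands look like the following. A throwaway self-signed certificate is generated first so the steps can be run end to end; with a real third-party certificate you would start at the final conversion command with your own .pfx file (all file names here are placeholders):

```shell
# Create a throwaway key + self-signed certificate, bundle them into a PFX,
# then extract them back out in the PEM format ACM expects on import.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=example.internal"
openssl pkcs12 -export -inkey key.pem -in cert.pem -out bundle.pfx -passout pass:demo

# The actual PFX -> PEM conversion step:
openssl pkcs12 -in bundle.pfx -passin pass:demo -nodes -out converted.pem
grep -q "BEGIN CERTIFICATE" converted.pem && echo "PEM certificate extracted"
```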

Once you have an SSL certificate that you can use in ACM you will need to take the ARN value of the certificate and paste it into the SSL Certificate ARN field of your CFT.

Load Balancer Scheme

  • Should the load balancer be internet-facing or internal?

If you are using public subnets for your Load Balancer then it would make sense to set this to internet-facing. If you are using private subnets then you should set this to internal.

Subnet A ID and Subnet B ID

  • Load Balancer Subnet A ID - specify the Subnet ID found in the AWS console

  • Load Balancer Subnet B ID - specify a different Subnet ID found in the AWS console

Similar to your Management Console, you'll need to select two public or private subnets for your Load Balancer. You can choose the same subnets that you used for your Management Console if you want to reuse them.

Load Balancer subnets and the Console subnets must be in the same Availability Zones (this is an AWS requirement). For example, if the Console is placed in private-subnet1 in AZ-a and private-subnet2 in AZ-b, then the load balancer should be placed in public-subnet1 in AZ-a and public-subnet2 in AZ-b. If the subnets' Availability Zones do not match, the load balancer will not be able to communicate with the console.

Submit the CFT and optionally Configure a Custom URL

Once the above fields have been configured, you are ready to submit the CFT and let it run. After the load balancer has been stood up, a Load Balancer URL will be generated.

If you use the Load Balancer URL you may receive a browser warning that the SSL certificate is not valid, since the certificate is associated with a different domain/subdomain. It is perfectly fine to continue using just the Load Balancer URL, however if you'd like to use a custom URL you can configure a CNAME record, either in Route53 or wherever you manage your DNS records, that points a custom URL to the Load Balancer URL.
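If you manage the zone in Route53 yourself, the CNAME can be created with `aws route53 change-resource-record-sets`, passing a change batch like the sketch below. The record name, Load Balancer URL, and hosted zone ID are all placeholders:

```shell
# Placeholder change batch pointing a custom URL at the Load Balancer URL.
CHANGE_BATCH='{
  "Comment": "Point a custom console URL at the Load Balancer URL",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "console.example.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "my-console-lb-1234567890.us-east-1.elb.amazonaws.com"}]
    }
  }]
}'

# Check the document is valid JSON before applying it:
echo "$CHANGE_BATCH" | python3 -m json.tool >/dev/null && echo "change batch JSON is valid"

# Applying it requires AWS credentials and your hosted zone ID:
# aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> \
#   --change-batch "$CHANGE_BATCH"
```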

If you use Route53 you can setup the custom URL as part of the optional fields mentioned below.

Optional Fields

The remaining fields are optional. If you happen to be using Route53 for your domain, you can leverage these fields to set up the DNS value (although this can be done after deployment as well). If you are managing DNS some other way, you can skip these fields and set up DNS as you normally would, pointing at the Load Balancer URL.

  • Register a subdomain on Route53 - change this to Yes

  • Hosted Zone Name - specify the base domain value

  • Subdomain - specify the unique meaningful name to access the Antivirus for Amazon S3 application

  • Info Opt-Out - opt out of sending any DNS and version information to us

  • Container Security Group ID - optionally specify your own security groups for us to utilize (see Optional Security Group ID below)

Optional Local Image Repository

If you have a requirement to host the container image repositories yourself, you can specify an account number in this field to tell the Console and Scanning Agents where to retrieve their images. Simply create the repos in your own account, name them as required (cloudstoragesecurity/console and cloudstoragesecurity/agent), pull the images from our repos and push them into your own. When the console or scanning agents boot up, they will look to your local repo for updates.

How to Setup Local ECR Mirroring For CSS Console & Agent Repositories

  1. Go to CodeBuild

  2. Go to "Build projects"

  3. Click "Create build project"

  4. Provide a name, such as "ReplicateCloudStorageSecEcr"

  5. Choose "No source" for Source provider

  6. Select Managed image - Ubuntu - Standard - aws/codebuild/standard:5.0 - Always use latest

  7. Check box for "Enable this flag if you want to build Docker images..."

  8. Either create a new service role or point at an existing one you have (that you can modify to add additional permissions to)

  9. Expand Additional configuration.

  10. Add environment variables:

  11. Name: AWS_DEFAULT_REGION Value: the region name you are placing this project and the ECR repos in (for example, us-east-1, but your specific region)

  12. Name: AWS_ACCOUNT_ID Value: the account ID you are placing this project and the ECR repos in

  13. You can leave everything else in the additional config as default settings, or customize as desired (vpcs/subnets/etc)

  14. Buildspec - switch to editor, and paste in the provided code

  15. Logs - customize as desired

  16. Click "Create build project"

  17. In the new build project, go to "Build triggers" and if desired, add a trigger for the project to run at the desired interval (recommended to run daily)

  18. In IAM, go to the role that was created or the existing role you pointed to, and add a new inline policy matching the one provided. Keep in mind that you have to fill the <region> and <your_account_id> pieces.


version: 0.2

phases:
  build:
    commands:
      - echo Logging in to CloudStorageSec Amazon ECR repos...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin 564477214187.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - echo Logging in to local Amazon ECR repos...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - echo Build started on `date`
      - echo Getting latest tags...
      - LATEST_CSS_CONSOLE_TAG=$(aws ecr list-images --registry-id 564477214187 --repository-name cloudstoragesecurity/console --region $AWS_DEFAULT_REGION | jq -r '.imageIds | sort_by(.imageTag) | reverse | .[] | select(.imageTag != null) | .imageTag' | head -n 1)
      - LATEST_CSS_AGENT_TAG=$(aws ecr list-images --registry-id 564477214187 --repository-name cloudstoragesecurity/agent --region $AWS_DEFAULT_REGION | jq -r '.imageIds | sort_by(.imageTag) | reverse | .[] | select(.imageTag != null) | .imageTag' | head -n 1)
      - LATEST_CUSTOM_CONSOLE_TAG=$(aws ecr list-images --registry-id $AWS_ACCOUNT_ID --repository-name cloudstoragesecurity/console --region $AWS_DEFAULT_REGION | jq -r '.imageIds | sort_by(.imageTag) | reverse | .[] | select(.imageTag != null) | .imageTag' | head -n 1)
      - LATEST_CUSTOM_AGENT_TAG=$(aws ecr list-images --registry-id $AWS_ACCOUNT_ID --repository-name cloudstoragesecurity/agent --region $AWS_DEFAULT_REGION | jq -r '.imageIds | sort_by(.imageTag) | reverse | .[] | select(.imageTag != null) | .imageTag' | head -n 1)
      - |-
        if [ "$LATEST_CSS_CONSOLE_TAG" != "$LATEST_CUSTOM_CONSOLE_TAG" ]; then
            echo "Newer Console image available. Copying image..."
            docker pull 564477214187.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/cloudstoragesecurity/console:$LATEST_CSS_CONSOLE_TAG
            docker tag 564477214187.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/cloudstoragesecurity/console:$LATEST_CSS_CONSOLE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/cloudstoragesecurity/console:$LATEST_CSS_CONSOLE_TAG
            docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/cloudstoragesecurity/console:$LATEST_CSS_CONSOLE_TAG
            echo "Console $LATEST_CSS_CONSOLE_TAG Copied"
        else
            echo "Console image is up to date."
        fi
      - |-
        if [ "$LATEST_CSS_AGENT_TAG" != "$LATEST_CUSTOM_AGENT_TAG" ]; then
            echo "Newer Agent image available. Copying image..."
            docker pull 564477214187.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/cloudstoragesecurity/agent:$LATEST_CSS_AGENT_TAG
            docker tag 564477214187.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/cloudstoragesecurity/agent:$LATEST_CSS_AGENT_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/cloudstoragesecurity/agent:$LATEST_CSS_AGENT_TAG
            docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/cloudstoragesecurity/agent:$LATEST_CSS_AGENT_TAG
            echo "Agent $LATEST_CSS_AGENT_TAG Copied"
        else
            echo "Agent image is up to date."
        fi
      - echo Build completed on `date`

You are now responsible for ensuring your local repos get updated. Our solution will now only look to your repo for updates.


    "Version": "2012-10-17",
    "Statement": [
            "Effect": "Allow",
            "Action": [
            "Resource": [
            "Effect": "Allow",
            "Action": [
            "Resource": [
            "Effect": "Allow",
            "Action": [
            "Resource": [

Optional AWS Resource Renaming

Do not change after the initial deployment. If you feel you need to rename any resources after deployment, please Contact Us. There are ways to rename most services, but you need to be aware of the impacts.

By default, we uniquely name each resource we deploy with a CloudStorageSec- prefix. We often include the resource type (i.e. Queue) and append the unique application ID. Customers with a formal naming standard often want our resource naming to follow suit. The fields in this section allow you to rename all resources. Be aware that you should do this at the time of deployment; it can be done after, but it has to be done with care.

We are calling these "prefix naming" because we must still append the unique appID to each resource.

Optional Security Group ID

You may specify your own security groups for us to utilize. We will add the necessary ingress routes to them. This is generally only necessary if you have an internal policy that requires all resources to be tagged at creation time.

We do tag all resources, but CloudFormation will perform that tagging action in a separate step for Security Groups, so there is a short period of time during which they are untagged.
