Cloud Storage Security Help Docs

Deployment Details

Learn more about our flexible deployment options.



We offer a number of flexible deployment options based on what you need. This includes:

  • The Standard Deployment, which requires filling out only 5 fields in the CloudFormation Template

  • A completely Private Deployment where all of our components run in private VPCs and private Subnets with no public IPs assigned at all

  • An in-between option where you have public access (a public Load Balancer) while still running all solution components in a private VPC and Subnets

You can mix and match as well as incorporate VPC Endpoints to keep as much traffic as possible going over the AWS backbone.

All components deployed, created and installed run inside of your account. We do not host any of them and we never send any of your objects/files outside of your account. All scanning is performed close to the data inside your account(s) and in-region.

Our console requires specific permissions to manage its own infrastructure and integrate with your AWS services. To provide the most secure environment, we recommend deploying it in a dedicated AWS account. This isolates the product's permissions and prevents any unintended impact on other resources.

As you'll learn below, we may send some data (IP address, account email, version numbers) to a Cloud Storage Security AWS account to assist you and your users with accessing your application. You can opt out of this if you don't want that information to be reported.

No matter the deployment option you choose, you can also leverage local signature updates for both the Sophos engine and the ClamAV engine through our private mirror functionality.

Standard Deployment

The Standard Deployment is the simplest option. After providing only 5 inputs to the CloudFormation Template, you will have a deployed and running solution in ~5 minutes. This deployment expects the Console and the Agent(s) to be placed in VPCs and Subnets that have an Internet Gateway (IGW) allowing for outbound traffic.

This is a typical setup for VPCs and Subnets. The outbound routing allows ECS to pull images from ECR, allows the Console and Agent(s) to communicate with required AWS services, and provides access to the management UI. Although public IPs are assigned, access is still controlled through Security Groups and can be limited to specific IP ranges.

With this deployment, we register your application subdomain with a Route53 Hosted Zone we host and manage in one of the Cloud Storage Security AWS accounts. This allows you to have consistent access to your application. If you'd prefer to manage the domain and SSL cert yourself, you can leverage the Application Load Balancer options discussed below.

Private Deployment

Private Deployments are defined by locking down the solution components (Console and Agents) so that they do not have public IPs. You can still provide public access if desired while locking everything else down. Whether driven by best practices, compliance requirements or internal rules, you can deploy and leverage the solution as needed.

You'll see deployment options below that leverage Application Load Balancers. These can be internet-facing or internal and can even be used with the Standard Deployment. You do not have to use an ALB in a private deployment, but there are a number of reasons you might want to.

First off, AWS Fargate tasks are not assigned persistent IP addresses. As a result, the IP address can change underneath you, requiring you to look it up. You may also want to apply your own domain and SSL certificate for accessing the application. A load balancer lets you accomplish all of these things: a persistent access point, your own domain and your own certificates.
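To illustrate the "look it up" step: when the Console runs as a Fargate task without a load balancer, its current private IP can be recovered from the task's ENI attachment details via the ECS API. A minimal sketch with boto3; the cluster and service names are placeholders, not the product's actual resource names:

```python
def task_private_ips(describe_tasks_response):
    """Extract each Fargate task's private IPv4 address from an ECS
    DescribeTasks response (the awsvpc ENI info lives in attachment details)."""
    ips = []
    for task in describe_tasks_response.get("tasks", []):
        for attachment in task.get("attachments", []):
            for detail in attachment.get("details", []):
                if detail.get("name") == "privateIPv4Address":
                    ips.append(detail["value"])
    return ips

def lookup_console_ips(cluster, service):
    """Query ECS for the service's running tasks and return their private IPs.
    'cluster' and 'service' are placeholders for your actual resource names."""
    import boto3  # imported here so the helper above stays dependency-free
    ecs = boto3.client("ecs")
    arns = ecs.list_tasks(cluster=cluster, serviceName=service)["taskArns"]
    if not arns:
        return []
    return task_private_ips(ecs.describe_tasks(cluster=cluster, tasks=arns))
```

With a load balancer in front, this lookup becomes unnecessary because the ALB's DNS name stays stable while tasks churn.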

Public Load Balancer Option

With this option we deploy an internet-facing load balancer on your behalf that is publicly available based on your Security Group rules, providing easy access over HTTPS.

Private Load Balancer Option

With this option we deploy an internal load balancer on your behalf that is assigned only internal/private IPs. You must be able to reach this network, typically through a VPN or Direct Connect, in order to access the application.

Leveraging VPC Endpoints

In the previous deployment options you assigned either public or private IPs to the solution components and controlled their privacy with an Internet Gateway or a NAT Gateway. In either scenario, all AWS API calls went out over the internet. There are times when you may not want this to happen at all. AWS provides a mechanism, VPC Endpoints, to keep API call traffic on the AWS backbone rather than traveling over the internet. VPC Endpoints can be mixed and matched into any of the deployment options, public or private. As seen below, not all services the Console and Agent use can leverage VPC Endpoints, but this still greatly reduces the amount of traffic over the public internet, so it is worth considering.

Important: The Security Group associated with the ecr.dkr and ecr.api endpoints must have an inbound rule allowing all traffic from the CIDR range of the VPC. Also, the subnets used for the Console must be the ones included in the VPC Endpoints.
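The inbound rule described above can be added with a single EC2 API call. A sketch with boto3, consistent with the scripts below; the security group ID and VPC CIDR are placeholders you would substitute:

```python
def vpc_cidr_ingress_rule(vpc_cidr):
    """Build an IpPermissions entry allowing all traffic from the VPC's CIDR,
    as required for the Security Group on the ecr.dkr and ecr.api endpoints."""
    return [{
        "IpProtocol": "-1",  # -1 means all protocols and all ports
        "IpRanges": [{"CidrIp": vpc_cidr,
                      "Description": "All traffic from the VPC (ECR endpoints)"}],
    }]

def allow_vpc_traffic(security_group_id, vpc_cidr):
    """Apply the rule to the endpoint Security Group (IDs are placeholders)."""
    import boto3
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=security_group_id,
        IpPermissions=vpc_cidr_ingress_rule(vpc_cidr),
    )
```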

The Console leverages 3 services that do not have VPC Endpoints today: AWS Marketplace, Amazon Cognito, and AWS AppConfig. To use these services you must provide either a NAT Gateway or a Proxy Server to allow for an outbound route. The need for access to AWS Marketplace can be eliminated if you choose to subscribe to our BYOL listing.
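You can confirm which services lack an Interface endpoint in your region by comparing the services you need against what EC2 reports as available. A sketch; the service short names passed in are illustrative:

```python
def services_without_endpoints(required_short_names, available_service_names, region):
    """Return the required services that have no Interface endpoint in the region.
    'available_service_names' is the ServiceNames list returned by
    EC2 DescribeVpcEndpointServices."""
    available = set(available_service_names)
    return sorted(
        name for name in required_short_names
        if f"com.amazonaws.{region}.{name}" not in available
    )

def check_region(region, required_short_names):
    """Live check against EC2 (requires AWS credentials)."""
    import boto3
    ec2 = boto3.client("ec2", region_name=region)
    names = []
    for page in ec2.get_paginator("describe_vpc_endpoint_services").paginate():
        names.extend(page["ServiceNames"])
    return services_without_endpoints(required_short_names, names, region)
```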

A VPC Endpoint for Security Hub is optional and only required if you choose to use our Security Hub integration.

  1. The user must configure either AWS Direct Connect, a VPN, or a jumpbox to access the private subnet.

  2. The majority of AWS services are accessible over VPC Endpoints. The Console and Agent will attempt to contact all endpoint-enabled services using that method. All requests going through endpoints travel over the AWS global network infrastructure.

  3. For the services that do not have endpoints available, the Console and Agent will forward those requests to a proxy, if one is configured. The proxy sends requests through the configured NAT Gateway and makes calls over the internet to the remaining non-endpoint AWS services.

You can also use the following scripts to set up the needed Endpoints, one for the Console region and one for the Agent region:

Console region (replace the variables and the AWS CLI profile name):

import boto3

def create_vpc_endpoints(vpc_id, subnets, interface_service_names, gateway_service_names, route_table_id):
    # Replace 'aws-cli-profile-name' with your AWS CLI profile name
    boto3.setup_default_session(profile_name='aws-cli-profile-name')
    ec2_client = boto3.client('ec2')

    try:
        vpc_endpoint_ids_interface = []

        for service_name in interface_service_names:

            response = ec2_client.create_vpc_endpoint(
                VpcId=vpc_id,
                ServiceName=service_name,
                SubnetIds= subnets,
                VpcEndpointType='Interface'
            )
            vpc_endpoint_ids_interface.append(response['VpcEndpoint']['VpcEndpointId'])
            print(f"VPC Endpoint for {service_name} created with ID: {response['VpcEndpoint']['VpcEndpointId']}")

        print("Interface VPC Endpoints created successfully!")

        vpc_endpoint_ids_gateway = []

        for service_name in gateway_service_names:
            response = ec2_client.create_vpc_endpoint(
                VpcId=vpc_id,
                ServiceName=service_name,
                VpcEndpointType='Gateway',
                RouteTableIds= [route_table_id]
            )
            vpc_endpoint_id = response['VpcEndpoint']['VpcEndpointId']
            vpc_endpoint_ids_gateway.append(vpc_endpoint_id)
            print(f"VPC Endpoint for {service_name} created with ID: {vpc_endpoint_id}")

        print("Gateway VPC Endpoints created successfully!")

        print("All VPC Endpoints created and associated with Route Table successfully!")     

        return vpc_endpoint_ids_interface, vpc_endpoint_ids_gateway
    
    except Exception as e:
        print("Error creating VPC endpoints:", str(e))
        return None

if __name__ == "__main__":
    # Replace these variables with your actual VPC ID and the desired regions
    vpc_id = 'vpc-id'
    subnets_id = ['subnet-id-A', 'subnet-id-B', 'subnet-id-C' ]
    console_region = 'console-region'
    route_table = 'route-table-id'

    interface_services_to_create = [ 
                        f'com.amazonaws.{console_region}.s3', f'com.amazonaws.{console_region}.cloudformation',f'com.amazonaws.{console_region}.logs', 
                        f'com.amazonaws.{console_region}.ssm',f'com.amazonaws.{console_region}.application-autoscaling', f'com.amazonaws.{console_region}.ecs',
                        f'com.amazonaws.{console_region}.ecs-agent', f'com.amazonaws.{console_region}.kms', f'com.amazonaws.{console_region}.sts',  
                        f'com.amazonaws.{console_region}.ecr.dkr', f'com.amazonaws.{console_region}.autoscaling',f'com.amazonaws.{console_region}.ecr.api', 
                        f'com.amazonaws.{console_region}.sns', f'com.amazonaws.{console_region}.sqs', 
                        f'com.amazonaws.{console_region}.ec2', f'com.amazonaws.{console_region}.elasticloadbalancing',
                        f'com.amazonaws.{console_region}.monitoring', f'com.amazonaws.{console_region}.ebs',
                        f'com.amazonaws.{console_region}.securityhub', f'com.amazonaws.{console_region}.appconfig']
    
    # The securityhub endpoint can be removed if you are not using Security Hub
    
    gateway_services_to_create = [f'com.amazonaws.{console_region}.s3', f'com.amazonaws.{console_region}.dynamodb']

    created_endpoint_ids = create_vpc_endpoints(vpc_id, subnets_id, interface_services_to_create, gateway_services_to_create, route_table)

Agent region (replace the variables and the AWS CLI profile name):

import boto3

def create_vpc_endpoints(vpc_id, subnets, interface_service_names, gateway_service_names, route_table_id):
    # Replace 'aws-cli-profile-name' with your AWS CLI profile name
    boto3.setup_default_session(profile_name='aws-cli-profile-name')
    ec2_client = boto3.client('ec2')

    try:
        vpc_endpoint_ids_interface = []

        for service_name in interface_service_names:

            response = ec2_client.create_vpc_endpoint(
                VpcId=vpc_id,
                ServiceName=service_name,
                SubnetIds= subnets,
                VpcEndpointType='Interface'
            )
            vpc_endpoint_ids_interface.append(response['VpcEndpoint']['VpcEndpointId'])
            print(f"VPC Endpoint for {service_name} created with ID: {response['VpcEndpoint']['VpcEndpointId']}")

        print("Interface VPC Endpoints created successfully!")

        vpc_endpoint_ids_gateway = []

        for service_name in gateway_service_names:
            response = ec2_client.create_vpc_endpoint(
                VpcId=vpc_id,
                ServiceName=service_name,
                VpcEndpointType='Gateway',
                RouteTableIds= [route_table_id]
            )
            vpc_endpoint_id = response['VpcEndpoint']['VpcEndpointId']
            vpc_endpoint_ids_gateway.append(vpc_endpoint_id)
            print(f"VPC Endpoint for {service_name} created with ID: {vpc_endpoint_id}")

        print("Gateway VPC Endpoints created successfully!")

        print("All VPC Endpoints created and associated with Route Table successfully!")     

        return vpc_endpoint_ids_interface, vpc_endpoint_ids_gateway
    
    except Exception as e:
        print("Error creating VPC endpoints:", str(e))
        return None

if __name__ == "__main__":
    # Replace these variables with your actual VPC ID and the desired regions
    vpc_id = 'vpc-id'
    subnets_id = ['subnet-id-A', 'subnet-id-B', 'subnet-id-C' ]
    agent_region = 'agent-region'
    route_table = 'route-table-id'

    interface_services_to_create = [ 
                        f'com.amazonaws.{agent_region}.s3',f'com.amazonaws.{agent_region}.logs', 
                        f'com.amazonaws.{agent_region}.ssm',f'com.amazonaws.{agent_region}.application-autoscaling', f'com.amazonaws.{agent_region}.ecs',
                        f'com.amazonaws.{agent_region}.ecs-agent', f'com.amazonaws.{agent_region}.kms', f'com.amazonaws.{agent_region}.sts',  
                        f'com.amazonaws.{agent_region}.ecr.dkr', f'com.amazonaws.{agent_region}.autoscaling',f'com.amazonaws.{agent_region}.ecr.api', 
                        f'com.amazonaws.{agent_region}.sns', f'com.amazonaws.{agent_region}.sqs', f'com.amazonaws.{agent_region}.ec2',
                        f'com.amazonaws.{agent_region}.ebs', f'com.amazonaws.{agent_region}.securityhub', f'com.amazonaws.{agent_region}.appconfig']
    # The securityhub endpoint can be removed if you are not using Security Hub

    gateway_services_to_create = [f'com.amazonaws.{agent_region}.s3', f'com.amazonaws.{agent_region}.dynamodb']

    created_endpoint_ids = create_vpc_endpoints(vpc_id, subnets_id, interface_services_to_create, gateway_services_to_create, route_table)
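Newly created endpoints take a few minutes to provision, so after running either script it is worth confirming that every endpoint in the VPC has reached the "available" state before deploying. A sketch of that check; 'vpc_id' is a placeholder:

```python
def endpoint_states(describe_vpc_endpoints_response):
    """Map each endpoint's service name to its state ('pending', 'available', ...)
    from an EC2 DescribeVpcEndpoints response."""
    return {
        ep["ServiceName"]: ep["State"]
        for ep in describe_vpc_endpoints_response.get("VpcEndpoints", [])
    }

def pending_endpoints(describe_vpc_endpoints_response):
    """Service names of endpoints that are not yet 'available'."""
    return sorted(
        svc for svc, state in endpoint_states(describe_vpc_endpoints_response).items()
        if state != "available"
    )

def check_vpc(vpc_id):
    """Live check (requires AWS credentials); 'vpc_id' is your deployment VPC."""
    import boto3
    ec2 = boto3.client("ec2")
    resp = ec2.describe_vpc_endpoints(
        Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
    )
    return pending_endpoints(resp)
```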

Using our Private Deployment Template

If you need a way to deploy privately but aren't sure which subnets to use from your current VPCs or which VPC endpoints to setup, you can use our private deployment CFT to create a new VPC with all of the resources (subnets, VPC endpoints, etc.) needed to deploy your Management Console and scanning agents in a fully private environment.

This template will first deploy the following networking pieces before deploying the software:

  • A VPC and public + private subnets

  • All necessary VPC endpoints in a single region

  • An AWS network firewall

  • An application load balancer that will sit in front of your management console. You will need a valid SSL certificate from AWS Certificate Manager to be able to deploy.

AWS Network Firewall can be costly. If you are only testing, make sure you don't forget about the deployment and tear it down properly when finished.

Below is a screenshot of the parameters you can expect within the private deployment CloudFormation template.

API Endpoint

We are not showing a multi-region deployment below, but if your requirements dictate it, you can deploy as many API Endpoints as needed so scanning can be close to your users/applications/data.

Public API

  1. Users configure their application to upload a file via an HTTPS request to an API endpoint that the Scanning Agent sits behind.

  2. The request and file are processed through the Internet Gateway sitting at the entrance of the VPC where the API Agent resides.

  3. The request and file reach the Application Load Balancer, which resides in the public subnet.

  4. The file included in the request is scanned by the API Agent and a verdict is rendered.

  5. Using the same channels, the Agent returns a JSON response via HTTPS with the verdict on the file: Infected, Clean, etc.

  6. (Optional) The API Agent can upload Clean files to the S3 bucket of your choice.
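From the client's point of view, the flow above is a single HTTPS upload followed by a JSON verdict. A sketch using the third-party requests library; the host name and the /scan path are purely illustrative placeholders, not the product's actual route or authentication scheme:

```python
def scan_request(endpoint_host, path="/scan"):
    """Build the upload URL. Host and path are hypothetical placeholders."""
    return f"https://{endpoint_host}{path}"

def submit_file(endpoint_host, file_path):
    """POST a file to the scanning endpoint and return the JSON verdict.
    Network call; requires the 'requests' package and a reachable endpoint."""
    import requests
    with open(file_path, "rb") as fh:
        resp = requests.post(scan_request(endpoint_host),
                             files={"file": fh}, timeout=60)
    resp.raise_for_status()
    return resp.json()  # e.g. a verdict field such as Clean or Infected
```

Consult the API scanning documentation for the real endpoint path, request format, and authentication details.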

Private API

  1. The user must configure AWS Direct Connect, a VPN, or a jumpbox to access the private subnet. Using their method of choice, the user uploads the file via an HTTPS request to an API endpoint that the Scanning Agent sits behind.

  2. The request and file are processed through the Internet Gateway sitting at the entrance of the VPC where the API Agent resides.

  3. The request and file reach the Application Load Balancer, which resides in the private subnet.

  4. The file included in the request is scanned by the API Agent and a verdict is rendered.

  5. Using the same channels, the Agent returns a JSON response via HTTPS with the verdict on the file: Infected, Clean, etc.

  6. (Optional) The API Agent can upload Clean files to the S3 bucket of your choice.

You can always find the latest version of our private deployment template here.

The API Endpoint is another Agent service that allows for an API driven scan of files and objects. It can be mixed into any of the deployment options above. It earns a special call out because it has its own Application Load Balancer deployed for and associated with it, so you can have one ALB fronting the Console and another ALB fronting the API Endpoint. Like the previously discussed load balancers, you can choose to make this ALB internet-facing or internal, depending on your needs.

Read more about our API Scanning here.

[Diagrams: Standard Deployment Single Region; Standard Deployment Multi Region; Private LB Single Region; Private LB Multi Region; Private Deployment with VPC Endpoints (recommended Endpoints to configure); Public API; Private API]