Deployment Details

Learn more about our flexible deployment options.

We offer a number of flexible deployment options based on what you need. These include:

  • The Standard Deployment, which requires filling out only 5 fields in the CloudFormation Template

  • A completely Private Deployment, where all of our components run in private VPCs and private Subnets with no public IPs assigned at all

  • An in-between option, where you have public access (a public Load Balancer) while still running all solution components in a private VPC and Subnets

You can mix and match as well as incorporate VPC Endpoints to keep as much traffic as possible going over the AWS backbone.

All components deployed, created and installed run inside of your account. We do not host any of them and we never send any of your objects/files outside of your account. All scanning is performed close to the data inside your account(s) and in-region.

As you'll learn below, we may send some data (IP address, account email, version numbers) to a Cloud Storage Security AWS account to assist you and your users with accessing your application. You can opt out of this if you don't want that information to be reported.

No matter the deployment option you choose, you can also leverage local signature updates for both the Sophos engine and the ClamAV engine through our private mirror functionality.

Standard Deployment

The Standard Deployment is the simplest deployment. After providing only 5 inputs to the CloudFormation Template you will have a deployed and running solution in ~5 minutes. This deployment expects the Console and the Agent(s) to be placed in VPCs and Subnets that have an Internet Gateway (IGW) allowing for outbound traffic.

This is a typical setup for VPCs and Subnets. The outbound routing allows ECS to pull down images from ECR, allows the Console and Agent(s) to communicate with required AWS Services and provides access to the management UI. Although public IPs are assigned, access is still controlled through Security Groups and can be limited to specific IP ranges.
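Since the Standard Deployment expects that outbound route, a quick sanity check before launching the stack can save a failed deployment. Below is a minimal boto3 sketch (not part of the product; the `check_subnet` helper and the subnet ID are illustrative) that tests whether a subnet's route table sends 0.0.0.0/0 to an Internet Gateway:

```python
def has_igw_route(routes):
    """Return True if any route sends 0.0.0.0/0 to an Internet Gateway."""
    return any(
        r.get("DestinationCidrBlock") == "0.0.0.0/0"
        and r.get("GatewayId", "").startswith("igw-")
        for r in routes
    )

def check_subnet(subnet_id):
    """Look up the subnet's associated route table and test it.

    Requires AWS credentials; returns False if the subnet has no
    explicitly associated route table.
    """
    import boto3  # local import so the helper above needs no AWS setup
    ec2 = boto3.client("ec2")
    tables = ec2.describe_route_tables(
        Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
    )["RouteTables"]
    return any(has_igw_route(t["Routes"]) for t in tables)
```

If `check_subnet` returns False for the subnets you intend to use, the Standard Deployment's image pulls and service calls will not be able to reach the internet.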

With this deployment, we register your application subdomain with a Route53 Hosted Zone we host and manage in one of the Cloud Storage Security AWS accounts. This allows you to have consistent access to your application. If you'd prefer to manage the domain and SSL cert yourself, you can leverage the Application Load Balancer options discussed below.

Private Deployment

Private Deployments are defined by locking down the solution components (Console and Agents) such that they do not have public IPs. You can still provide public access if desired while locking everything else down. Whether the driver is best practices, compliance or internal rules, you can deploy and leverage the solution as needed.

You'll see the deployment options below leveraging Application Load Balancers. These can be internet-facing or internal and can even be leveraged with the Standard Deployment. You do not have to leverage an ALB in a private deployment, but there are a number of reasons you might want to.

First off, AWS Fargate tasks are not assigned persistent IP addresses. As a result, the IP address can change underneath you, requiring you to look it up. You may also decide you'd like to manage or apply your own domain and SSL certificate for accessing the application. A load balancer allows you to accomplish all of these things: a persistent access point, your own domain and your own certificates.
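To see why the non-persistent IPs matter, here is a hedged boto3 sketch (the helper names, cluster and service names are illustrative) of what looking up a Fargate task's current private IP involves each time it changes:

```python
def task_private_ip(task):
    """Pull the private IPv4 address out of an ECS task description."""
    for attachment in task.get("attachments", []):
        for detail in attachment.get("details", []):
            if detail.get("name") == "privateIPv4Address":
                return detail.get("value")
    return None

def lookup_current_ips(cluster, service):
    """List the service's running tasks and describe them.

    Requires AWS credentials; cluster/service names are placeholders.
    """
    import boto3  # local import so the helper above runs without boto3
    ecs = boto3.client("ecs")
    arns = ecs.list_tasks(cluster=cluster, serviceName=service)["taskArns"]
    tasks = ecs.describe_tasks(cluster=cluster, tasks=arns)["tasks"]
    return [task_private_ip(t) for t in tasks]
```

A load balancer makes this lookup unnecessary: its DNS name stays stable while the tasks behind it come and go.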

Public Load Balancer Option

With this option we deploy an internet-facing load balancer on your behalf that is publicly available based on your Security Group rules, providing easy access over HTTPS.
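Since reachability is governed by the Security Group rules, you can tighten the internet-facing load balancer to a known IP range. A minimal sketch (the helper name, Security Group ID and CIDR are illustrative) of adding an HTTPS-only rule for one range:

```python
def https_ingress_rule(cidr):
    """Build a Security Group permission allowing HTTPS only from one CIDR."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": cidr, "Description": "Console access"}],
    }

def restrict_console_access(security_group_id, cidr):
    """Apply the rule to the load balancer's Security Group.

    Requires AWS credentials; the Security Group ID is a placeholder.
    """
    import boto3  # local import so the rule builder runs without boto3
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=security_group_id,
        IpPermissions=[https_ingress_rule(cidr)],
    )
```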

Private Load Balancer Option

With this option we deploy an internal load balancer on your behalf that will be assigned only internal/private IPs. You must be able to access this network, typically through VPN or Direct Connect, in order to access the application.

Leveraging VPC Endpoints

In the previous deployment options you assigned either public or private IPs to the solution components and controlled their privacy with either an Internet Gateway or a NAT Gateway. In either scenario, all AWS API calls went out over the internet. There are times when you may not want this to happen at all. AWS provides a mechanism, VPC Endpoints, to keep API call traffic on the AWS backbone rather than the internet. VPC Endpoints can be mixed and matched into any of the deployment options, public or private. As seen below, not all services the Console and Agent use can leverage VPC Endpoints, but the overall reduction in traffic over the public internet is significant, so it is worth considering.

Important: The Security Group associated with the ecr.dkr and ecr.api endpoints must have an inbound rule allowing all traffic from the CIDR range of the VPC. Also, the subnets used for the Console must be the ones included in the VPC Endpoints.
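To illustrate the required inbound rule, here is a minimal boto3 sketch (the helper names and the endpoint Security Group ID are illustrative, not part of the solution) that allows all traffic from the VPC's CIDR range:

```python
def vpc_cidr_rule(vpc_cidr):
    """Permission allowing all traffic from the VPC's CIDR range,
    as the ecr.dkr and ecr.api endpoint Security Group requires."""
    return {"IpProtocol": "-1", "IpRanges": [{"CidrIp": vpc_cidr}]}

def allow_vpc_traffic(endpoint_sg_id, vpc_id):
    """Look up the VPC's CIDR and apply the rule to the endpoint SG.

    Requires AWS credentials; both IDs are placeholders.
    """
    import boto3  # local import so the rule builder runs without boto3
    ec2 = boto3.client("ec2")
    vpc_cidr = ec2.describe_vpcs(VpcIds=[vpc_id])["Vpcs"][0]["CidrBlock"]
    ec2.authorize_security_group_ingress(
        GroupId=endpoint_sg_id,
        IpPermissions=[vpc_cidr_rule(vpc_cidr)],
    )
```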

The Console leverages 3 services that do not have VPC Endpoints today: AWS Marketplace, Cognito and AppConfig. To use these services you will need to provide either a NAT Gateway or a Proxy Server to allow for an outbound route. The need for access to AWS Marketplace can be bypassed if you choose to subscribe to our BYOL listing.

The Agent requires only one service: AppConfig.

A VPC Endpoint for Security Hub is optional and only required if you choose to use our Security Hub integration.

You can also use the following script to set up the needed Endpoints. Replace the variables and the AWS CLI profile name before running it:

import boto3

def create_vpc_endpoints(vpc_id, subnets, interface_service_names, gateway_service_names, route_table_id):
    # Replace aws-cli-profile-name with your AWS CLI profile name
    boto3.setup_default_session(profile_name='aws-cli-profile-name')
    ec2_client = boto3.client('ec2')

    try:
        vpc_endpoint_ids_interface = []

        for service_name in interface_service_names:

            response = ec2_client.create_vpc_endpoint(
                VpcId=vpc_id,
                ServiceName=service_name,
                SubnetIds=subnets,
                VpcEndpointType='Interface'
            )
            vpc_endpoint_ids_interface.append(response['VpcEndpoint']['VpcEndpointId'])
            print(f"VPC Endpoint for {service_name} created with ID: {response['VpcEndpoint']['VpcEndpointId']}")

        print("Interface VPC Endpoints created successfully!")

        vpc_endpoint_ids_gateway = []

        for service_name in gateway_service_names:
            response = ec2_client.create_vpc_endpoint(
                VpcId=vpc_id,
                ServiceName=service_name,
                VpcEndpointType='Gateway',
                RouteTableIds=[route_table_id]
            )
            vpc_endpoint_id = response['VpcEndpoint']['VpcEndpointId']
            vpc_endpoint_ids_gateway.append(vpc_endpoint_id)
            print(f"VPC Endpoint for {service_name} created with ID: {vpc_endpoint_id}")

        print("Gateway VPC Endpoints created successfully!")

        print("All VPC Endpoints created and associated with Route Table successfully!")     

        return vpc_endpoint_ids_interface, vpc_endpoint_ids_gateway
    
    except Exception as e:
        print("Error creating VPC endpoints:", str(e))
        return None

if __name__ == "__main__":
    # Replace these variables with your actual VPC ID and the desired regions
    vpc_id = 'vpc-id'
    subnets_id = ['subnet-id-A', 'subnet-id-B', 'subnet-id-C' ]
    console_region = 'console-region'
    route_table = 'route-table-id'

    interface_services_to_create = [ 
                        f'com.amazonaws.{console_region}.s3', f'com.amazonaws.{console_region}.cloudformation',f'com.amazonaws.{console_region}.logs', 
                        f'com.amazonaws.{console_region}.ssm',f'com.amazonaws.{console_region}.application-autoscaling', f'com.amazonaws.{console_region}.ecs',
                        f'com.amazonaws.{console_region}.ecs-agent', f'com.amazonaws.{console_region}.kms', f'com.amazonaws.{console_region}.sts',  
                        f'com.amazonaws.{console_region}.ecr.dkr', f'com.amazonaws.{console_region}.autoscaling',f'com.amazonaws.{console_region}.ecr.api', 
                        f'com.amazonaws.{console_region}.sns', f'com.amazonaws.{console_region}.sqs', f'com.amazonaws.{console_region}.ec2', f'com.amazonaws.{console_region}.elasticloadbalancing',
                        f'com.amazonaws.{console_region}.monitoring', f'com.amazonaws.{console_region}.ebs', f'com.amazonaws.{console_region}.securityhub']
    
    # The securityhub endpoint can be removed if you are not using Security Hub
    
    gateway_services_to_create = [f'com.amazonaws.{console_region}.s3', f'com.amazonaws.{console_region}.dynamodb']

    created_endpoint_ids = create_vpc_endpoints(vpc_id, subnets_id, interface_services_to_create, gateway_services_to_create, route_table)
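After the script finishes, you may want to confirm the endpoints reached the available state. A small sketch of checking this with describe_vpc_endpoints (the helper names are illustrative):

```python
def summarize_endpoints(endpoints):
    """Map each endpoint's service name to its state for a quick review."""
    return {e["ServiceName"]: e["State"] for e in endpoints}

def list_vpc_endpoints(vpc_id):
    """Fetch all endpoints in the VPC and summarize them.

    Requires AWS credentials; the VPC ID is a placeholder.
    """
    import boto3  # local import so the summarizer runs without boto3
    ec2 = boto3.client("ec2")
    resp = ec2.describe_vpc_endpoints(
        Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
    )
    return summarize_endpoints(resp["VpcEndpoints"])
```

Any endpoint stuck outside the available state is worth investigating before deploying the Console into those subnets.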

API Endpoint

The API Endpoint is another Agent Service that allows for an API-driven scan of files and objects. It can be mixed into any of the deployment options above. It earns a special call out because it has its own dedicated Application Load Balancer deployed for and associated with it. So you can have a deployment with one ALB fronting the Console and another ALB fronting the API Endpoint. Like the previously discussed load balancers, you can choose to make the ALB internet-facing or internal; it all depends on your needs.
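With two ALBs in play, it can help to confirm which one you are about to call and how it is exposed. A hedged boto3 sketch (the helper names and the name fragment you match on are illustrative; the actual load balancer name depends on your stack):

```python
def is_internet_facing(lb):
    """True if the load balancer is reachable from the public internet."""
    return lb["Scheme"] == "internet-facing"

def find_alb(name_fragment):
    """Find an ALB whose name contains name_fragment; report DNS name and exposure.

    Requires AWS credentials; returns None if nothing matches.
    """
    import boto3  # local import so the scheme check runs without boto3
    elbv2 = boto3.client("elbv2")
    for lb in elbv2.describe_load_balancers()["LoadBalancers"]:
        if name_fragment in lb["LoadBalancerName"]:
            return lb["DNSName"], is_internet_facing(lb)
    return None
```

If the API Endpoint's ALB turns out to be internal, remember that callers will need VPN or Direct Connect reachability, just as with the private Console option.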

We are not showing a multi-region deployment below, but if your requirements dictate it, you can deploy as many API Endpoints as needed so scanning can be close to your users/applications/data.
