Deployment Details
Learn more about our flexible deployment options.
We offer a number of flexible deployment options based on what you need. These include:
The Standard Deployment, which requires filling out only 5 fields in the CloudFormation Template.
A completely Private Deployment, where all of our components run in private VPCs and private Subnets with no public IPs assigned at all.
An in-between option where you have public access (a public Load Balancer) while still running all solution components in a private VPC and Subnets.
You can mix and match these options, as well as incorporate VPC Endpoints to keep as much traffic as possible on the AWS backbone.
All components deployed, created and installed run inside of your account. We do not host any of them and we never send any of your objects/files outside of your account. All scanning is performed close to the data inside your account(s) and in-region.
Our console requires specific permissions to manage its own infrastructure and integrate with your AWS services. To provide the most secure environment, we recommend deploying it in a dedicated AWS account. This isolates the product's permissions and prevents any unintended impact on other resources.
As you'll learn below, we may send some data (IP address, account email, version numbers) to a Cloud Storage Security AWS account to assist you and your users with accessing your application. You can opt out of this if you don't want that information to be reported.
No matter the deployment option you choose, you can also leverage local signature updates for both the Sophos engine and the ClamAV engine through our private mirror functionality.
The Standard Deployment is the simplest option. After providing only 5 inputs to the CloudFormation Template, you will have a deployed and running solution in ~5 minutes. This deployment expects the Console and the Agent(s) to be placed in VPCs and Subnets that have an Internet Gateway (IGW) allowing for outbound traffic.
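If you script your deployments, the launch can also be done with the AWS CLI. The template URL and parameter keys below are placeholders for illustration only, not the template's actual inputs; a minimal sketch might look like this:

```bash
# Hypothetical sketch only: the template URL and parameter keys are placeholders.
# Use the quick-launch link and parameter names provided with your subscription.
aws cloudformation create-stack \
  --stack-name css-console \
  --template-url "https://<bucket>.s3.amazonaws.com/<standard-template>.yaml" \
  --parameters \
      ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0 \
      ParameterKey=SubnetId,ParameterValue=subnet-aaaa1111 \
      ParameterKey=AdminEmail,ParameterValue=admin@example.com \
  --capabilities CAPABILITY_NAMED_IAM \
  --profile my-profile \
  --region us-east-1
```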
This is a typical setup for VPCs and Subnets. The outbound routing allows ECS to pull down images from ECR, allows the Console and Agent(s) to communicate with required AWS Services, and provides access to the management UI. Although public IPs are assigned, control is still maintained through Security Groups, and access can be limited to specific IP ranges.
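For example, limiting management UI access to a known IP range is an ordinary Security Group rule. A minimal sketch, assuming a hypothetical group ID and CIDR:

```bash
# Allow HTTPS to the Console only from a specific corporate CIDR range.
# The Security Group ID and CIDR are placeholders -- substitute your own.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.0/24
```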
With this deployment, we register your application subdomain with a Route53 Hosted Zone we host and manage in one of the Cloud Storage Security AWS accounts. This allows you to have consistent access to your application. If you'd prefer to manage the domain and SSL cert yourself, you can leverage the Application Load Balancer options discussed below.
Private Deployments are defined by locking down the solution components (Console and Agents) so that they do not have public IPs. You can still provide public access if desired while locking everything else down. Whether it is driven by best practices, compliance or internal rules, you can deploy and leverage the solution as needed.
You'll see the deployment options below leveraging Application Load Balancers. These can be internet-facing or internal, and can even be leveraged with the Standard Deployment. You do not have to leverage an ALB in a private deployment, but there are a number of reasons you might want to.
First off, AWS Fargate tasks are not assigned persistent IP addresses. As a result, the IP address can change underneath you, requiring you to look it up again. You may also decide you'd like to manage or apply your own domain and SSL certificate for accessing the application. A load balancer allows you to accomplish all of these things: a persistent access point, your own domain, and your own certificates.
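If you do bring your own domain, the certificate typically comes from AWS Certificate Manager; the issued certificate's ARN is what you then supply when configuring the HTTPS listener on the ALB. A short sketch, assuming a hypothetical domain name:

```bash
# Request a public certificate for your own console domain (DNS validation).
# The domain name is a placeholder -- use the subdomain you plan to point at the ALB.
aws acm request-certificate \
  --domain-name console.example.com \
  --validation-method DNS
```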
With this option we deploy an internet-facing load balancer on your behalf that will be publicly available based on your Security Group rules, giving you easy access over HTTPS.
With this option we deploy an internal load balancer on your behalf that will be assigned only internal/private IPs. You must be able to access this network, typically through VPN or Direct Connect, in order to access the application.
In the previous deployment options you assigned either public or private IPs to the solution components, and you controlled their privacy by utilizing either an Internet Gateway or a NAT Gateway. In either scenario, all AWS API calls went out over the internet. There are times when you may not want this to happen at all. AWS provides mechanisms (VPC Endpoints) to keep the API call traffic on the AWS backbone so it does not travel over the internet. VPC Endpoints can be mixed and matched into any of the deployment options, public or private. As seen below, not all services the Console and Agent use can leverage VPC Endpoints, but this is still a great reduction in the amount of traffic over the public internet, so it is worth considering.
Important: The Security Group associated with the ecr.dkr and ecr.api endpoints must have an inbound rule allowing all traffic from the CIDR range of the VPC. Also, the subnets used for the Console must be included in the VPC Endpoints.
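As a concrete illustration of that rule, the ingress entry might be added like this (the Security Group ID and VPC CIDR are placeholders):

```bash
# Allow all traffic from the VPC CIDR to the Security Group used by the
# ecr.dkr and ecr.api interface endpoints. The ID and CIDR are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aabbccddeeff0011 \
  --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=10.0.0.0/16}]'
```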
The Console leverages 3 services that do not have VPC Endpoints today: AWS Marketplace, Amazon Cognito, and AWS AppConfig. To use these services you will be required to provide either a NAT Gateway or a Proxy Server to allow for an outbound route. The need for access to AWS Marketplace can be bypassed if you choose to subscribe to our BYOL listing.
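If you take the NAT Gateway route, the pieces involved are an Elastic IP, the gateway itself in a public subnet, and a default route in the private route table. A minimal sketch with placeholder IDs:

```bash
# Allocate an Elastic IP for the NAT Gateway.
aws ec2 allocate-address --domain vpc

# Create the NAT Gateway in a public subnet (all IDs are placeholders).
aws ec2 create-nat-gateway \
  --subnet-id subnet-public1111 \
  --allocation-id eipalloc-0123456789abcdef0

# Point the private subnets' default route at the NAT Gateway so the Console
# can reach AWS Marketplace, Amazon Cognito, and AWS AppConfig.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0123456789abcdef0
```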
A VPC Endpoint for Security Hub is optional and only required if you choose to use our Security Hub integration.
The user must configure either AWS Direct Connect, a VPN, or a jumpbox to access the private subnet.
The majority of AWS services are accessible over VPC Endpoints. The Console and Agent will attempt to contact all endpoint-enabled services using that method. All requests going through endpoints travel over the AWS global network infrastructure.
For the services that do not have endpoints available, the Console and Agent will forward those requests to a proxy, if it is configured. The proxy will send requests over the configured NAT Gateway and make calls over the internet to the remaining non-endpoint AWS Services.
If you are creating the endpoints yourself with the AWS CLI, the sketch below shows the general shape of the calls. The IDs, region, and service names are placeholders; replace the variables and the awscli profile name with your own values.
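```bash
# Create an interface endpoint for one of the endpoint-enabled services
# (repeat per service, e.g. ecr.api, ecr.dkr, etc.). All IDs are placeholders.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ecr.api \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
  --security-group-ids sg-0aabbccddeeff0011 \
  --private-dns-enabled \
  --profile my-profile \
  --region us-east-1

# S3 can use a Gateway endpoint attached to the private route table instead.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0 \
  --profile my-profile \
  --region us-east-1
```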
If you need a way to deploy privately but aren't sure which subnets to use from your current VPCs or which VPC endpoints to set up, you can use our private deployment CFT to create a new VPC with all of the resources (subnets, VPC endpoints, etc.) needed to deploy your Management Console and scanning agents in a fully private environment.
This template will first deploy the following networking pieces before deploying the software:
A VPC and public + private subnets
All necessary VPC endpoints in a single region
An AWS Network Firewall
An application load balancer that will sit in front of your management console. You will need a valid SSL certificate from AWS Certificate Manager to be able to deploy.
AWS Network Firewalls can be costly. If you are only performing testing make sure you don't forget about the deployment and properly tear it down.
Below is a screenshot of the parameters you can expect within the private deployment CloudFormation template.
You can always find the latest version of our private deployment template here.
The API Endpoint is another Agent Service that allows for an API-driven scan of files and objects. It can be mixed into any of the deployment options above. It earns a special call out because it has its own Application Load Balancer deployed for and associated with it. So you can have a deployment that has an ALB fronting the Console and then another ALB fronting the API Endpoint. Like the previously discussed load balancers, you can choose to make the ALB internet-facing or internal; it all depends on your needs.
We are not showing a multi-region deployment below, but if your requirements dictate it, you can deploy as many API Endpoints as needed so scanning can be close to your users/applications/data.
Users can configure their application to upload files via an HTTPS request to an API endpoint that the Scanning Agent sits behind.
The request and file are processed through the Internet Gateway sitting at the entrance of the VPC where the API Agent resides.
The request and file reach the Application Load Balancer, which resides in the public subnet.
The file included in the request is scanned by the API Agent and a verdict is rendered by the Agent.
Using the same channels, the Agent returns a JSON response via HTTPS with the decision on the file: Infected, Clean, etc.
(Optional) The API Agent can upload Clean files to the S3 bucket of your choice.
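For illustration only, a request from the calling application could look something like the sketch below. The host name, path, auth header, and form field are placeholders, not the actual request contract; see the API Scanning documentation for the real details.

```bash
# Illustrative placeholders only -- the real host, path, auth, and field names
# are defined in the API Scanning documentation.
curl -X POST "https://api-agent.example.com/scan" \
  -H "Authorization: Bearer <token>" \
  -F "file=@./document.pdf"

# The Agent responds over HTTPS with a JSON verdict such as Clean or Infected.
```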
Read more about our API Scanning here.