Frequently Asked Questions

Getting Started

Do my objects ever leave my account?

No. Antivirus for Amazon S3 is designed and deployed in such a way that your Amazon S3 objects never leave your account(s).


If you are utilizing Linked Accounts, objects are pulled from the linked account into the deployed account for scanning, but they never leave your realm of linked accounts.

Is there a free trial?

Yes, we offer a 30-day trial or up to 500GB of scanning (whichever comes first) to try the product. Trial extensions may be requested; please Contact Us.

Start a Trial


Like AWS' trial policy, the trial is good for only one deployment within an account. Any subsequent deployments will incur immediate charges for any data they scan. Only the initial deployment remains covered by the trial period and data allotment.


You can see the status of your trial on the Config->License Management page. You will also see a warning banner on the main dashboard when you are within 7 days of the trial ending or within 20% of the trial data allotment.

You can also subscribe to Proactive Notifications to receive emails specifically when the free trial is approaching its end by date or by data.

How do you charge for the product?


We leverage a true consumption model. We charge for each gigabyte you scan with the product. This may be from one object or one thousand objects.

Review the public pricing on the AWS Marketplace Listing. Please Contact Us for custom pricing.

Do you have a detailed deployment guide?

You can follow along with the Getting Started guide contained within these Help Docs, or you can download the PDF Deployment Guide if that is easier to follow. The deployment guide covers a number of topics (such as more detail on TCO and recovery strategies, if ever needed) that you will not find within the Help Docs.

Detailed Deployment Guide

Do you support AWS GovCloud?

Yes, we now have an option to deploy Antivirus for Amazon S3 inside of GovCloud. You can leverage the BYOL and GovCloud Listing in the Amazon Marketplace or launch the template directly from here.


AWS Marketplace doesn't currently support metering for Fargate containers inside of GovCloud, so you must purchase a license (pre-purchased GBs) to operate within GovCloud. Please Contact Us or work with one of our partners to procure a license.

Amazon Cognito is only supported in GovCloud US West, so the console must be deployed in this region. Scanning can be done in West or East, but the console deployment must be done in West.

Is this software as a service (SaaS)?


No, this solution is installed within your AWS account. Please refer to the Architecture section for more details.

We are exploring a SaaS version for those who are open to it, but the majority of companies still want their objects to stay "within their 4 walls" (in this case, their own VPC) during the scanning process. Cloud Storage Security has delivered the solution to meet this need first, but we do see SaaS as a viable alternative for those who are willing, so we are pursuing it.


There is a mechanism by which we could offer this to you as a SaaS today. If you're interested and willing to work with us, please contact us and we can explore delivering it to you this way.

Which browsers are supported?

Any modern browser (Chrome, Firefox, Edge, Safari).

Where can I get the CloudFormation Template to deploy the product?

As seen in the How to Subscribe section, you'll be directly linked to the deployment CloudFormation Template. You can also go to Manage Subscriptions within the AWS Console and launch additional software from there.


  • PAYG Deployments:
    • Download the CloudFormation Template here or launch it directly here.
  • BYOL or GovCloud Deployments:
  • Cross-Account Role:
    • Download the CloudFormation Template here or launch it directly here.


You will be able to launch and deploy the Antivirus for Amazon S3 product from the above templates, but the product will not run unless you are subscribed.

What do all the CloudFormation parameters mean?

Parameter                                                         Description
Stack Name           Name to identify this particular stack
Network Configuration        
Virtual Private Cloud (VPC) ID         Choose which VPC the Console should be deployed to
Subnet A ID             Choose the first Subnet the Console could be deployed to.
Subnet B ID             Choose the second Subnet the Console could be deployed to.
*Make sure the second subnet is different from the first
Console Security Group CIDR Block         The IP address range that can access the Console management website (e.g. X.X.X.X/32 for a single given IP, 0.0.0.0/0 for open access)
It is always a good idea to specify a network tied to your company as opposed to being wide open.
Console Configuration        
Console vCPU           CPU desired for the Console container. There isn't much overhead to this container, so try the minimums and grow up as needed.
Console Memory           Memory desired for the Console container. There isn't much overhead to this container, so try the minimums and grow up as needed.

Memory Requirement: Allowed memory size is a factor of the selected vCPU size. You must pick a value that is 2x - 8x of the vCPU selection.
Example: with .5 vCPU, memory must be between 1GB and 4GB.
UserName           Name used to log in to the Management Console
Email           This email address will be sent the initial password and all subsequent password reset requests. Ensure you can get to this email address.
Console Auto Assign Public IP           Allow public IP addresses to be assigned to the Console
  • Enabled - assign public IP
  • Disabled - do not assign public IP
Note: If you disable this, you must still have access to the VPC's private network or you will be unable to access the Console
Agent Configuration        
Agent vCPU           CPU desired for the Agent container. Sizing the Agents can vary based on load volumes, object sizes or scan windows. Refer to the Sizing Discussion for more details.
Note: There is currently no reason to increase the agent vCPU; 1 vCPU is enough for scanning at this time
Agent Memory           Memory desired for the Agent container. Sizing the Agents can vary based on load volumes, object sizes or scan windows. Refer to the Sizing Discussion for more details.
Note: At this time the default of 3GB memory is a good working amount. From testing, we do not see a need to go up; you'll scale out with more agents running before scaling up provides more value. In the future, tweaks may be made where more memory makes sense.

Memory Requirement: Allowed memory size is a factor of the selected vCPU size. You must pick a value that is 2x - 8x of the vCPU selection.
Example: with 1 vCPU, memory must be between 2GB and 8GB.
Allow Access to All KMS Keys           Allows the solution access to any KMS key only within the context of the Amazon S3 service. Permissions will be put in place so that we can decrypt and encrypt objects as needed during the scanning process.
Agent Auto Assign Public IP           Allow public IP addresses to be assigned to the scanning agents
  • Enabled - assign public IP
  • Disabled - do not assign public IP
Note: The agent has no real need for a public IP (unlike the Console, which needs access), but the network it resides in must have "public" routing, or enough routing to execute AWS API calls. This could be through a VPC Endpoint for supported services or by sitting behind a NAT Gateway. Without that routing, the agent will fail to spin up.
Agent Auto-Scaling Configuration        
Only Run Scanning Agents When Files are in Queue?            A Yes/No answer that impacts how the scanning agents run. The default of No deploys and runs the minimum number of agents defined below 24x7, so you always have agent(s) up, running and ready to scan. Setting this to Yes requires you to set the Minimum Number of Running Agents to 0; agents then spin up only when the work in the queue surpasses the Number of Messages in Queue to Trigger Auto-Scaling. When selecting Yes, the number of messages in queue should typically be set to a small number such as 1. This means any time items come into the queue (any time there is work to do), an agent spins up and scans them; when there is no work to be done, all agents spin down for efficiency. You can read more about this here.
Minimum Number of Running Agents Per Region            Minimum number of Agents you'd like running. This will be determined by scan volumes and scan windows. Refer to the Sizing Discussion for more details.
Setting this above 1 will incur more infrastructure costs, as more agents will be running full time
Maximum Number of Running Agents Per Region             Maximum number of Agents you'd like running. This will be determined by scan volumes and scan windows. Refer to the Sizing Discussion for more details.
The default value of 12 is an arbitrary number at this time; change it as needed. Use a smaller value if you want to ensure you never scale above a certain number (to cap possible costs) and a larger value if you need more agents running to process the load
Number of Messages in Queue to Trigger Agent Auto-Scaling             The number of entries that must sit in the queue for at least 1 minute before more Agents are triggered to scale up. Based on how long it takes to process individual objects, you may make this number larger or smaller so you don't have too much scaling activity. Refer to the Sizing Discussion for more details.
Optional Load Balancer Configuration       
Use a Load Balancer for the Console?            A Yes/No answer determining whether or not to deploy a load balancer
SSL Certificate ARN            In order to create a secure connection, you must provide an SSL certificate to register with the load balancer. This can be one created inside AWS Certificate Manager or from a third party
Load Balancer Subnet A ID             Choose the first Subnet the Load Balancer could be deployed to.
NOTE: The load balancer subnets need to reside in the same Availability Zones as the Console subnets for them to properly communicate
Load Balancer Subnet B ID             Choose the second Subnet the Load Balancer could be deployed to.
*Make sure the second subnet is different from the first
NOTE: The load balancer subnets need to reside in the same Availability Zones as the Console subnets for them to properly communicate
Register a Subdomain on Route53             A Yes/No answer for whether the domain associated to the load balancer is managed by Route53. If yes, then you can directly register it from the CloudFormation deployment
Hosted Zone Name             The Route53 hosted zone value for the domain.
Subdomain             The value created as a subdomain on the hosted zone for application access.
Info Opt-Out             The option to not register with our Route53 and not check in with our Free Trial service. This is only available with the Load Balancer deployment option; otherwise we couldn't ensure access to your Console.
Note: As a result of not communicating with our backend service, your Free Trial will be shown as having ended. Follow these instructions to get your trial reinstated.
Optional Custom Hosting of Docker Container Images       
Custom ECR Account            The value placed here should be the AWS Account Number where you are hosting the Console and Scanning Agent images.
Note: When this field is given a value, the Console and Scanning Agents will only look to the specified account's repo for updates (and for the initial install). You are responsible for keeping this repo updated.
Optional AWS Resource Renaming       
Various Resources             Rename deployed resources to match your defined naming scheme.

Resources to Rename (Prefix):
  • DynamoDB Tables
  • Quarantine Bucket Name
  • AppConfig Application
  • AppConfig Environment
  • AppConfig Deployment Strategy
  • AppConfig Document
  • AppConfig Document Schema
  • AppConfig Document Role
  • AppConfig Document Policy
  • User Pool
  • User Pool Client
  • User Pool Role
  • User Pool Policy
  • Console Task Role
  • Console Task Policy
  • Agent Task Role
  • Agent Task Policy
  • Cross Account Role
  • Cross Account Policy
  • Execution Role
  • Cluster Name
  • Service Name
  • Task Definition
  • Console Security Group
  • Load Balancer Name
  • Target Group Name
  • Load Balancer Group Name
  • Parameters
  • Notifications Topic
  • Event Based Scan Topic
  • Event Based Scan Queue
  • Retro Scan Queue
  • Event Agent Task
  • Event Agent Service
  • Retro Agent Task
  • Retro Agent Service
  • Large Event Queue Alarm
  • Small Event Queue Alarm
  • Decrease Agent Scaling Policy
  • Increase Agents Scaling Policy
  • Retro Queue Not Empty Alarm
  • Retro Queue Empty Alarm
  • Remove Retro Agents Scaling Policy
  • Set Retro Agent Scaling Policy
  • Agent Security Group Name
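The Memory Requirement notes in the parameter table above (allowed memory is 2x - 8x of the selected vCPU size) can be sanity-checked with a small helper before you deploy. This is purely illustrative and not part of the product:

```python
def valid_fargate_memory(vcpu: float, memory_gb: float) -> bool:
    """Memory rule from the parameter notes: memory must be 2x - 8x the vCPU size."""
    return 2 * vcpu <= memory_gb <= 8 * vcpu

# Per the examples above: .5 vCPU allows 1GB - 4GB, 1 vCPU allows 2GB - 8GB.
```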
Product Functionality

Which of my data do you scan: new or existing?

Both. Brand new objects are scanned via an event-based trigger as soon as they arrive in the bucket. Existing objects are scanned via the Retro Scanning feature, where you can choose to look back at all objects within the bucket or a subset based on date-time.

For more information, check out the Object Scanning overview.

You can also set up scanning (new or existing files) based on a schedule. Review the Schedule Scanning documentation.

Do I have to scan all the objects in my buckets or can I scan a subset?

You can scan all objects, existing or newly coming in, or you can scan a subset. Antivirus for Amazon S3 provides Scan Lists and Skip Lists that will allow you to create Bucket Path Definitions to determine which folders within buckets you'd like to include for scanning (Scan List) or exclude from scanning (Skip List).

You can also pick a subset of items based on time with the Scan Existing Objects feature.

Can I scan the files before I write them to Amazon S3?

Yes, with API Driven Scanning you can send files for scanning directly. This is useful if you have a workflow where the file scan dictates whether the file should be stored in Amazon S3. There are a number of additional scenarios where an API comes in handy, such as scan on read (leveraging Amazon S3 Object Lambda).

Can I scan files even if I don't use Amazon S3?

Yes, with API Driven Scanning you can send files for scanning directly, whether you intend to use the file within Amazon S3 or not. You could have an on-prem workflow, or one where Amazon S3 is not your end destination, and still leverage the API scan to return a file verdict.

Do you have an API for file scanning?

Yes, with API Driven Scanning you can send files for scanning directly. This is useful if you have a workflow where the file scan dictates whether the file should be stored in Amazon S3. There are a number of additional scenarios where an API comes in handy, such as scan on read (leveraging Amazon S3 Object Lambda).

Do you have an API for management and configuration?

Yes, it is a limited set at this time, but more will be rolled out as new releases are pushed out. Check out the Management API page for more details.


The most current list available will be within your own deployment. Navigate to your deployment Console and add /swagger to the end of the URL, as seen here: https://<deployment-URL>/swagger/index.html

Can I organize linked accounts into groups?

Yes, Antivirus for Amazon S3 allows for the organization and logical separation of linked accounts. This lets you create a made-for-you organization structure to ease tracking usage and pinpointing where issues originate. Groups also allow you to restrict Antivirus for Amazon S3 users to views and activities for specific sets of linked accounts.

Check out the Manage Groups documentation for more details.

How can I tell if files are being scanned?

There are six ways to tell whether files are being scanned: the Dashboard, the Problem Files page, Tags on the objects themselves, CloudWatch Logs, the AWS Security Hub integration and Proactive Notifications.

The dashboard is a simple view into how much data you have scanned, how many objects you have scanned and whether you've found any infected files. Often when testing you are scanning a small number of smaller objects; these can be difficult to see on the charts, but the data points are there and you can zoom in on them.

The problem files page is used to identify the infected, unscannable and errored files.

We do not have a page identifying all the clean files, so it is best to look directly at the object itself to see whether Object Tags have been applied to it.
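If you'd rather check the tags programmatically than in the AWS Console, a small boto3 sketch like the one below can help. The bucket, key and tag name (`scan-result`) here are placeholders, not confirmed names; inspect a scanned object to see the exact tag keys the product applies.

```python
def find_tag(tag_set, key):
    """Return the value of `key` from an S3 TagSet list, or None if absent."""
    return next((t['Value'] for t in tag_set if t['Key'] == key), None)

def object_scan_tags(bucket, key):
    """Fetch an object's tags (requires AWS credentials)."""
    import boto3
    return boto3.client('s3').get_object_tagging(Bucket=bucket, Key=key)['TagSet']

# Example with placeholder names:
# find_tag(object_scan_tags('my-protected-bucket', 'uploads/report.pdf'), 'scan-result')
```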


You may also subscribe to the Notifications SNS Topic to be informed in real-time of the scan results.

You can review the CloudWatch Logs - Agent.ScanResults

Will I be notified of infected and other problem files?


Yes. You can always monitor the Dashboard for any updates that come in regarding infected files along with the other scan results: unscannable, error and clean.

Proactively, you can subscribe to the Notifications SNS Topic to get real-time updates sent directly to you or the destination of your choice. More information on Proactive Notifications is located here.

Can I send Scan Results to Slack / Teams / other services?

Yes. It is a simple process (that may sound more complicated than it is) and takes under 10 minutes to set up. Simply follow the process laid out in the AWS blog post on leveraging webhooks: AWS SNS + Slack / Teams / Chime setup

Here is what it looks like in Slack (screenshot: slack scan results). You can modify the format with Slack Message Layouts.

If you run into any trouble please Contact Us.

Can I only run the scanning agent when needed to save on costs?

Yes. You can change all aspects of the scaling setup on the Agent Settings page. There is a Smart Scan option that applies the scaling values for this very configuration, with a default Scaling Threshold of 1. With the value set to 1, any time an object is placed in the queue an agent spins up, processes it (and whatever other work shows up), and then spins down once the work is completed. If you set this value to 50, the agent waits for 50 new objects to show up before spinning up to process the work. Click here to see how to modify the scaling settings.
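The spin-up behavior described above boils down to a simple comparison against the Scaling Threshold. The sketch below illustrates the logic only; it is not product code:

```python
def should_spin_up(queue_depth: int, scaling_threshold: int = 1) -> bool:
    """Spin agents up once the queued work reaches the Scaling Threshold."""
    return queue_depth >= scaling_threshold

def should_spin_down(queue_depth: int) -> bool:
    """With Smart Scan, agents spin down once the queue is empty."""
    return queue_depth == 0
```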

There is also a scheduling option that allows you to define when the agents should run. This can be used for new objects or all objects in the bucket(s).

Can I set up schedules to scan my objects?

Yes. Scheduled Scanning gives you the ability to scan all files, or new files (since the last scan), based on a schedule. Whether compliance drives you to scan on a regular basis or your workflow allows for non-real-time scanning, scheduled scanning provides flexibility in how you scan your data. You decide whether you want real-time, one-off on-demand, schedule-driven scanning, or a combination of all of the above.

Scheduled Scanning allows you to determine when your agents run. Your workflow and requirements will determine what you pick: if you need real-time scanning, a schedule is not for you; if you can tolerate delays in your scanning, Smart Scan or Scheduled Scans could fit your workflow well.

Do you offer an overview of the Antivirus for Amazon S3 deployment?

Yes. The Deployment Overview page quickly and easily shows you which regions have infrastructure installed and buckets being protected.

How do I clean up certain aspects of the product or do a complete uninstall?

The Deployment Overview page gives you the option to uninstall a particular aspect of the product (like event scanning) in a particular region, clean up the entire region, or completely uninstall the product.

How often do you get new signature definitions?

The product pulls new signature updates every hour for the ClamAV engine and every 15 minutes for the Sophos engine, as well as each time the agents come online or reboot. Generally, we have seen ClamAV publish updates once per day (typically mornings) and Sophos about 4 times per day. This is not a hard and fast rule, but it is what we see regularly.

Can I switch to 'local' repository for signature updates?

It may be the case that you do not want each scanning agent to reach out to the internet for signature updates; rather, you would like them to retrieve updates locally from within your account. For example, you may not want the agents, which touch your data, to also have an internet connection. The Private Mirror / Local Updates feature changes the internet calls to local lookups within a specified S3 bucket. For more information, check out the Private Mirror (local updates) help page.

Can I eliminate public internet access to the solution? Can I run it completely privately?

Yes . . . for the most part. You can eliminate all public internet connections to non-AWS services, but there are still three AWS services (Marketplace, AppConfig and Cognito) that you must have outbound internet access to interact with. The Console VPC requires access to these three services, but the agents do not, so you can lock the more prevalent agent VPCs down to have no outbound internet access.

Of course, your Subnets can sit behind a NAT Gateway to help control this.

Check out the Deployment Options help page for more details.

Can you scan encrypted objects?

Yes. The Agent Role will need to be granted access to the keys. This can be done in two main ways: one-off direct access or global, but limited access.

One-off access can be given to individual keys. The process can be seen here.

Global access grants the solution access to all KMS keys, but only in the context of Amazon S3. Leveraging the viaService option in the permissions gives the solution access to the keys, but only while using the Amazon S3 service. Granting access this way allows the solution to decrypt and encrypt objects even as keys change. This option is available to set during the CloudFormation deployment. If you'd like to change the value afterwards, please update the stack with the steps found here.
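As a rough sketch, a KMS key policy statement using the viaService condition looks like the following. The Sid, principal ARN, action list and region are illustrative placeholders; the CloudFormation deployment creates the actual permissions for you:

```json
{
    "Sid": "AllowAgentViaS3Only",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::<account-id>:role/<agent task role>" },
    "Action": [ "kms:Decrypt", "kms:GenerateDataKey" ],
    "Resource": "*",
    "Condition": {
        "StringEquals": { "kms:ViaService": "s3.<region>.amazonaws.com" }
    }
}
```

The kms:ViaService condition key is what limits key usage to requests made through the Amazon S3 service.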

What AV engine(s) do you scan with?

We support both ClamAV and Sophos!

Check out the Scan Engines section for more details.

Please Contact Us if you'd like to see additional engines added.

Can you scan with multiple AV engines at the same time?

Yes, we offer multi-engine scanning with two triggers: All Files and By File Size. All Files means every file that is event-based scanned or on-demand/schedule scanned is processed by both engines. By File Size means smaller files (under 2GB) are scanned by ClamAV only (ClamAV has a size limitation and cannot scan above 2GB) and larger files (over 2GB) are scanned by Sophos only. This lets you take advantage of the scan cost savings offered by ClamAV for part of your files while still scanning the files that are too big for ClamAV.


If you scan files larger than 15GB, you will need to increase the disk size of the scanning agents, either globally or regionally, on the Event Agent Settings page.
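The By File Size trigger described above amounts to a simple routing rule on object size. This sketch is illustrative only, assuming the 2GB ClamAV limit:

```python
CLAMAV_MAX_BYTES = 2 * 1024**3  # ClamAV's 2GB per-file limit

def pick_engine(size_bytes: int) -> str:
    """Route smaller files to ClamAV (lower scan cost), larger files to Sophos."""
    return "ClamAV" if size_bytes < CLAMAV_MAX_BYTES else "Sophos"
```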

Check out more details in the Scan Settings → Scan Engine section.

Please Contact Us if you have additional scenarios for how multiple engines could be used.

What is the max file size you can process?

It depends on which engine you are leveraging. The Sophos engine will currently scan up to the maximum allowed Amazon S3 object size (5TB), while the ClamAV engine can process files up to 2GB. If you need to process files larger than 2GB, you have to go with the Sophos engine. Any file under the engine's cap will be scanned; any file over the cap will be tagged as unscannable.

Update June 2021
As AWS Fargate increases its native disk size, we will increase the maximum scanned file size to match. With the 200GB bump that happened in May, we can comfortably handle 195GB file sizes. Please Contact Us if this is important to you so we can understand the urgency of the need.

Update December 2021
We have now added support for multi-TB sized files with our Extra Large File Scanning feature. Files over 195GB will now be treated as individual jobs that can be monitored on the Jobs page.

Note: we know we can leverage EFS to support even larger file sizes, but testing and development work done so far reveals this isn't the best route for the performance-to-cost ratio.

Can I setup MFA for my user?

Yes, we do support user MFA. Check out the User Management page for details on how to set this up.

Can I detonate files? Do you have a cloud sandbox?

Yes, we have an OEM'd slice of the Sophos Cloud Sandbox where we can do static and dynamic analysis of files. Check out the Static and Dynamic Analysis page for more details.

Architecture Related
Do I need to make any changes to my application to use the product?

No. The Antivirus for Amazon S3 solution will fit into your existing workflow. You do not have to make any changes to your current workflow.

Can I scan S3 objects from more than one account from within the same deployment?

Yes, Antivirus for Amazon S3 supports cross-account scanning. This means you can centrally install the console and scanning agents to protect not only the account you are deployed within, but also any other AWS account where you can install a cross-account role.

Check out the Linked Accounts documentation for more details.

Can I set up a staging bucket for all of my files to first land in and then move them to a production bucket after they have been found to be clean?

Yes, it is very easy to set up a two bucket system, and many customers use this approach. With a combination of Event Based scanning and Proactive Notifications you can easily implement it:

  1. Create a staging bucket or utilize an existing bucket
  2. Turn bucket protection on for this bucket from the Antivirus for Amazon S3 console
  3. Create a Python (latest version) Lambda Function with the sample code provided below
  4. Adjust the Lambda settings with the information below
  5. Subscribe the Lambda to the SNS Notifications Topic
  6. Add IAM permissions to the Lambda with the provided permission blocks below
  7. Modify the Topic Subscription to filter down to clean objects with the provided filter block below
  8. Test clean and "not clean" files to ensure behavior is as expected
Sample Copy Lambda

The code below is a starting point and works out of the box, but more can be done with it. Feel free to extend it.

import json
import boto3
import os
from botocore.exceptions import ClientError, ParamValidationError
from urllib import parse

def lambda_handler(event, context):
    try:
        SOURCE_BUCKET = os.getenv("SOURCE_BUCKET", 'any')
        DESTINATION_BUCKET = os.getenv("DESTINATION_BUCKET", '<some failover bucket>')
        DELETE_STAGING = os.getenv("DELETE_STAGING", 'no')

        record = event['Records'][0]
        messageBucket = record['Sns']['MessageAttributes']['bucket']
        print("The messageBucket value is: " + str(messageBucket['Value']))

        if messageBucket['Value'] == SOURCE_BUCKET or SOURCE_BUCKET == 'any':

            message = json.loads(record['Sns']['Message'])

            s3 = boto3.resource('s3')

            copy_source = {
                'Bucket': messageBucket['Value'],
                'Key': message['key']
            }

            try:
                if 'PartsCount' in s3.meta.client.head_object(Bucket=messageBucket['Value'], Key=message['key'], PartNumber=1):
                    # multipart object: get the tags, then copy with the tags specified
                    tagging = s3.meta.client.get_object_tagging(Bucket=messageBucket['Value'], Key=message['key'])
                    s3.meta.client.copy(copy_source, DESTINATION_BUCKET, message['key'], ExtraArgs={'Tagging': parse.urlencode({tag['Key']: tag['Value'] for tag in tagging['TagSet']})})
                else:
                    # single-part object: copy as normal (tags carry over on copy)
                    s3.meta.client.copy_object(CopySource=copy_source, Bucket=DESTINATION_BUCKET, Key=message['key'])
            except Exception as e:
                # fall back to a plain copy if the tag-preserving path fails
                print("Tag-preserving copy failed, copying as normal: " + str(e))
                s3.meta.client.copy_object(CopySource=copy_source, Bucket=DESTINATION_BUCKET, Key=message['key'])

            print("Copied:  " + message['key'] + "  to production bucket:  " + DESTINATION_BUCKET)

            if DELETE_STAGING == 'yes':
                try:
                    s3.meta.client.delete_object(Bucket=messageBucket['Value'], Key=message['key'])
                    print("Deleted:  " + message['key'] + "  from source bucket:  " + messageBucket['Value'])
                except Exception as e:
                    print("Delete failed: " + str(e))

            return {
                'statusCode': 200,
                'body': json.dumps('Non-infected object moved to production bucket')
            }

        return {
            'statusCode': 200,
            'body': json.dumps('Not from the Staging bucket')
        }

    except ClientError as e:
        return {
            'statusCode': 400,
            'body': "Unexpected error: %s" % e}
    except ParamValidationError as e:
        return {
            'statusCode': 400,
            'body': "Parameter validation error: %s" % e}
Make Adjustments to Lambda Settings

There are some adjustments to the Lambda you'll probably need to make:

  • Set the environment variables for the Staging Bucket and Production Bucket names as seen here:

    Field                             Description
    DELETE_STAGING yes or no value based on whether you want the original file to be deleted after the copy has occurred
    DESTINATION_BUCKET Clean or Production bucket name identifying where to copy clean files to
    SOURCE_BUCKET Dirty or Staging bucket name identifying the originating bucket

    Note: you can leverage SNS Topic Filtering to eliminate the need for this value and the if check inside the code
  • Change the Time Out under General Configuration to a value that works for the typical file sizes you deal with. The larger the files, the longer you may want to make it so the Lambda doesn't time out before the copy finishes
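If you prefer to script these adjustments, boto3's `update_function_configuration` can set both the environment variables and the timeout in one call. The function name and bucket names below are placeholders for your own values:

```python
def build_env(source, destination, delete_staging='no'):
    """Environment block matching the variables described above."""
    return {'Variables': {
        'SOURCE_BUCKET': source,
        'DESTINATION_BUCKET': destination,
        'DELETE_STAGING': delete_staging,
    }}

def configure_copy_lambda(function_name, timeout_seconds=300):
    """Apply the settings (requires AWS credentials)."""
    import boto3
    boto3.client('lambda').update_function_configuration(
        FunctionName=function_name,   # placeholder Lambda name
        Timeout=timeout_seconds,      # raise for larger typical file sizes
        Environment=build_env('my-staging-bucket', 'my-production-bucket'))
```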

Permissions to add to Lambda Role

Add an Inline Policy to the Lambda Role that was created along with the Lambda. Paste the below into the JSON screen and change the sections that have <> to match your staging bucket and production destination bucket names.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:GetObjectTagging",
                    "s3:DeleteObject"
                ],
                "Resource": [
                    "arn:aws:s3:::<staging bucket name>",
                    "arn:aws:s3:::<staging bucket name>/*"
                ]
            },
            {
                "Sid": "VisualEditor1",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:PutObjectTagging"
                ],
                "Resource": [
                    "arn:aws:s3:::<production destination bucket name>/*",
                    "arn:aws:s3:::<production destination bucket name>"
                ]
            }
        ]
    }
Subscription Filter for Clean Results

Go to the SNS Topic itself and find the subscription created for the Lambda. Edit the filter settings by pasting in the value below. You can modify the filter further to include more scan results and even filter down by bucket. Bucket filtering is a good idea if you have event-based scanning set up for any other bucket and you do not want the Lambda to copy those clean files over as well.

    "notificationType": [
    "scanResult": [
Alternatively, with bucket filtering

    "notificationType": [
    "scanResult": [
    "bucket": [
        "<staging bucket name>"

If you need any help getting this set up, please Contact Us as we are happy to help.

How the 2 Bucket System Flows:
2 bucket flow diagram

What ports do I need open for the product to function properly?

Ports and Protocols?

Port 443 for:

  • Outbound for Lambda calls
  • Outbound Console and Agent access to Elastic Container Registry (ECR)
  • Inbound access to Console for public access
    • Public access is not required as long as you have access via private IP

Ports 80, 53 and the high range (1024-65535) for:

  • AV signature updates


    You can now set up local signature updates rather than reaching out over the internet. This allows you to designate an Amazon S3 bucket that the solution pulls signature updates from.

You can get a more detailed view and additional options for routing on the Deployment Details page. In either the standard deployment or the VPC Endpoints deployment, local signature updates let you remove all non-AWS calls from the application run space. With VPC Endpoints you can remove almost all public calls as well.
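A quick way to confirm a required port is reachable from a given subnet is a plain TCP connect test. This sketch is generic; the host and port you pass are whatever endpoints apply to your deployment:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds
    within the timeout, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it from a container or instance in the same VPC/subnet as the console or agents to verify outbound 443 (and, without local signature updates, ports 80/53) before troubleshooting further.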

Can I change the CIDR, VPC or subnets post-deployment for the console and agents?

Task Settings?

Yes. The Console Settings page gives you the option to modify the inbound Security Group rules, the VPC and Subnets, and the specs of the task (vCPU and Memory). The Agent Settings page allows you to change the VPC and Subnets the agents run in, the specs of the task (vCPU and Memory), as well as all the scaling configuration aspects.

Do you use AWS Lambdas or EC2 Instances?

Container, Lambda or EC2?

Neither. Antivirus for Amazon S3 infrastructure is built around AWS Fargate containers. We wanted to be serverless like Lambda and faster and more flexible than EC2 instances. Fargate containers give you persistence and other benefits that Lambdas aren't yet prepared to provide. We explored Lambda and do see some advantages there, but not enough to win out over AWS Fargate containers.

We do leverage two lambdas for the subdomain registration, but not for any of the workload at this time. If you are interested in a lambda-driven solution, please Contact Us to let us know. We are always exploring the best way to build and run our solution.

Do you support AWS Control Tower or Landing Zone?

Control Tower and Landing Zone?

A landing zone is a well-architected, multi-account AWS environment that's based on security and compliance best practices. AWS Control Tower automates the setup of a new landing zone using best-practices blueprints for identity, federated access, and account structure.

Antivirus for Amazon S3 is not currently tightly integrated with AWS Control Tower, but is designed to work within the landing zone context. Antivirus for Amazon S3 can be centrally deployed in a Security Services account while leveraging Linked Accounts to scan all other accounts.

Can I leverage Single Sign On (SSO) with your product?

Single Sign On

Yes, you can leverage SSO with our solution. Antivirus for Amazon S3 utilizes Amazon Cognito for user management, and Amazon Cognito allows SAML integrations. Leveraging this capability, we can support various providers for SSO into our solution, both from the SSO dashboard and from within the application itself.

It is a fairly simple task to set up the connection, and AWS has been kind enough to document how to do it. Follow the instructions found here: How do I set up Okta as a SAML identity provider in an Amazon Cognito user pool?. The sample is for Okta, but performing the corresponding steps for whichever SSO provider you use should also work.
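If you want to script the Cognito side of that integration, a hedged sketch using boto3's `create_identity_provider` (the pool ID, provider name, and metadata URL are placeholders from your own setup, and the email attribute mapping is an assumption):

```python
def add_saml_provider(user_pool_id, provider_name, metadata_url):
    """Register a SAML identity provider (e.g. Okta) with the
    Cognito user pool backing the console login."""
    import boto3  # deferred import; requires AWS credentials at call time
    boto3.client("cognito-idp").create_identity_provider(
        UserPoolId=user_pool_id,
        ProviderName=provider_name,
        ProviderType="SAML",
        ProviderDetails={"MetadataURL": metadata_url},
        AttributeMapping={"email": "email"},
    )
```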

Okta Examples below:
AVforS3 Login
Okta Dashboard

Upon customer request, we have documented the steps to get GSuite working as your SSO provider. Please leverage the document below. Other customers have used these steps (along with the Okta write-up) to set up other providers as well, such as Keycloak.
GSuite Setup Instructions

Additional Actions Required

Okta identities will be auto-created within Amazon Cognito (and therefore Antivirus for Amazon S3) as simple Users, not Admins. They are also not assigned to a particular Group. This state enables them to log in, but manage nothing. One time only, you will need to assign the user to a group and the admin role. After this initial assignment, each subsequent login will allow for proper management.
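One way to script that one-time group assignment is Cognito's `admin_add_user_to_group` API. This sketch is illustrative, not the product's own tooling; the group name is whichever group exists in your user pool:

```python
def assign_sso_user_to_group(user_pool_id, username, group_name):
    """One-time assignment of an auto-created SSO user to a
    Cognito group so the user can manage the console."""
    import boto3  # deferred import; requires AWS credentials at call time
    boto3.client("cognito-idp").admin_add_user_to_group(
        UserPoolId=user_pool_id,
        Username=username,
        GroupName=group_name,
    )
```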

SSO users will also stand out, as their username will be created from the SSO sign-in process.

SSO Modify User

Last update: January 10, 2023