Product Functionality
Below are some of the most common questions related to product functionality.
Both. Brand-new objects will be scanned via an event-based trigger as soon as they arrive in the bucket. Existing objects will be scanned via the Retro Scanning feature, which lets you look back at all objects within the bucket or a subset based on date-time.
For more information, check out the overview.
You can also set up scanning (of new or existing files) on a schedule. Review the documentation.
You can scan all objects, whether existing or newly arriving, or you can scan a subset. Antivirus for Amazon S3 provides Scan Lists and Skip Lists that let you determine which folders within buckets to include in scanning (Scan List) or exclude from scanning (Skip List).
You can also pick a subset of objects based on time with the Retro Scanning feature.
Yes, with the API you can send files for scanning directly. This is useful if you have a workflow where the file scan dictates whether the file should be stored in Amazon S3. There are a number of additional scenarios where the API comes in handy, such as scan on read.
Yes, with the API you can send files for scanning directly, whether or not you intend to use the file within Amazon S3. You could have an on-premises workflow, or one where Amazon S3 is not your end destination, and still leverage the API scan to return a file verdict.
Yes, we integrate with AWS Transfer Family in multiple ways. If you are already using the Transfer Family service, you can deploy our console and enable event-driven bucket protection on the S3 bucket(s) linked to your Transfer Family Server(s) to ensure their contents are clean.
Yes, it is a limited set at this time, but more endpoints will be rolled out with new releases. Check out the Management API page for more details by adding "/swagger" to the end of your Console's URL.
No, we have found that performance is faster when files are saved to disk (on an internal Fargate instance/container) in your own AWS account and region. As soon as the file is scanned, it is deleted from the container's disk.
Yes, Antivirus for Amazon S3 allows for the organization and logical separation of linked accounts. This lets you create an organization structure that fits you, making it easier to track usage and pinpoint where issues originate. Groups also allow you to restrict Antivirus for Amazon S3 users to views and activities for specific sets of linked accounts.
The dashboard is a simple view into how much data you have scanned, how many objects you have scanned and whether you've found any infected files. When testing, you are often using a small number of small objects. It can be difficult to see these reflected on the charts, but there are data points you can zoom in on; they are there, just sometimes hard to see.
The problem files page is used to identify the infected, unscannable and errored files.
Below is what the notification looks like in Slack:
Yes. It's a very simple process. There's a Lambda blueprint to send logs straight from our CloudWatch Logs to Splunk:
Create a Lambda function > Use a blueprint > Blueprint name "Send CloudWatch logs to a Splunk host" (nodejs18.x)
Add the following import to the code already present next to the other imports:
```javascript
import { createRequire } from "module";
const require = createRequire(import.meta.url);
```
Select the Log group CloudStorageSecurity.Agent.ScanResults
Complete the values for the 2 Splunk Keys at the end.
That's it. Any scan result you get will be logged in CloudWatch and copied into your Splunk space.
There is also a scheduling option that allows you to define when the agents should run. This can be used for new objects or all objects in the bucket(s).
The ability to scan all files, or only new files (since the last scan), on a schedule. Whether compliance drives you to scan on a regular basis or your workflow allows for non-real-time scanning, scheduled scanning provides flexibility in how you scan your data. You decide whether you want real-time, one-off on-demand, or schedule-driven scanning, or a combination of all the above.
Yes. The Deployment Overview page quickly and easily shows you which regions have infrastructure installed and buckets being protected.
Yes. The Deployment Overview page gives you the option to uninstall a particular aspect of the product (like event scanning) in a particular region, cleanup the entire region or completely uninstall the product.
The product pulls new signature updates every hour for the ClamAV engine and every 15 minutes for the Sophos engines, as well as each time the agents come online or reboot. Generally, we have seen ClamAV update once per day (typically mornings) and Sophos about 4 times per day. This is not a hard and fast rule, but it is what we see regularly.
Yes . . . for the most part. You can eliminate all non-AWS public internet connections, but there are still three AWS services (Marketplace, AppConfig and Cognito) that require outbound internet access. The Console VPC will need access to these three services, but the agents do not, so you can lock the more prevalent agent VPCs down to have no outbound internet access.
Of course your Subnets can sit behind a NAT Gateway to help control this.
Yes. The Agent Role will need to be granted access to the keys. This can be done in two main ways: one-off direct access or global, but limited access.
We currently support the following scan engines:
Sophos
CSS Premium (this is based on a proprietary commercially available engine)
ClamAV
Yes, we do offer multi-engine scanning with two triggers: All Files and By File Size. All Files indicates that every file, whether event-based scanned or on-demand/schedule scanned, will be processed by all enabled engines. By File Size indicates that smaller files (<2GB) will be scanned by ClamAV only (ClamAV has a size limitation and can't scan above 2GB), while larger files (>2GB) will be scanned by Sophos only. This allows you to take advantage of the scan cost savings offered by ClamAV for part of your files, while files that are too big for ClamAV are still scanned.
It depends on which engine you are leveraging. There are three engines that can be used with the Antivirus for Amazon S3 solution. The Sophos engine will currently scan up to the maximum allowed Amazon S3 object size (5TB), while the ClamAV engine can process up to 2GB for any individual file. If you need to process files larger than 2GB, you have to go with the Sophos engine.
So depending on the engine, any file under the max cap will be scanned. Any file over the cap will be tagged as unscannable.
When deploying the Antivirus for Managed File Transfers solution, you can configure the CloudFormation Template to automatically deploy a Transfer Family Server for you. Once the deployment is complete, you can obtain your Transfer Family Server Id by reviewing the Outputs tab from the CloudFormation Template. Navigate to the Transfer Family service within the AWS console, select the correct server, and review its configuration.
You can connect to the Server using a file transfer service of your choice. Please note:
The username and password entered to connect to the console is the same username and password that will be used to connect to the Transfer Family Server.
The server's protocol will be SFTP (SSH File Transfer Protocol) - file transfer over Secure Shell.
A Lambda function is created during the deployment and will validate authentication for secure file transfers.
Currently, we only support scanning for:
S3 Standard
Intelligent-Tiering
Reduced Redundancy
We do not support scanning for the following S3 storage types:
Standard-IA
One Zone-IA
Glacier Instant Retrieval
Glacier Flexible Retrieval (formerly Glacier)
Glacier Deep Archive
If you have a need to scan files in the non-supported storage types please let us know.
If you are new to AWS Transfer Family, we have a version of our console that will automatically deploy and protect a Transfer Family Server for you. You can learn more about this specific solution in our documentation.
Yes, with the API you can send files for scanning directly. This is useful if you have a workflow where the file scan dictates whether the file should be stored in Amazon S3. There are a number of additional scenarios where the API comes in handy, such as scan on read.
Check out the documentation for more details.
There are 6 ways to tell files are being scanned: the Dashboard, the Problem Files page, tags on the objects themselves, CloudWatch Logs, the AWS Security Hub integration and notifications.
We do not have a page identifying all the clean files, so it is best to look directly at the object itself to see if it has had a scan result tag applied to it.
You may also subscribe to notifications to be informed in real time of the scan results.
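As one example of the tag-based check, here is a minimal sketch that pulls a scan verdict out of an S3 GetObjectTagging-style tag set. The tag key "scan-result" is an assumption for illustration; inspect a scanned object to see the actual keys the product applies:

```python
# Minimal sketch: extract a scan verdict from an S3 object's tag set.
# ASSUMPTION: the tag key "scan-result" is illustrative, not the product's
# documented key; check a scanned object for the real tag names.

def scan_verdict(tag_set, key="scan-result"):
    """Return the scan verdict tag value, or None if the object is untagged."""
    for tag in tag_set:
        if tag.get("Key") == key:
            return tag.get("Value")
    return None

# The shape mirrors the "TagSet" list returned by S3 GetObjectTagging.
tags = [{"Key": "scan-result", "Value": "clean"}]
print(scan_verdict(tags))  # clean
print(scan_verdict([]))    # None: object has not been tagged (yet)
```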
Yes. You can always monitor the Problem Files page for any updates that come in regarding infected files, along with the other scan results: unscannable, error and clean.
Proactively, you can subscribe to the Notifications SNS Topic to get real-time updates sent directly to you or the destination of your choice. More information on notifications is located here.
Yes. It is a simple process (that may sound more complicated than it is) and took us under 10 minutes to set up. Simply follow the process laid out in the AWS blog post on leveraging webhooks, seen here:
You can modify the format as well.
If you run into any trouble, please reach out.
Yes. You can change all aspects of the scaling setup on the settings page. There is specifically an option that changes the scaling values for this very configuration, with a default Scaling Threshold of 1. With the value set to 1, any time an object is placed in the queue the agent will spin up, process it (and whatever other work may show up), and then spin down once the work is completed. If you were to set this value to 50, the agent would wait for 50 new objects to show up before spinning up to process the work. See the documentation for how to modify the scaling settings.
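The Scaling Threshold behavior described above reduces to a simple queue-depth comparison. A hedged sketch follows; the function and parameter names are illustrative, not the product's actual implementation:

```python
# Illustrative sketch of the Scaling Threshold decision: agents spin up
# only once the queue holds at least `threshold` objects.
# ASSUMPTION: names are hypothetical; the default threshold of 1 is from the docs.

def should_spin_up(queue_depth: int, threshold: int = 1) -> bool:
    """With the default threshold of 1, any queued object wakes an agent."""
    return queue_depth >= threshold

print(should_spin_up(1))                 # default threshold: one object is enough
print(should_spin_up(10, threshold=50))  # with threshold 50, work accumulates first
```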
Scheduled Scanning allows you to determine when your agents run. Your workflow and requirements will determine what you pick. If you need real-time scanning, then a schedule is not for you. If your workflow can tolerate delays in scanning, then scheduled scanning could fit it well.
It may be the case that you do not want each scanning agent to reach out to the internet to gather signature updates, but would rather have them retrieve updates locally from within your account. This could be because you do not want the agents that touch your data to also have an internet connection. The Private Mirror / Local Updates feature changes the internet calls to local lookups within a specified S3 bucket. For more information, check out the help page.
Check out the help page for more details.
One-off access can be given to individual keys. The process is described in the documentation.
Global access grants the solution access to all KMS keys, but only in the context of Amazon S3. Leveraging the viaService option in the permissions gives us access to the keys, but only while using the Amazon S3 service. Granting access this way allows the solution to decrypt and encrypt objects even as keys change. This option is available to set during the CloudFormation deployment. If you'd like to change the value afterwards, please update the stack with the documented steps.
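A minimal sketch of what such a policy statement can look like, using the standard `kms:ViaService` condition key. The region, account ID, actions and wildcard resource below are placeholder assumptions for illustration; the actual policy the deployment creates may differ:

```json
{
  "Effect": "Allow",
  "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
  "Resource": "arn:aws:kms:us-east-1:111122223333:key/*",
  "Condition": {
    "StringEquals": {
      "kms:ViaService": "s3.us-east-1.amazonaws.com"
    }
  }
}
```

The condition restricts use of the keys to requests made via the Amazon S3 service in that region, which is how access can remain valid even as keys change.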
Check out the section for more details.
Please reach out if you'd like to see additional engines added.
If scanning files larger than 15GB, you will need to increase the disk size of the scanning agents, either globally or regionally, on the settings page.
Check out more details in the section.
Please reach out if you have additional scenarios for how multiple engines could be used.
Yes, we do support user MFA. Check out the page for details on how to set this up.
Yes, we have an OEM'd slice of the Sophos Cloud Sandbox where we can do static and dynamic analysis of files. Check out the page for more details.