Event Driven Scanning for New Files

Event-driven scanning leverages S3 event notifications: the bucket is configured with the All object create events setting, so any time an object is created or modified within the bucket an event is raised. Antivirus for Amazon S3 places an event destination/handler onto each protected bucket, which listens for these events to trigger scanning. This allows Antivirus for Amazon S3 to easily plug into any existing workflow you have without modifications.

So this looks as follows:

  1. An object is added to a protected bucket

  2. An event is raised and sent to an SNS Topic

  3. Antivirus for Amazon S3 provides an SQS Queue which subscribes to the Topic

  4. One or more Antivirus for Amazon S3 Agents are monitoring the queue

  5. Entries are pulled from the queue identifying the object to scan. The object is retrieved and scanned

  6. Objects are handled according to the Scan Settings you have set:
     a. All objects are tagged
     b. Infected files are moved to a quarantine bucket (default behavior)
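The entry the agent pulls from the queue is a standard S3 event notification wrapped in an SNS envelope (the SQS message body's Message field holds the S3 event as a JSON string). A trimmed example of the inner S3 event, with illustrative bucket and key names:

```json
{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "awsRegion": "us-east-1",
      "s3": {
        "bucket": { "name": "my-protected-bucket" },
        "object": { "key": "uploads/report.pdf", "size": 1048576 }
      }
    }
  ]
}
```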


This flow and behavior are the same regardless of region. Amazon S3 buckets have a global view, but are regionally placed. This flow is performed locally in each region where you enable buckets for scanning.

Object Tagging

After a file has been scanned, we tag the object in the S3 bucket. These tags are how our solution recognizes whether we've previously scanned the file. If a tagged object is copied to another protected bucket, our solution will skip scanning it.

Here are examples of object tags based on scan results:

Clean Object
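To inspect the tags on a scanned object yourself, you can use the AWS CLI. The bucket and key below are placeholders, and the exact tag keys and values written will be those shown in your deployment's examples:

```shell
# List the tags applied to an object after scanning.
# Bucket and key are illustrative placeholders.
aws s3api get-object-tagging \
  --bucket my-protected-bucket \
  --key uploads/report.pdf
```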

Document Flows

Standard Document Flow

When a bucket is protected, an event listener (SNS Topic) is added to the bucket. The bucket sends S3 Events to the topic, which in turn populates an SQS Queue. From there, everything is scanned in near real-time.

  1. Users or apps upload objects to an S3 bucket protected by our solution.

  2. The S3 bucket has an event listener which pushes a notification to Amazon SNS. SNS pushes a message to an Amazon SQS Queue.

  3. Our Event Agent monitors this queue; for each entry, it copies the identified object into its own environment and scans it.

  4. The Event Agent will tag the object with its verdict.

  5. The Event Agent will send scan results to Amazon CloudWatch.

  6. The Event Agent will send problematic scan results to the Console service for surfacing.

  7. (If Proactive Notifications are configured) The Event Agent will forward scan results to Amazon SNS.

  8. (Optional) The Event Agent will move infected files into a Quarantine S3 Bucket

2 Bucket (Two Bucket) System Document Flow

The Two Bucket System (2 Bucket System) allows a customer to physically separate incoming files from the downstream users of the "production buckets". The separation lasts only as long as it takes to scan the files and confirm they are clean. In this way, you can ensure that nothing other than clean files makes it into your production buckets, and that their contents are therefore safe to consume.

We have two options for a 2 bucket flow: the Console approach and the Lambda approach.

The Console approach can be configured in the Console and moves files tagged as Clean to a designated destination bucket. This option is straightforward but offers no additional configuration.

Some customers opt to put a Lambda function into place for more granular control over objects. Moving multiple types of files or moving files to more than one bucket are common use cases for those who opt to utilize a Lambda instead of our console-managed two bucket functionality.

For guided steps on how to set up the 2 Bucket System, go here

2 Bucket Console Flow
  1. Users or apps upload objects to an S3 bucket protected by our solution.

  2. The S3 bucket has an event listener which pushes a notification to Amazon SNS. SNS pushes a message to an Amazon SQS Queue.

  3. Our Event Agent monitors this queue; for each entry, it copies the identified object into its own environment and scans it.

  4. The Event Agent will tag the object with its verdict.

  5. The Event Agent will send scan results to Amazon CloudWatch.

  6. The Event Agent will send problematic scan results to the Console service for surfacing.

  7. If the Event Agent deems an object as Clean, it will move that object from the staging S3 bucket into the destination S3 bucket.

  8. (If Proactive Notifications are configured) The Event Agent will forward scan results to Amazon SNS.

  9. (Optional) The Event Agent will move infected files into a Quarantine S3 Bucket.

Storage Gateway Configuration

If you use Storage Gateway for S3 File Gateway, you will need some additional configuration to ensure event-based scanning works properly.

EventBridge and Event Notifications are insufficient to notify our application when files are uploaded because Storage Gateway utilizes multipart uploads. As per AWS documentation:

While Amazon S3 event notifications are a great feature for many use cases, we do not recommend using them to notify you of file uploads to Amazon S3 via a File Gateway. When a File Gateway is required to prioritize cache usage, partial file uploads may temporarily occur to Amazon S3. While the File Gateway eventually fully uploads the files as part of this process, Amazon S3 event notifications still triggers upload notifications in the interim.

Storage Gateway offers its own notification system that must be enabled via the CLI. The following section will describe what needs to be done in order to enable Storage Gateway for S3 File Gateway scanning.

Prerequisites

  1. Storage Gateway set up

  2. File Share set up and pointing to an S3 bucket in the same region

  3. CSS Console deployed (How to Deploy)

  4. AWS CLI is accessible with StorageGateway permissions

Overview

  1. Enable Storage Gateway Notifications through the CLI

  2. Set up an EventBridge rule to capture events and forward to CSS SNS Topic

  3. Ensure SNS Topic Access policy allows EventBridge to publish events

  4. Protect a bucket in File Share's region

Steps

  1. Enable Storage Gateway Notifications through the CLI

At this time, it's only possible to enable Storage Gateway notifications through the CLI. In AWS, navigate to Storage Gateway > File Shares, locate your file share, and determine whether it is NFS or SMB.

Click into your File Share and copy the ARN. You will need to run an AWS CLI command to enable Storage Gateway upload notifications, and that command will differ depending on your File Share type. Be sure to replace {file-share-arn} with your file share's ARN.

If your File Share is NFS:
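A command of the following shape enables upload notifications on an NFS file share (per the AWS CLI's update-nfs-file-share operation; the 60-second settling time is an illustrative value):

```shell
aws storagegateway update-nfs-file-share \
  --file-share-arn {file-share-arn} \
  --notification-policy '{"Upload": {"SettlingTimeInSeconds": 60}}'
```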

If your File Share is SMB:
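For SMB shares, the equivalent call uses the AWS CLI's update-smb-file-share operation (again, the settling time value is illustrative):

```shell
aws storagegateway update-smb-file-share \
  --file-share-arn {file-share-arn} \
  --notification-policy '{"Upload": {"SettlingTimeInSeconds": 60}}'
```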

If performed successfully, the output will look like so:

  2. Set up an EventBridge rule to capture events and forward to SNS

Storage Gateway sends notifications to the Default Event Bus for the region in which it resides. In EventBridge, we must create a rule that captures Storage Gateway notifications and forwards them to the CSS SNS topic.

  • Go to EventBridge > Event Buses > default

  • Select 'Create Rule'

  • Set 'Triggering Events' as 'Storage Gateway Object Upload Event' and 'Storage Gateway File Upload Event'.

  • Set 'Targets' as 'CloudStorageSecTopic-{your_app_id}'.
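If you define the rule with a custom event pattern rather than the guided event types, a pattern along these lines captures both Storage Gateway upload events:

```json
{
  "source": ["aws.storagegateway"],
  "detail-type": [
    "Storage Gateway Object Upload Event",
    "Storage Gateway File Upload Event"
  ]
}
```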

  3. Allow EventBridge to publish to CSS SNS Topic

  • Navigate to SNS > CloudStorageSecTopic-{your_app_id} and select Access policy

  • Modify the policy to allow events.amazonaws.com to sns:Publish to the SNS topic. Below is a sample Access policy; substitute bracketed values with your own.
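A minimal statement of that shape (the region and account ID placeholders are yours to fill in; the topic name follows the CloudStorageSecTopic-{your_app_id} convention above):

```json
{
  "Sid": "AllowEventBridgePublish",
  "Effect": "Allow",
  "Principal": { "Service": "events.amazonaws.com" },
  "Action": "sns:Publish",
  "Resource": "arn:aws:sns:{region}:{account-id}:CloudStorageSecTopic-{your_app_id}"
}
```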

  4. Protect a bucket in the Storage Gateway Region

In order for the Event Agent to scan files, it must be running. For now, the best way to ensure this is to protect an unused bucket with Event Protection. If buckets are already protected in this region, there is no need to protect additional ones. Do not protect the Storage Gateway bucket; the configuration set up above will handle notifications.

After this configuration, try uploading files via Storage Gateway. Files should be scanned and tagged with our solution.
