Architecture Related

Below are some of the most common questions related to Architecture.

Do I need to make any changes to my application to use the product?

No. The Antivirus for Amazon S3 solution will fit into your existing workflow. You do not have to make any changes to your current workflow.

Can I scan S3 objects from more than one account from within the same deployment?

Yes, Antivirus for Amazon S3 supports cross-account scanning. This means you can centrally install the console and scanning agents to protect not only the account you are deployed within, but also any other AWS account where you can install a cross-account role.

Check out the Linked Accounts documentation for more details.

Can I set up a staging bucket for all of my files to first land in, and then move them to a production bucket after they have been found to be clean?

Yes, it is very easy to set up a two-bucket system, and we have many customers using this approach. We provide two methods of achieving this.

First, within the Configuration > Scan Settings menu there is a configuration option for AV Two-Bucket System Configuration. This provides a quick and easy way to choose your source region or bucket(s) and a destination bucket to promote your clean files to. With this option, our agent will handle promoting the clean files as part of the scanning process.

Alternatively, you can implement this approach with a combination of Event Based scanning and Proactive Notifications, as described below.

  1. Create a staging bucket or utilize an existing bucket

  2. Turn bucket protection on for this bucket from the Antivirus for Amazon S3 console

  3. Create a Python (latest version) Lambda Function with the sample code provided below

  4. Make adjustments to the Lambda settings using the information below

  5. Subscribe the Lambda to the SNS Notifications Topic

  6. Add IAM permissions to the Lambda using the permission block provided below

  7. Modify the Topic Subscription to filter down to clean objects using the filter block provided below

  8. Test clean and "not clean" files to ensure the behavior is as expected

*Linked Account Buckets: Optionally, you can build your two-bucket system on linked account buckets as well by adding these pieces. **Multi-part Files Lambda: This Lambda also involves Step Functions to handle multi-part files and large files.

Sample Copy Lambda

The code below is a starting point that works out of the box, but more can be done with it and to it. Feel free to do so.

import json
import boto3
import os
from botocore.exceptions import ClientError, ParamValidationError
from urllib import parse

def lambda_handler(event, context):

    try:
        print(json.dumps(event))

        SOURCE_BUCKET = os.getenv("SOURCE_BUCKET", 'any')
        DESTINATION_BUCKET = os.getenv("DESTINATION_BUCKET", '<some failover bucket>')
        DELETE_STAGING = os.getenv("DELETE_STAGING", 'no')

        #print("Source bucket is:" + SOURCE_BUCKET)
        #print("Destination bucket is:" + DESTINATION_BUCKET)

        record = event['Records'][0]
        messageBucket = record['Sns']['MessageAttributes']['bucket']

        print("The messageBucket value is:")
        print(messageBucket['Value'])

        if (messageBucket['Value'] == SOURCE_BUCKET or SOURCE_BUCKET == 'any'):

            message = json.loads(record['Sns']['Message'])

            #print("The message content is:" + str(message))
            #print("The message key is: " + message['key'])

            s3 = boto3.resource('s3')

            copy_source = {
                'Bucket': messageBucket['Value'],
                'Key': message['key']
                }

            # PartsCount is only present when the object was uploaded via multipart upload,
            # so use a managed (multipart-capable) copy and re-apply the object's tags
            if 'PartsCount' in s3.meta.client.head_object(Bucket=messageBucket['Value'], Key=message['key'], PartNumber=1):
                # get the tags, then copy with tags specified
                #print("doing a multipart copy with tags")
                try:

                    tagging = s3.meta.client.get_object_tagging(Bucket=messageBucket['Value'], Key=message['key'])
                    #print("get object tagging = " + str(tagging))

                    s3.meta.client.copy(copy_source, DESTINATION_BUCKET, message['key'], ExtraArgs={'Tagging': parse.urlencode({tag['Key']: tag['Value'] for tag in tagging['TagSet']})})

                except Exception as e:
                    print(e)
                    raise(e)
            else:
                # copy as normal
                #print("doing a normal copy")
                try: 
                    s3.meta.client.copy_object(CopySource=copy_source, Bucket=DESTINATION_BUCKET, Key=message['key'])

                except Exception as e:
                    print(e)
                    raise(e)



            print("Copied:  " + message['key'] + "  to production bucket:  " + DESTINATION_BUCKET)

            #print("Delete files: " + DELETE_STAGING)

            if (DELETE_STAGING == 'yes'):
                try:

                    # delete from the bucket named in the notification so this also works when SOURCE_BUCKET is 'any'
                    s3.meta.client.delete_object(Bucket=messageBucket['Value'], Key=message['key'])
                    print("Deleted:  " + message['key'] + "  from source bucket:  " + messageBucket['Value'])

                except Exception as e:
                    print(e)
                    raise(e)

            return {
                'statusCode': 200,
                'body': json.dumps('Non-infected object moved to production bucket')
            }


        return {
            'statusCode': 200,
            'body': json.dumps('Not from the Staging bucket')
        }

    except ClientError as e:
        return {
            'statusCode': 400,
            'body': "Unexpected error: %s" % e}
    except ParamValidationError as e:
        return {
            'statusCode': 400,
            'body': "Parameter validation error: %s" % e}
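
Sample Lambda Permissions

For step 6, the Lambda's execution role needs IAM permissions that line up with the calls the sample code makes: read and tag-read on the staging bucket, write on the production bucket, and (if DELETE_STAGING is set to yes) delete on the staging bucket, in addition to the basic Lambda logging permissions. The block below is a minimal sketch based on those calls; the bucket names are placeholders, so confirm the exact permission blocks against the documentation for your release.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadAndTagReadStagingBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:GetObjectTagging"],
            "Resource": "arn:aws:s3:::<staging-bucket>/*"
        },
        {
            "Sid": "WriteProductionBucket",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:PutObjectTagging"],
            "Resource": "arn:aws:s3:::<production-bucket>/*"
        },
        {
            "Sid": "OptionalDeleteStagingBucket",
            "Effect": "Allow",
            "Action": "s3:DeleteObject",
            "Resource": "arn:aws:s3:::<staging-bucket>/*"
        }
    ]
}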

Custom/own Quarantine Bucket

If you decide to use your own quarantine bucket, you can use these same steps for a two-bucket system. You only need to go to Configuration > Scan Settings and change the action for infected files to Keep, and change the "Clean" subscription filter in step 7 to "Infected".
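
For reference, the Topic Subscription filter in step 7 is an SNS filter policy applied to the Lambda's subscription. The attribute name and value below are illustrative assumptions, not the product's published schema; check the message attributes your Proactive Notifications topic actually sends (the sample Lambda above, for instance, reads a bucket attribute) and adjust to match. For the quarantine-bucket variant described here, you would swap "Clean" for "Infected".

{
    "result": ["Clean"]
}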

If you need any help getting this set up, please Contact Us as we are happy to help.

How the 2 Bucket System Flows:

What ports do I need open for the product to function properly?

Port 443 for:

  • Outbound for Lambda calls

  • Outbound Console and Agent access to Amazon Elastic Container Registry (ECR)

  • Inbound access to Console for public access

    • Public access is not required as long as you have access via private IP

Port 80 for:

  • ClamAV signature updates

You can now set up local signature updates rather than reaching out over the internet. This allows you to set up an Amazon S3 bucket for the solution to pull signature updates from.

You can get a more detailed view and additional options for routing on the Deployment Details page. In either the standard deployment or the VPC Endpoints deployment, local signature updates let you remove all non-AWS calls from the application run space. With VPC Endpoints you can remove almost all public calls as well.

Can I change the CIDR range, VPC or Subnets post deployment for the console and agents?

Yes. The Console Settings page gives you the option to modify the inbound Security Group rules, the VPC and Subnets, and the specs of the task (vCPU and Memory). The Agent Settings page allows you to change the VPC and Subnets the agents run in, the specs of the task (vCPU and Memory), as well as all the scaling configuration aspects.

Do you use AWS Lambdas or EC2 Instances?

Neither. Antivirus for Amazon S3 infrastructure is built around AWS Fargate containers. We wanted to be serverless like Lambda and faster and more flexible than EC2s. Fargate containers give you persistence and other benefits that Lambdas aren't prepared to give you yet. We explored Lambda and do see some advantages there, but not enough to win out over AWS Fargate containers.

We do leverage two lambdas for the subdomain registration, but not for any of the workload at this time. If you are interested in a lambda-driven solution, please Contact Us to let us know. We are always exploring the best way to build and run our solution.

Do you support AWS Control Tower or Landing Zone?

A landing zone is a well-architected, multi-account AWS environment that's based on security and compliance best practices. AWS Control Tower automates the setup of a new landing zone using best-practices blueprints for identity, federated access, and account structure.

Antivirus for Amazon S3 is now tightly integrated with AWS Control Tower, and is designed to work within the landing zone context. Antivirus for Amazon S3 can be centrally deployed in a Security Services account while leveraging Linked Accounts to scan all other accounts. You can learn more about AWS Control Tower here.

Can I leverage Single Sign On (SSO) with your product?

Yes, you can leverage SSO with our solution. Antivirus for Amazon S3 utilizes Amazon Cognito for user management, and Amazon Cognito allows SAML integrations. Leveraging this capability, we can support various providers as part of SSO into our solution, both from the SSO Dashboard and from within the application itself.

It is a fairly simple task to set up the connection, and AWS has been kind enough to document how to do it. Follow the instructions found here:

Examples below:

Okta

Azure AD

Upon customer request, we have documented the steps to get GSuite working as your SSO provider. Please leverage the document below. Other customers have used these steps (along with the Okta and Azure AD write-ups) to set up other providers as well, such as Keycloak. GSuite Setup Instructions

Additional Actions Required

Okta identities will be auto-created within Amazon Cognito (and therefore Antivirus for Amazon S3) as simple Users and not Admins. They are also not assigned to a particular Group. This state enables them to log in, but manage nothing. One time only, you will need to assign the user to a group and the admin role. After this initial assignment, each subsequent login will allow for proper management.
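
This one-time assignment is normally done from within the Antivirus for Amazon S3 console, but because the users live in Amazon Cognito you can also script the group portion with the Cognito API. The sketch below uses hypothetical values for the user pool ID, username, and group name; it only demonstrates the Cognito group-assignment call and does not cover the product's admin-role setting.

import boto3

# Hypothetical identifiers -- replace with your own user pool, SSO-created username, and group
USER_POOL_ID = "us-east-1_EXAMPLE"
USERNAME = "okta_jane.doe@example.com"
GROUP_NAME = "ExampleAdmins"

cognito = boto3.client("cognito-idp")

# Add the SSO-created user to a Cognito group so subsequent logins pick up the assignment
cognito.admin_add_user_to_group(
    UserPoolId=USER_POOL_ID,
    Username=USERNAME,
    GroupName=GROUP_NAME,
)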

SSO users will also stand out, as their username will be created from the SSO sign-in process.

How can we scan your console/agent images for vulnerabilities?

You can access our images by either downloading them locally to your ECR or using the Container images commands on the Launch this software page on our AWS Marketplace listing.

You'll need to install and use the AWS CLI to authenticate to Amazon Elastic Container Registry and download the container images using the commands.

Below is a sample of the commands to pull our images from ECR for v7.01.001. However, you'll need to navigate to the Launch this software page on our AWS Marketplace listing to get the commands for our latest release.

aws ecr get-login-password \
    --region us-east-1 | docker login \
    --username AWS \
    --password-stdin 564477214187.dkr.ecr.us-east-1.amazonaws.com
    
CONTAINER_IMAGES="564477214187.dkr.ecr.us-east-1.amazonaws.com/cloud-storage-security/console:v7.01.001,564477214187.dkr.ecr.us-east-1.amazonaws.com/cloud-storage-security/agent:v7.01.001"    

for i in $(echo $CONTAINER_IMAGES | sed "s/,/ /g"); do docker pull $i; done
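
Once the images are pulled locally, you can run them through whatever image scanner you already use. As one example, if you happen to use the open-source Trivy scanner (not part of the product), you could loop over the same CONTAINER_IMAGES list:

for i in $(echo $CONTAINER_IMAGES | sed "s/,/ /g"); do trivy image "$i"; done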
