# Architecture Related

## Do I need to make any changes to my application to use the product?

No. The Antivirus for Amazon S3 solution will fit into your existing workflow. You do not have to make any changes to your current workflow.

## Can I scan S3 objects from more than one account from within the same deployment?

Yes, Antivirus for Amazon S3 supports cross-account scanning. This means you can centrally install the console and scanning agents to protect not only the account you are deployed within, but also any other AWS account where you can install a cross-account role.

Check out the [Linked Accounts](https://help.cloudstoragesec.com/console-overview/access-management/linked-accounts) documentation for more details.

## Can I set up a staging bucket for all of my files to first land in, and then move them to a production bucket after they have been found to be clean?

#### Two Bucket System <a href="#architecture-related-can-i-setup-a-staging-bucket-for-all-of-my-files-to-first-land-in-and-then-move" id="architecture-related-can-i-setup-a-staging-bucket-for-all-of-my-files-to-first-land-in-and-then-move"></a>

Yes, it is very easy to set up a two-bucket system, and many customers use this approach. We provide two methods of achieving this.

**Method 1: CSS Console Approach**

First, within the Configuration > Scan Settings menu there is a configuration option for [AV Two-Bucket System Configuration](https://help.cloudstoragesec.com/console-overview/configuration/scan-settings#av-two-bucket-system-configuration). This provides a quick and easy way to choose your source region or bucket(s) and a destination bucket to promote your clean files to. With this option, our agent handles promoting the clean files as part of the scanning process.

**Method 2: Lambda Approach**

Alternatively, you can implement this approach with a combination of [Event-Based scanning](https://help.cloudstoragesec.com/console-overview/configuration/agent-settings) and [Proactive Notifications](https://help.cloudstoragesec.com/console-overview/configuration/proactive-notifications), as described below.

1. Create a `staging bucket` or use an existing bucket
2. Turn [bucket protection on for this bucket](https://help.cloudstoragesec.com/console-overview/protection/aws/protected-buckets#enable-buckets-for-scanning) from the Antivirus for Amazon S3 console
3. Create a Python (latest runtime) Lambda function with the sample code provided below
4. Adjust the Lambda settings using the information below
5. Subscribe the Lambda to the SNS Notifications Topic
6. Add IAM permissions to the Lambda with the permission blocks provided below
7. Modify the Topic Subscription to filter down to `clean` objects with the filter block provided below
8. Test clean and "not clean" files to ensure the behavior is as expected

\*Linked account buckets: optionally, you can build your two-bucket system with linked-account buckets as well by adding the bucket policies shown in the corresponding tab.\
\*\*Multi-part files Lambda: this Lambda also uses Step Functions to handle multi-part and large files.

{% tabs %}
{% tab title="3. Sample Copy Lambda" %}
**Sample Copy Lambda**

The code below is a starting point and works out of the box, but more can be done with it and to it. Feel free to extend it.

```python
import json
import boto3
import os
from botocore.exceptions import ClientError, ParamValidationError
from urllib import parse

def lambda_handler(event, context):

    try:
        print(json.dumps(event))

        SOURCE_BUCKET = os.getenv("SOURCE_BUCKET", 'any')
        DESTINATION_BUCKET = os.getenv("DESTINATION_BUCKET", '<some failover bucket>')
        DELETE_STAGING = os.getenv("DELETE_STAGING", 'no')

        #print("Source bucket is:" + SOURCE_BUCKET)
        #print("Destination bucket is:" + DESTINATION_BUCKET)

        record = event['Records'][0]
        messageBucket = record['Sns']['MessageAttributes']['bucket']

        print("The messageBucket value is:")
        print(messageBucket['Value'])

        if (messageBucket['Value'] == SOURCE_BUCKET or SOURCE_BUCKET == 'any'):

            message = json.loads(record['Sns']['Message'])

            #print("The message content is:" + str(message))
            #print("The message key is: " + message['key'])

            s3 = boto3.resource('s3')

            copy_source = {
                'Bucket': messageBucket['Value'],
                'Key': message['key']
                }

            if 'PartsCount' in s3.meta.client.head_object(Bucket=messageBucket['Value'], Key=message['key'], PartNumber=1):
                # get the tags, then copy with tags specified
                #print("doing a multipart copy with tags")
                try:

                    tagging = s3.meta.client.get_object_tagging(Bucket=messageBucket['Value'], Key=message['key'])
                    #print("get object tagging = " + str(tagging))

                    s3.meta.client.copy(copy_source, DESTINATION_BUCKET, message['key'], ExtraArgs={'Tagging': parse.urlencode({tag['Key']: tag['Value'] for tag in tagging['TagSet']})})

                except Exception as e:
                    print(e)
                    raise(e)
            else:
                # copy as normal
                #print("doing a normal copy")
                try: 
                    s3.meta.client.copy_object(CopySource=copy_source, Bucket=DESTINATION_BUCKET, Key=message['key'])

                except Exception as e:
                    print(e)
                    raise(e)



            print("Copied:  " + message['key'] + "  to production bucket:  " + DESTINATION_BUCKET)

            #print("Delete files: " + DELETE_STAGING)

            if (DELETE_STAGING == 'yes'):
                try:

                    s3.meta.client.delete_object(Bucket=SOURCE_BUCKET, Key=message['key'])
                    print("Deleted:  " + message['key'] + "  from source bucket:  " + SOURCE_BUCKET)

                except Exception as e:
                    print(e)
                    raise(e)

            return {
            'statusCode': 200,
            'body': json.dumps('Non-infected object moved to production bucket')
            }   


        return {
                'statusCode': 200,
                'body': json.dumps('Not from the Staging bucket')
            }

    except ClientError as e:
        return {
            'statusCode': 400,
            'body': "Unexpected error: %s" % e}
    except ParamValidationError as e:
        return {
            'statusCode': 400,
            'body': "Parameter validation error: %s" % e}
```
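The dictionary comprehension in the multipart branch converts the `TagSet` returned by `get_object_tagging` into the URL-encoded string that the `Tagging` extra argument of `copy` expects. As a standalone illustration (the tag names and values here are made up):

```python
from urllib import parse

# A TagSet in the shape returned by get_object_tagging (hypothetical values)
tag_set = [
    {"Key": "scan-result", "Value": "Clean"},
    {"Key": "scanned-date", "Value": "2024-01-01"},
]

# Flatten to {Key: Value}, then URL-encode into "k1=v1&k2=v2" form
encoded = parse.urlencode({tag["Key"]: tag["Value"] for tag in tag_set})
print(encoded)  # scan-result=Clean&scanned-date=2024-01-01
```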

{% endtab %}

{% tab title="4. Make Adjustments to Lambda Settings" %}
**Make Adjustments to Lambda Settings**

There are some adjustments to the Lambda you'll probably need to make:

* Set the environment variables for the Staging Bucket and Production Bucket names as seen here:

| Field               | Description                                                                                                                                                                                                                                                                                               |
| ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| DELETE\_STAGING     | `yes` or `no` value based on whether you want the original file to be deleted after the copy has occurred                                                                                                                                                                                                 |
| DESTINATION\_BUCKET | Clean or Production bucket name identifying where to copy clean files to                                                                                                                                                                                                                                  |
| SOURCE\_BUCKET      | <p>Dirty or Staging bucket name identifying the originating bucket<br><br><em><strong>Note: you can leverage SNS Topic Filtering to eliminate the need for this value and the</strong><strong> </strong><strong><code>if</code></strong><strong> </strong><strong>check inside the code</strong></em></p> |

* Change the Timeout under General Configuration to a value that works for the typical file sizes you deal with. The larger the files, the longer the timeout should be so the Lambda doesn't time out before the copy finishes.
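Note that the sample code falls back to `any` when `SOURCE_BUCKET` is unset, which makes the bucket check pass for every notification. A minimal sketch of that guard logic (the function name is ours, not part of the product):

```python
import os

def should_copy(message_bucket: str) -> bool:
    # A default of "any" accepts notifications from every protected bucket
    source_bucket = os.getenv("SOURCE_BUCKET", "any")
    return source_bucket == "any" or message_bucket == source_bucket

# With SOURCE_BUCKET unset, every bucket passes the check
print(should_copy("some-staging-bucket"))  # True
```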
  {% endtab %}

{% tab title="6. Permissions to add to Lambda Role" %}
**Permissions to add to Lambda Role**

Add an Inline Policy to the Lambda Role that was created alongside the Lambda. Paste the below into the JSON screen and change the sections that have `<>` to match your staging bucket and production destination bucket names.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObjectTagging",
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<staging bucket name>",
                "arn:aws:s3:::<staging bucket name>/*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:PutObjectTagging"
            ],
            "Resource": [
                "arn:aws:s3:::<production destination bucket name>/*",
                "arn:aws:s3:::<production destination bucket name>"
            ]
        }
    ]
}
```

{% endtab %}

{% tab title="7. Subscription Filter for Clean Results" %}
**Subscription Filter for Clean Results**

Go to the SNS Topic itself and find the subscription created for the Lambda. Edit the filter settings by pasting in the value below. You can modify the filter further to include more scan results, or even filter down by bucket. Bucket filtering is a good idea if you have event-based scanning set up for any other bucket and you do not want the Lambda to copy those clean files over as well.

```json
{
    "notificationType": [
        "scanResult"
    ],
    "scanResult": [
        "Clean"
    ]
}
```

Alternatively, with bucket filtering:

```json
{
    "notificationType": [
        "scanResult"
    ],
    "scanResult": [
        "Clean"
    ],
    "bucket": [
        "<staging bucket name>"
    ]
}
```
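SNS evaluates the filter policy against the message attributes: every key in the policy must be present on the message with a value from the allowed list, while extra attributes (such as `bucket` when it isn't in the policy) are ignored. A simplified sketch of that matching rule — not the actual AWS implementation, and covering exact string matches only:

```python
def policy_matches(filter_policy: dict, attributes: dict) -> bool:
    # Every policy key must appear in the message attributes with an allowed value
    return all(
        attributes.get(key) in allowed_values
        for key, allowed_values in filter_policy.items()
    )

policy = {"notificationType": ["scanResult"], "scanResult": ["Clean"]}

print(policy_matches(policy, {"notificationType": "scanResult", "scanResult": "Clean"}))     # True
print(policy_matches(policy, {"notificationType": "scanResult", "scanResult": "Infected"}))  # False
```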

{% endtab %}

{% tab title="\* Linked Account buckets" %}
**Permissions on linked account buckets:**

* **Staging bucket policy:**

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "<lambda-role-arn>"
            },
            "Action": [
                "s3:DeleteObjectTagging",
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<staging-bucket>",
                "arn:aws:s3:::<staging-bucket>/*"
            ]
        }
    ]
}
```

* **Production bucket policy:**

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "<lambda-role-arn>"
            },
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:PutObjectTagging"
            ],
            "Resource": [
                "arn:aws:s3:::<production-bucket>/*",
                "arn:aws:s3:::<production-bucket>"
            ]
        }
    ]
}
```

{% endtab %}

{% tab title="\*\*Multi-part files Lambda" %}
**Full Lambda for multi-part files handling**

```python
import json
import boto3
from botocore.exceptions import ClientError, ParamValidationError
from urllib import parse
import time
import math

def lambda_handler(event, context):

    try:
        print(json.dumps(event))

        record = event['Records'][0]
        message = json.loads(record['Sns']['Message'])

        # Modify destination Bucket as appropriate
        sourceBucket = str(message['bucketName'])
        destinationBucket = sourceBucket

        # Modify destination key as appropriate
        sourceKey = str(message['key'])
        destinationKey = sourceKey.replace("incoming", "landing", 1)

        # Defined up front: the copy/delete log messages at the bottom of the
        # handler reference this even when the large-file branch never runs
        source_bucket_and_key = f'{sourceBucket}/{sourceKey}'

        # Set to appropriate step function state machine ARN
        stateMachineArn = 'arn:aws:states:<region>:<account_id>:stateMachine:<stateMachineName>'

        print("The source bucket is:  " + sourceBucket)
        print("The destination bucket is:  " + destinationBucket)
        print("The source key is:  " + sourceKey)
        print("The destination key: " + destinationKey)
        print("The State Machine ARN: " + stateMachineArn)

        s3 = boto3.resource('s3')

        copy_source = {
            'Bucket': sourceBucket,
            'Key': sourceKey
        }

        meta_data = s3.meta.client.head_object(
            Bucket=sourceBucket, Key=sourceKey)
        parts_meta = s3.meta.client.head_object(
            Bucket=sourceBucket, Key=sourceKey, PartNumber=1)

        partsCount = parts_meta["PartsCount"] if 'PartsCount' in parts_meta else 0

        if partsCount > 0:
            totalContentLength = meta_data["ContentLength"]
            partSize = math.floor(totalContentLength / partsCount)

            print(f'FileSize: {totalContentLength} -- PartSize: {partSize} -- PartsCount: {partsCount}')

            tagging = s3.meta.client.get_object_tagging(
                Bucket=sourceBucket, Key=sourceKey)
            tagging_encoded = parse.urlencode(
                {tag['Key']: tag['Value'] for tag in tagging['TagSet']})

            if totalContentLength >= 25_000_000_000:
                source_bucket_and_key = f'{sourceBucket}/{sourceKey}'

                try:
                    # 25GB or larger file, use step function to copy due to lambda run time limit
                    print("Using step function to copy extra large file")

                    # Initiate multi-part upload
                    kwargs = dict(
                        Bucket=destinationBucket,
                        Key=destinationKey,
                        Tagging=tagging_encoded,
                        Metadata=meta_data['Metadata'],
                        StorageClass=meta_data['StorageClass'] if 'StorageClass' in meta_data else None,
                        ServerSideEncryption=meta_data['ServerSideEncryption'] if 'ServerSideEncryption' in meta_data else None,
                        SSEKMSKeyId=meta_data['SSEKMSKeyId'] if 'SSEKMSKeyId' in meta_data else None,
                        BucketKeyEnabled=meta_data['BucketKeyEnabled'] if 'BucketKeyEnabled' in meta_data else None,
                        ObjectLockMode=meta_data['ObjectLockMode'] if 'ObjectLockLegalHoldStatus' in meta_data and 'ObjectLockRetainUntilDate' in meta_data else None,
                        ObjectLockRetainUntilDate=meta_data['ObjectLockRetainUntilDate'] if 'ObjectLockLegalHoldStatus' in meta_data and 'ObjectLockRetainUntilDate' in meta_data else None,
                        ObjectLockLegalHoldStatus=meta_data[
                            'ObjectLockLegalHoldStatus'] if 'ObjectLockLegalHoldStatus' in meta_data else None
                    )

                    multipart_upload_response = s3.meta.client.create_multipart_upload(
                        **{k: v for k, v in kwargs.items() if v is not None}
                    )

                    uploadId = multipart_upload_response['UploadId']

                    # kick off step function executions
                    sfn_client = boto3.client('stepfunctions')

                    parts = []
                    baseInput = {
                        'Bucket': destinationBucket,
                        'Key': destinationKey,
                        'CopySource': source_bucket_and_key,
                        'SourceBucket': sourceBucket,
                        'SourceKey': sourceKey,
                        'UploadId': uploadId,
                        'PartsCount': partsCount,
                        'CompleteUpload': False
                    }

                    for partNumber in range(1, partsCount + 1):
                        byteRangeStart = (partNumber - 1) * partSize
                        byteRangeEnd = byteRangeStart + partSize - 1

                        if byteRangeEnd > totalContentLength or partNumber == partsCount:
                            byteRangeEnd = totalContentLength - 1

                        parts.append({
                            'PartNumber': partNumber,
                            'CopySourceRange': f'bytes={byteRangeStart}-{byteRangeEnd}'
                        })

                        if len(parts) == 250 and partNumber != partsCount:
                            executionInput = dict(baseInput)
                            executionInput['Parts'] = parts

                            sfn_client.start_execution(
                                stateMachineArn=stateMachineArn,
                                name=f'copy_parts_{time.time() * 1000}',
                                input=json.dumps(executionInput)
                            )

                            parts.clear()

                    executionInput = dict(baseInput)
                    executionInput['Parts'] = parts
                    executionInput['CompleteUpload'] = True

                    sfn_client.start_execution(
                        stateMachineArn=stateMachineArn,
                        name=f'copy_parts_{time.time() * 1000}',
                        input=json.dumps(executionInput)
                    )

                    print("Step function execution started.")

                    return {
                        'statusCode': 200,
                        'body': json.dumps('Non-infected object moved to production bucket')
                    }
                except Exception as e:
                    print(
                        f'Failed to execute step function to copy {source_bucket_and_key}')
                    print(e)
                    raise (e)
            else:
                # get the tags, then copy with tags specified
                print("Performing a multipart copy with tags")
                try:
                    s3.meta.client.copy(copy_source, destinationBucket, destinationKey, ExtraArgs={
                                        'Tagging': tagging_encoded})

                except Exception as e:
                    print(e)
                    raise (e)
        else:
            # copy as normal
            print("Performing a normal copy")
            try:
                s3.meta.client.copy_object(
                    CopySource=copy_source, Bucket=destinationBucket, Key=destinationKey)

            except Exception as e:
                print(e)
                raise (e)

        print(
            f'Copied: {source_bucket_and_key} to destination: {destinationBucket}/{destinationKey}')

        try:
            s3.meta.client.delete_object(Bucket=sourceBucket, Key=sourceKey)
            print(f'Deleted: {source_bucket_and_key}')

        except Exception as e:
            print(e)
            raise (e)

        return {
            'statusCode': 200,
            'body': json.dumps('Non-infected object moved to production bucket')
        }

    except ClientError as e:
        return {
            'statusCode': 400,
            'body': "Unexpected error: %s" % e}
    except ParamValidationError as e:
        return {
            'statusCode': 400,
            'body': "Parameter validation error: %s" % e}
```
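The part-range arithmetic above must cover the object exactly once, with inclusive byte ranges and the final part absorbing any remainder from the floor division. The same calculation, pulled out as a standalone function (our naming) for clarity:

```python
import math

def build_part_ranges(total_length: int, parts_count: int):
    """Split total_length bytes into parts_count inclusive byte ranges."""
    part_size = math.floor(total_length / parts_count)
    ranges = []
    for part_number in range(1, parts_count + 1):
        start = (part_number - 1) * part_size
        end = start + part_size - 1
        # The final part absorbs any remainder from the floor division
        if end > total_length or part_number == parts_count:
            end = total_length - 1
        ranges.append((part_number, f"bytes={start}-{end}"))
    return ranges

print(build_part_ranges(10, 3))
# [(1, 'bytes=0-2'), (2, 'bytes=3-5'), (3, 'bytes=6-9')]
```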

{% endtab %}
{% endtabs %}

{% hint style="info" %}
**Custom/own Quarantine Bucket**

If you decide to use your own quarantine bucket, you can use these same steps for a two-bucket system. You only need to go to **Configuration > Scan Settings**, change the action for infected files to **Keep**, and change the "Clean" subscription filter in **step 7** to "Infected".
{% endhint %}

If you need any help getting this setup, please [Contact Us](https://help.cloudstoragesec.com/contact-us) as we are happy to help.

How the Two-Bucket System flows:

<figure><img src="https://905555942-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlGcQw8I2CHyi1loKBlfi%2Fuploads%2FGPwtLkI31lVV3YiiLi8i%2Fimage.png?alt=media&#x26;token=60981bff-221d-4d2f-9710-2555817d3dbb" alt=""><figcaption></figcaption></figure>

## What ports do I need open for the product to function properly?

Port 443 for:

* Outbound for Lambda calls
* Outbound Console and Agent access to Elastic Container Repository (ECR)
* Inbound access to Console for public access
  * Public access is not required as long as you have access via private IP

Port 80 for:

* ClamAV signature updates

{% hint style="info" %}
You can now set up [local signature updates](https://help.cloudstoragesec.com/console-overview/configuration/scan-settings#private-mirror-local-signature-updates) rather than reaching out over the internet. This lets you point the solution at an Amazon S3 bucket you control for signature updates.
{% endhint %}

You can get a more detailed view and additional routing options on the [Deployment Details page](https://help.cloudstoragesec.com/how-it-works/deployment-details). In either the standard deployment or the VPC Endpoints deployment, local signature updates let you remove all non-AWS calls from the application run space. With VPC Endpoints you can remove almost all public calls as well.

## Can I change the CIDR range, VPC or Subnets post deployment for the console and agents?

Yes. The [Console Settings](https://help.cloudstoragesec.com/console-overview/configuration/console-settings) page gives you the option to modify the inbound Security Group rules, the VPC and Subnets, and the task specs (vCPU and memory). The [Agent Settings](https://help.cloudstoragesec.com/console-overview/configuration/agent-settings) page allows you to change the VPC and Subnets the agents run in, the task specs (vCPU and memory), as well as all scaling configuration aspects.

## Do you use AWS Lambdas or EC2 Instances?

**Neither**. Antivirus for Amazon S3 is built on AWS Fargate containers. We wanted to be serverless like Lambda, yet faster and more flexible than EC2 instances. Fargate containers give you persistence and other benefits that Lambda isn't prepared to give you yet. We explored Lambda and do see some advantages there, but not enough to win out over AWS Fargate containers.

We do leverage two Lambda functions for [subdomain registration](https://help.cloudstoragesec.com/console-overview/configuration/console-settings#subdomain-management), but not for any of the scanning workload at this time. If you are interested in a Lambda-driven solution, please [Contact Us](https://help.cloudstoragesec.com/contact-us) to let us know. We are always exploring the best way to build and run our solution.

## Do you support AWS Control Tower or Landing Zone?

A landing zone is a well-architected, multi-account AWS environment that's based on security and compliance best practices. AWS Control Tower automates the setup of a new landing zone using best-practices blueprints for identity, federated access, and account structure.

Antivirus for Amazon S3 is now tightly integrated with [AWS Control Tower](https://aws.amazon.com/controltower/), and is designed to work within the landing zone context. Antivirus for Amazon S3 can be centrally deployed in a `Security Services` account while leveraging [Linked Accounts](https://help.cloudstoragesec.com/console-overview/access-management/linked-accounts) to scan all other accounts. You can learn more about [AWS Control Tower](https://aws.amazon.com/about-aws/whats-new/2022/11/aws-control-tower-account-customization/) here.

## Can I leverage Single Sign On (SSO) with your product?

Yes, you can leverage SSO with our solution. Antivirus for Amazon S3 uses Amazon Cognito for user management, and Cognito supports SAML integrations. Leveraging this capability, we can support various providers as part of SSO into our solution, both from the SSO dashboard and from within the application itself.

We've documented SSO integration for Entra ID and Okta below:

* For [Okta](https://help.cloudstoragesec.com/how-it-works/sso-integrations/okta-sso-integration)
* For [Entra ID](https://help.cloudstoragesec.com/faq/broken-reference)

#### Examples below:

<figure><img src="https://905555942-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlGcQw8I2CHyi1loKBlfi%2Fuploads%2FPdmc6itoTusjVgCj6nHv%2Fimage.png?alt=media&#x26;token=b80f9073-221a-45cc-90c3-328c251aa071" alt=""><figcaption><p>Sign-in screen</p></figcaption></figure>

#### Okta

<figure><img src="https://905555942-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlGcQw8I2CHyi1loKBlfi%2Fuploads%2FSxbMC0uJbIz9gQ1Z4F3g%2Fimage.png?alt=media&#x26;token=0753ef9e-3f9f-4484-b379-f2dded0e22e4" alt=""><figcaption><p>Okta UI</p></figcaption></figure>

#### Entra ID

<figure><img src="https://905555942-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlGcQw8I2CHyi1loKBlfi%2Fuploads%2Fq7Oq9klPOpSYOvi0d4Ec%2Fimage.png?alt=media&#x26;token=34520368-cfc7-4ddf-b8ad-cbbaf1141c84" alt=""><figcaption><p>portal.azure.com > Enterprise Applications</p></figcaption></figure>

Upon customer request, we have documented the steps to get GSuite (Google Workspace) working as your SSO provider; please use the document below. Other customers have leveraged these steps (along with the Okta and Entra ID write-ups) to set up additional providers such as Keycloak.\
[GSuite Setup Instructions](https://css-public-docs.s3.amazonaws.com/SSO-GSuite_GoogleWorkspace_v1.pdf)

### Additional Actions Required

Okta identities are auto-created within Amazon Cognito (and therefore Antivirus for Amazon S3) as plain `Users`, not `Admins`, and are not assigned to a group. In this state they can log in but manage nothing. You will need to assign each user to a group and the admin role once; after this initial assignment, every subsequent login allows for proper management.

SSO users will also stand out, as their username is generated by the SSO sign-in process.

<figure><img src="https://905555942-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlGcQw8I2CHyi1loKBlfi%2Fuploads%2FL8Ge2VJsJXPTvG8Y16UA%2Fimage.png?alt=media&#x26;token=0088fe38-0f68-4bb0-b0d9-14fd37ef94bf" alt=""><figcaption></figcaption></figure>

## How can we scan your console/agent images for vulnerabilities?

You can access our images by either [downloading them locally to your ECR](https://help.cloudstoragesec.com/getting-started/how-to-deploy/advanced-deployment-considerations#how-to-setup-local-ecr-mirroring-for-css-console-and-agent-repositories) or using the Container images commands on the `Launch this software` page on our AWS Marketplace listing.

{% hint style="info" %}
You'll need to install and use the AWS CLI to authenticate to Amazon Elastic Container Registry and download the container images using the commands.
{% endhint %}

Below is a sample of the commands to pull our images from ECR for v7.01.001. However, you'll need to navigate to the `Launch this software` page on our AWS Marketplace listing to get the commands for our latest release.

```
aws ecr get-login-password \
    --region us-east-1 | docker login \
    --username AWS \
    --password-stdin 564477214187.dkr.ecr.us-east-1.amazonaws.com
    
CONTAINER_IMAGES="564477214187.dkr.ecr.us-east-1.amazonaws.com/cloud-storage-security/console:v7.01.001,564477214187.dkr.ecr.us-east-1.amazonaws.com/cloud-storage-security/agent:v7.01.001"    

for i in $(echo $CONTAINER_IMAGES | sed "s/,/ /g"); do docker pull $i; done
```

<figure><img src="https://905555942-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlGcQw8I2CHyi1loKBlfi%2Fuploads%2FIBtCo0gl31mgaUeOonQf%2FScreenshot%202023-08-17%20at%2015.41.31.png?alt=media&#x26;token=b9e093f7-90da-4b2b-8a13-e60367556dbf" alt="" width="375"><figcaption></figcaption></figure>

## How do I completely tear down CSS Infrastructure?

If you are using CloudFormation, simply deleting the stack may miss additional infrastructure created over the course of using our product. We recommend going to **Monitoring > Deployment** and choosing **Delete Application** so we can clean up the infrastructure for you before you delete the stack.

More details are located at [this page](https://help.cloudstoragesec.com/console-overview/monitoring/deployment-overview#delete-application).

