# Bring your own bucket for Log Management

In this guide, you will query raw data using Radiant’s log management and search features. You'll choose between two storage options: using Radiant-hosted logs or configuring your own external bucket.

{% hint style="warning" %}
**Important Note**: For proof of concept (POC) or testing purposes, we allow customers to offload log hosting to us. However, we expect all customers to switch to their own bucket once they’ve converted from POC.
{% endhint %}

Once a storage configuration is chosen, it cannot be changed directly. Changing it requires us to manually reset the configuration and start fresh, which drops all data ingested up to that point. For example, POC customers who start with Radiant-hosted logs will have that data deleted when they switch to bringing their own bucket. We can backfill data by re-ingesting from the tenant’s connectors, but this requires extra time.

### Requirements

All you need is an AWS account. There are two things you must do to enable this configuration in AWS:

* [ ] Create a bucket in either `us-west-2` or `eu-central-1`, depending on which region your tenant is using.
* [ ] Add the policy below to the bucket, which allows our cross-account role to manage the bucket.

After this has been completed, you’ll add your bucket configuration in Radiant Security. Before saving the configuration, we check that the bucket is in the correct AWS region and verify that we have the correct permissions. If these verification steps pass, the bucket configuration can be saved. Otherwise, you’ll see an error letting you know that your bucket is not set up correctly. Once the bucket configuration is saved, your log data will immediately start being ingested.

### Create an S3 bucket in AWS

1. Sign in to your [AWS Management Console](https://aws.amazon.com/console/).
2. Navigate to the S3 service by typing **S3** in the search bar, then click **S3** to open the S3 dashboard.
3. Click the **Create bucket** button and configure the following:
   * **Bucket name**: Enter a unique bucket name (e.g., `my-unique-bucket-name`).
   * **Region**: Select the region that matches your tenant location:
     * **US West (Oregon):** `us-west-2`
     * **Europe (Frankfurt):** `eu-central-1`
4. To configure **Default encryption**, under **Encryption type**, select **Server-side encryption with Amazon S3 managed keys (SSE-S3)**. Leave **Bucket Key** as **Disabled**.

   <figure><img src="https://2439665791-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FPsFulb2ZOtSPcRSc2rXE%2Fuploads%2FoKGAr13YjGTXBnPg7NlX%2FBring%20your%20own%20bucket%20for%20Log%20Management_01.png?alt=media&#x26;token=e0112c86-5445-4080-a603-b0a7d4935afd" alt=""><figcaption></figcaption></figure>
5. Scroll to the bottom of the page and click **Create bucket**.
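If you prefer to script the bucket setup rather than use the console, the same encryption settings can be applied with the AWS CLI. As a sketch, the JSON below is the server-side encryption configuration body you would pass to `aws s3api put-bucket-encryption`: it enables SSE-S3 (`AES256`) and leaves Bucket Key disabled, matching the console steps above.

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      },
      "BucketKeyEnabled": false
    }
  ]
}
```

For example, saved as `encryption.json`, it could be applied with `aws s3api put-bucket-encryption --bucket my-unique-bucket-name --server-side-encryption-configuration file://encryption.json` (the bucket name here is illustrative).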

### S3 Intelligent-Tiering

S3 Intelligent-Tiering can be enabled to reduce storage costs; however, only the automatic access tiers are supported: **Frequent Access Tier**, **Infrequent Access Tier**, and **Archive Instant Access Tier**.

{% hint style="danger" %}
**Warning:** *Do not* enable the **Archive Access Tier** or **Deep Archive Access Tier**. Activating either tier can lead to lost alerts and lost events.
{% endhint %}
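One safe way to enable Intelligent-Tiering is with a lifecycle rule that transitions objects to the `INTELLIGENT_TIERING` storage class; by default this uses only the automatic access tiers, and the optional archive tiers stay off unless you opt in through a separate Intelligent-Tiering configuration. As a sketch, the lifecycle configuration body below (for `aws s3api put-bucket-lifecycle-configuration`) moves all objects to Intelligent-Tiering on day zero; the rule ID is illustrative.

```json
{
  "Rules": [
    {
      "ID": "radiant-intelligent-tiering",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        {
          "Days": 0,
          "StorageClass": "INTELLIGENT_TIERING"
        }
      ]
    }
  ]
}
```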

### Add a bucket policy

1. From the S3 dashboard, click on the bucket name you just created.
2. Go to the **Permissions** tab within the bucket's dashboard.
3. Add a bucket policy to allow our cross-account role (configured as `Principal` in the JSON below) to manage the bucket.
   * Scroll down to the **Bucket policy** section and click **Edit**.
   * Copy and paste the policy below in the text editor.
   * In the JSON below, replace `<aws_account_id>` with your region's account ID:
     * **us-west-2**: `649384204969`
     * **eu-central-1**: `076657324990`
   * Replace `<s3_bucket_name>` with the name of the bucket you created.

{% code overflow="wrap" %}

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RadiantSecurityIngestionFullAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<aws_account_id>:role/radiant_security_ingestion_role"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<s3_bucket_name>",
        "arn:aws:s3:::<s3_bucket_name>/*"
      ]
    },
    {
      "Sid": "RadiantSecurityBYOBBackfillAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<aws_account_id>:role/logmanagement-customers-split-files-backfill-sa-role"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<s3_bucket_name>",
        "arn:aws:s3:::<s3_bucket_name>/*"
      ]
    }
  ]
}
```

{% endcode %}

4. Click **Save changes** to apply the policy.
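As a concrete example, after substitution for a `us-west-2` tenant with a (hypothetical) bucket named `my-unique-bucket-name`, the first statement of the policy would read:

```json
{
  "Sid": "RadiantSecurityIngestionFullAccess",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::649384204969:role/radiant_security_ingestion_role"
  },
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::my-unique-bucket-name",
    "arn:aws:s3:::my-unique-bucket-name/*"
  ]
}
```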

### Configure Log Management in Radiant Security

1. Log in to [Radiant Security](https://app.radiantsecurity.ai/).
2. From the navigation menu, click **Log Management**.
3. From **Log Management**, click **+ Add Credentials**.
4. In the side menu, paste the bucket name (not ARN) that you created in the [Create an S3 Bucket in AWS](#create-an-s3-bucket-in-aws) section.
5. Click **Add credentials** to save the bucket configuration.

