Merge branch 'main' into ebentley/rename-roles
chrisdoman authored Jan 31, 2025
2 parents 84f79e1 + e0980e9 commit 6f4e089
Showing 25 changed files with 274 additions and 19 deletions.
2 changes: 1 addition & 1 deletion docs/cado/deploy/aws/aws-nfs.md
@@ -6,7 +6,7 @@ sidebar_position: 4

# NFS

- The initial deployment deployment deploys without a Network File Share (NFS). Enabling an NFS allows Cado to keep a copy of every file processed on disk. This enables the re-running of analysis and the downloading of the original file in the UI for further analysis.
+ The initial minimal deployment deploys without a Network File Share (NFS). Enabling an NFS allows Cado to keep a copy of every file processed on disk. This enables the re-running of analysis and the downloading of the original file in the UI for further analysis.

### Prerequisites

2 changes: 1 addition & 1 deletion docs/cado/deploy/aws/aws-secret-manager.md
@@ -6,7 +6,7 @@ sidebar_position: 4

# Secret Manager

- The initial deployment stores the key used to encrypt secrets in Cado locally on the machine. Enabling a Secret Manager allows Cado to instead store the key in AWS Secrets Manager.
+ The initial minimal deployment stores the key used to encrypt secrets in Cado locally on the machine. Enabling a Secret Manager allows Cado to instead store the key in AWS Secrets Manager.

### Prerequisites

2 changes: 1 addition & 1 deletion docs/cado/deploy/aws/aws-workers.md
@@ -6,7 +6,7 @@ sidebar_position: 4

# Workers

- The initial deployment runs everything on a single EC2 instance. In order to limit load on this instance and ensure the platform remains stable we limit types of imports that can be run to those based around Cado Host captures and small artefacts stored in S3. We also limit how many pieces of evidence can be processed at once.
+ The initial minimal deployment runs everything on a single EC2 instance. In order to limit load on this instance and ensure the platform remains stable we limit types of imports that can be run to those based around Cado Host captures and small artefacts stored in S3. We also limit how many pieces of evidence can be processed at once.

To enable processing data from all sources or to process many items of evidence at once, Cado must be configured to allow it to run imports on additional EC2s.

2 changes: 1 addition & 1 deletion docs/cado/deploy/aws/iam/required-role-scoping.md
@@ -19,7 +19,7 @@ Switch your platform to run using the instance role.
1. Update your `myCadoResponseInstanceRolePolicy` to match https://github.com/cado-security/Deployment-Templates/blob/main/new-roles/AWSInstanceRole.json

- > **NOTE:** Replace MY_CADO_BUCKET with the name of your Cado S3 bucket.
+ > **NOTE:** Replace MY_CADO_BUCKET with the name of your Cado S3 bucket. Replace MY_CADO_CLODWATCH_LOG_GROUP with the ARN of your CloudWatch Log Group if using.
i. This adds the permissions required to run the Cado platform to the `myCadoResponseInstanceRolePolicy`, which leads to some duplications with `myCadoResponseRolePolicy`. The duplicate permissions will be removed in a later step.
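
If you prefer to apply the update from the command line, a minimal AWS CLI sketch is shown below. It assumes you have saved the updated policy JSON locally as `AWSInstanceRole.json` (with MY_CADO_BUCKET already substituted) and that the instance role is named `myCadoResponseInstanceRole`; adjust the names to match your deployment.

```bash
# Hypothetical role name; the inline policy name matches the one referenced above.
aws iam put-role-policy \
  --role-name myCadoResponseInstanceRole \
  --policy-name myCadoResponseInstanceRolePolicy \
  --policy-document file://AWSInstanceRole.json
```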

2 changes: 1 addition & 1 deletion docs/cado/deploy/azure/azure-nfs.md
@@ -6,7 +6,7 @@ sidebar_position: 8

# NFS

- The initial deployment deployment deploys without a Network File Share (NFS). Enabling an NFS allows Cado to keep a copy of every file processed on disk. This enables the re-running of analysis and the downloading of the original file in the UI for further analysis.
+ The initial minimal deployment deploys without a Network File Share (NFS). Enabling an NFS allows Cado to keep a copy of every file processed on disk. This enables the re-running of analysis and the downloading of the original file in the UI for further analysis.

### Prerequisites

127 changes: 127 additions & 0 deletions docs/cado/deploy/azure/azure-quickstart-deployment.md
@@ -0,0 +1,127 @@
---
title: Minimal Deployment Guide
hide_title: true
sidebar_position: 2
---

# Minimal Deployment Guide

This guide provides step-by-step instructions for deploying a Cado instance from the Azure console, aimed at helping you get up and running with the platform as quickly as possible.

The initial deployment offers a basic working environment; however, certain functionalities are not included. Refer to the ‘Extensions’ section for details on missing features and instructions on how to add them.

## Initial Deployment

## Prerequisites

Before starting, make sure you have all of the following (an Azure CLI sketch for creating these resources follows the list):

- **A resource group** (exclusively for Cado), with the following resources:
- **A storage account**
- **A blob container** in the storage account above
- **A network security group** configured to allow inbound traffic from your IP. At the very least, it will need to allow HTTPS, although SSH may also be useful.
- Either a **“User Assigned Managed Identity”** named `cado-identity` or a **“Service Principal”**, with the `Contributor` role assignment scoped to this resource group.
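
If you want to create these with the Azure CLI rather than the portal, a minimal sketch is below. All names are hypothetical placeholders; substitute your own resource group, storage account, container and NSG names, and your public IP.

```bash
# Resource group dedicated to Cado.
az group create --name my-cado-rg --location eastus

# Storage account and blob container for evidence.
az storage account create --resource-group my-cado-rg --name mycadostorage --location eastus
az storage container create --account-name mycadostorage --name cado-evidence --auth-mode login

# Network security group allowing inbound HTTPS from your IP only.
az network nsg create --resource-group my-cado-rg --name my-cado-nsg
az network nsg rule create --resource-group my-cado-rg --nsg-name my-cado-nsg \
  --name AllowHTTPSInbound --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443 --source-address-prefixes <YOUR_IP>/32
```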

You can add a role assignment to a managed identity by following these steps (an equivalent Azure CLI sketch follows the screenshots):

1. Navigate to the **“User Assigned Managed Identity”** resource.

2. Select **“Azure role assignments”** on the sidebar.

![Azure Cado Identity](/img/cado-identity-overview.png)

3. Click **Add role assignments (preview)**

![Azure Role](/img/cado-identity-azure-role.png)

4. Fill in the form, selecting the Resource group you are deployed into and the Contributor role, then press **Save**

![Azure Role Assignment](/img/add-role-assignment.png)
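
The same role assignment can be scripted with the Azure CLI. A minimal sketch, again using the hypothetical resource group name `my-cado-rg` and your own subscription ID:

```bash
# Create the identity (skip if it already exists) and capture its principal ID.
az identity create --resource-group my-cado-rg --name cado-identity
PRINCIPAL_ID=$(az identity show --resource-group my-cado-rg --name cado-identity --query principalId -o tsv)

# Scope the Contributor role to the Cado resource group.
az role assignment create \
  --assignee-object-id "$PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role Contributor \
  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-cado-rg"
```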

## Instructions

### Deploying from the Cado Image

1. Open the “Community Images” service in Azure. Then, filter the images by the Cado public gallery name (CadoPlatform-1a38e0c7-afa4-4e0d-9c56-433a12cd67b1) to list all the Cado images available for deployment.

![Community Image](/img/community-image.png)

2. Select an image in the location where you want to deploy. The supported regions are:

- East US
- East US 2
- Central US
- North Central US
- South Central US
- West US
- West US 2
- West US 3
- West Central US

Once an image has been selected, select a version to deploy (the latest version is recommended) and press **Create VM**.

![Cado Response v2](/img/cadoresponsev2.png)

3. Override the following settings on each page of the creation wizard:

#### Overview

- Resource group -> your resource group name

![Cado Response v2](/img/resource-group.png)

- Name -> Choose a name for your VM
- Size -> At least D4S_v3 (recommended for production: D16ds_v4)

![Cado Response v2](/img/disk-size.png)

- Administrator account username -> adminuser
- SSH public key source -> A key you have access to, or generate a new key.
- Public inbound ports -> None
- Licensing -> Other

#### Disk

- A data disk with Name “cado-main-vm-disk” is required.
- 100 GB size is recommended.
- The LUN value should be set to 10 and Host Caching to Read/Write.
- Make sure “Delete with VM” is unticked.

The options should look as shown below:

![Azure Data Disk](/img/azure-data-disk.png)

#### Networking

- Ensure “Delete public IP and NIC when VM is deleted” is unticked

#### Advanced

- Enable user data -> True
- User data -> The following script, replacing `<STORAGE_ACCOUNT_NAME>` and `<BLOB_STORE_NAME>` with your storage account and blob container names respectively.

```bash
#!/bin/bash -x
# Writes the first-run configuration pointing Cado at your storage account and blob container.
echo "[FIRST_RUN]" > /home/admin/processor/first_run.cfg
echo "azure_storage_account = <STORAGE_ACCOUNT_NAME>" | sudo tee -a /home/admin/processor/first_run.cfg
echo "bucket = <BLOB_STORE_NAME>" >> /home/admin/processor/first_run.cfg
```

Additionally, if using a service principal, the following extra lines need to be added to the user data.

```bash
echo -n "<CLIENT_ID>" | sudo tee -a /home/admin/processor/envars/AZURE_CLIENT_ID
echo -n "<TENANT_ID>" | sudo tee -a /home/admin/processor/envars/AZURE_TENANT_ID
echo -n "<CLIENT_SECRET>" | sudo tee -a /home/admin/processor/envars/AZURE_CLIENT_SECRET
```

4. After all these settings are configured, review and create the virtual machine.

5. If you are using a managed identity: Once your virtual machine has been created, assign the user managed identity to it in the UI.
> This can be done on the **Security -> Identity** pane of the VM on the “User assigned” tab.
![Add User](/img/add-user.png)

6. The network security group created earlier needs to be associated with the subnet the virtual machine resides in. Once the VM is running, this can be done from its Overview section by selecting Network Settings > Create Port Rule > Inbound Port Rule.
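
If you would rather associate the NSG with the subnet from the CLI, a sketch (hypothetical resource group, VNet and subnet names; substitute the ones your VM actually uses):

```bash
az network vnet subnet update \
  --resource-group my-cado-rg \
  --vnet-name my-cado-vnet \
  --name default \
  --network-security-group my-cado-nsg
```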

> The default password for the VM is the [Resource ID](https://docs.cadosecurity.com/cado/deploy/logging-in) of the virtual machine.
2 changes: 1 addition & 1 deletion docs/cado/deploy/azure/azure-secret-manager.md
@@ -6,7 +6,7 @@ sidebar_position: 9

# Secret Manager

- The initial deployment stores the key used to encrypt secrets in Cado locally on the machine. Enabling a Secret Manager allows Cado to instead store the key in Azure Key Vault.
+ The initial minimal deployment stores the key used to encrypt secrets in Cado locally on the machine. Enabling a Secret Manager allows Cado to instead store the key in Azure Key Vault.

### Prerequisites

2 changes: 1 addition & 1 deletion docs/cado/deploy/azure/azure-workers.md
@@ -6,7 +6,7 @@ sidebar_position: 7

# Workers

- The initial deployment runs everything on a single Compute instance. In order to limit load on this instance and ensure the platform remains stable we limit types of imports that can be run to those based around Cado Host captures. We also limit how many pieces of evidence can be processed at once.
+ The initial minimal deployment runs everything on a single Compute instance. In order to limit load on this instance and ensure the platform remains stable we limit types of imports that can be run to those based around Cado Host captures. We also limit how many pieces of evidence can be processed at once.

To enable processing data from all sources or to process many items of evidence at once, Cado must be configured to allow it to run imports on additional Compute instances.

17 changes: 14 additions & 3 deletions docs/cado/deploy/cross/aws-sts.md
@@ -20,10 +20,21 @@ In complex cloud environments, setting up long-term roles with access often requ

![ARN](/img/arn.png)

- 3. **Grant Temporary Access via AWS CLI**
- Use the AWS CLI to generate the STS token for temporary access. This action uses the permissions available in your local AWS CLI environment. Alternatively, you can temporarily assume a predefined role:
+ 3. **Grant Temporary Access via AWS CLI or a third party tool**

+ You can generate a session token using the AWS CLI for a user or a role, or a third-party tool such as HashiCorp Vault:

+ ![Configure Token](/img/configure_token.png)

+ A session token can be generated from a user. This action uses the permissions available in your local AWS CLI environment:
+ ![Session](/img/sts_session_token.png)

+ A session token can also be generated from a role:
+ ![Role](/img/sts_role_token.png)

+ Or a third party tool such as HashiCorp Vault:
+ ![Third party tool](/img/sts_third_part_token.png)

- ![Assume Role](/img/assume-role.png)
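
For reference, both kinds of token can be generated from the terminal. A minimal AWS CLI sketch (the account ID, role name and session name are placeholders; durations are in seconds):

```bash
# Session token from the IAM user credentials configured in your local AWS CLI.
aws sts get-session-token --duration-seconds 3600

# Session token from a predefined role you are allowed to assume.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/CadoTemporaryAccessRole \
  --role-session-name cado-import \
  --duration-seconds 3600
```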

### Setting the Session Duration

2 changes: 1 addition & 1 deletion docs/cado/deploy/gcp/gcp-deploy.md
@@ -1,5 +1,5 @@
---
- title: GCP Terraform deployment
+ title: GCP Full Terraform deployment
hide_title: true
sidebar_position: 1
---
2 changes: 1 addition & 1 deletion docs/cado/deploy/gcp/gcp-nfs.md
@@ -6,7 +6,7 @@ sidebar_position: 9

# NFS

- The initial deployment deployment deploys without a Network File Share (NFS). Enabling an NFS allows Cado to keep a copy of every file processed on disk. This enables the re-running of analysis and the downloading of the original file in the UI for further analysis.
+ The initial minimal deployment deploys without a Network File Share (NFS). Enabling an NFS allows Cado to keep a copy of every file processed on disk. This enables the re-running of analysis and the downloading of the original file in the UI for further analysis.

### Prerequisites

109 changes: 109 additions & 0 deletions docs/cado/deploy/gcp/gcp-quickstart-deployment-guide.md
@@ -0,0 +1,109 @@
---
title: Minimal Terraform Deployment Guide
hide_title: true
sidebar_position: 1
---

# Minimal Terraform Deployment Guide

This guide provides step-by-step instructions for deploying a Cado instance with a minimal Terraform deployment, aimed at helping you get up and running with the platform as quickly as possible.

The initial deployment offers a basic working environment; however, certain functionalities are not included. Refer to the ‘Extensions’ section for details on missing features and instructions on how to add them.

### Prerequisites

- Clone the repo https://github.com/cado-security/Deployment-Templates
- Install Terraform locally
- Install and authenticate with the gcloud CLI

## Initial Deployment

**Clone and Enter the directory:**
https://github.com/cado-security/Deployment-Templates/blob/main/minimum_deployments/gcp
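
For example:

```bash
# Clone the templates repository and move into the GCP minimal deployment directory.
git clone https://github.com/cado-security/Deployment-Templates.git
cd Deployment-Templates/minimum_deployments/gcp
```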

**If not using a Service account JSON:**
1. Install the gcloud CLI: https://cloud.google.com/sdk/docs/install

2. Authenticate with: `gcloud init`

Run `terraform init` inside the directory.

### Variables:
There are 3 required variables:
- **project_id** - This is the GCP project you want to deploy into
- **region** - (us-central1 / us-east1)
- **bucket** - The bucket CADO will use for evidence preservation. Needs to be in the same project

**Optional variables:**
- **credentials_file** - A Service account JSON key file. Use this if you are not authenticating via the gcloud CLI
- **gcp_image** - Terraform will automatically select the latest image; to deploy a specific version of CADO, pass the global image link from the CADO updates [JSON](https://cado-public.s3.amazonaws.com/cado_updates_json_v2.json)
- **source_ip** - The IP address to whitelist for port 443 access to CADO. Your own IP is selected automatically if left empty
- **public_ip** - By default True. Set to False if you do not want a Public IP on the instance

**Network Variables:**
- **network_name** - VPC network name. Leave blank to use default
- **subnetwork_name** - Automatically determined unless specified
- **service_account_email** - To specify an already created service account email. Terraform will create one if left empty

### To confirm what will be deployed:

> **Note:** Terraform is case sensitive. Confirm the project and other variables are in the correct case.

`terraform plan -var bucket=YOUR_BUCKET -var project_id=YOUR_PROJECT_ID -var region=DEPLOY_REGION`

You should see “Plan: 7 to add, 0 to change, 0 to destroy.”

### To deploy

`terraform apply -var bucket=YOUR_BUCKET -var project_id=YOUR_PROJECT_ID -var region=DEPLOY_REGION`
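
If you prefer not to repeat the `-var` flags, the same values can be supplied through Terraform's standard `TF_VAR_` environment variables. A sketch with placeholder values:

```bash
export TF_VAR_bucket=YOUR_BUCKET
export TF_VAR_project_id=YOUR_PROJECT_ID
export TF_VAR_region=us-central1
terraform plan   # should report "Plan: 7 to add, 0 to change, 0 to destroy."
terraform apply
```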

## Configure import sources

The initial deployment has the minimum set of permissions required to run the Cado platform, but not to access the different data sources you might want to import from. Until you add roles that give it the permission to capture data from a cloud environment or XDR, you will be limited to the Cado Host and URL import options which don’t require permissions beyond what the platform was deployed with.

> **Note:** Some import types are also restricted when using local workers. See the Workers section below for more details.

This [link](https://docs.cadosecurity.com/cado/deploy/cross/adding-gcp) will give details on how to configure each import source.

### Prerequisites
- Ability to update IAM Roles

### Instructions

1. Add the necessary permissions to the Cado role

```json
// GCP Project Access
"resourcemanager.projects.get",
"compute.projects.get",

// Instance Acquisition
"cloudbuild.builds.get",
"cloudbuild.builds.create",
"compute.disks.get",
"compute.disks.use",
"compute.disks.list",
"compute.disks.useReadOnly",
"compute.globalOperations.get",
"compute.images.create",
"compute.instances.get",
"compute.instances.list",
"compute.images.delete",
"compute.images.get",
"compute.instances.getSerialPortOutput"
```
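
One way to apply these from the command line is with `gcloud`. This is a sketch only: it assumes the permissions live in a custom role (the role ID `CadoResponseRole` here is a placeholder) rather than a predefined one, so adjust it to however your Cado role was created.

```bash
# Adds the import-related permissions to an existing custom role in your project.
gcloud iam roles update CadoResponseRole --project YOUR_PROJECT_ID \
  --add-permissions resourcemanager.projects.get,compute.projects.get,cloudbuild.builds.get,cloudbuild.builds.create,compute.disks.get,compute.disks.use,compute.disks.list,compute.disks.useReadOnly,compute.globalOperations.get,compute.images.create,compute.instances.get,compute.instances.list,compute.images.delete,compute.images.get,compute.instances.getSerialPortOutput
```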
2. Add the Account details to Cado

a. Go to **Settings** > **Accounts** and click on **Create an Account**

![Accounts](/img/gcp-empty-account.png)

3. Select **GCP** as the provider

![Provider](/img/gcp-provider-select.png)

4. Verify that the account health check passes




2 changes: 1 addition & 1 deletion docs/cado/deploy/gcp/gcp-secret-manager.md
@@ -6,7 +6,7 @@ sidebar_position: 10

# Secret Manager

- The initial deployment stores the key used to encrypt secrets in Cado locally on the machine. Enabling a Secret Manager allows Cado to instead store the key in GCP Secret Manager.
+ The initial minimal deployment stores the key used to encrypt secrets in Cado locally on the machine. Enabling a Secret Manager allows Cado to instead store the key in GCP Secret Manager.

### Prerequisites

2 changes: 1 addition & 1 deletion docs/cado/deploy/gcp/gcp-workers.md
@@ -6,7 +6,7 @@ sidebar_position: 11

# Workers

- The initial deployment runs everything on a single Compute instance. In order to limit load on this instance and ensure the platform remains stable we limit types of imports that can be run to those based around Cado Host captures and small artefacts stored in GCS. We also limit how many pieces of evidence can be processed at once.
+ The initial minimal deployment runs everything on a single Compute instance. In order to limit load on this instance and ensure the platform remains stable we limit types of imports that can be run to those based around Cado Host captures and small artefacts stored in GCS. We also limit how many pieces of evidence can be processed at once.

To enable processing data from all sources or to process many items of evidence at once, Cado must be configured to allow it to run imports on additional Compute instances.

10 changes: 9 additions & 1 deletion docs/cado/discovery-import/openshift.md
@@ -24,4 +24,12 @@ oc exec pod-name -c container-name -- /tmp/cado-host/cado-host --presigned_data

Replace `pod-name`, `container-name`, and `--presigned_data` with the relevant values from your setup.

![OpenShift](/img/openshift.png)

## Red Hat OpenShift Service on AWS (ROSA)
Red Hat OpenShift Service on AWS (ROSA) runs on Amazon Elastic Compute Cloud (EC2) instances. ROSA is a managed service that uses EC2 to deploy, scale, and build containerized applications.

This means that you can import data from ROSA by importing EC2 instances as usual. For more information, see [How to Import Data from AWS EC2](/cado/discovery-import/aws/aws-ec2.md).
Most clusters run on containerd, which can limit the data that can be collected from inside containers compared with Docker.

The method above for OpenShift should work for ROSA as well, and oc commands can be used to execute the script on the desired container within ROSA after [logging in](https://docs.openshift.com/rosa/rosa_install_access_delete_clusters/rosa-sts-accessing-cluster.html).
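
Putting this together, a sketch of the collection flow on ROSA (the token, server URL, pod and container names, and the presigned data value are all placeholders from your own environment):

```bash
# Log in to the ROSA cluster, then run Cado Host inside the target container.
oc login --token=<API_TOKEN> --server=https://api.<CLUSTER_DOMAIN>:6443
oc exec pod-name -c container-name -- /tmp/cado-host/cado-host --presigned_data '<PRESIGNED_DATA>'
```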
2 changes: 1 addition & 1 deletion docs/cado/investigate/detections.md
@@ -6,7 +6,7 @@ sidebar_position: 9

# How to Add Additional Detections to the Cado Platform

- The Cado platform allows you to integrate with various systems and incorporate custom Indicators of Compromise (IOCs). You can configure these settings by navigating to **Settings > General Settings > Detection**.
+ The Cado platform allows you to integrate with various systems and incorporate custom Indicators of Compromise (IOCs). You can configure these settings by navigating to **Settings > General Settings > Intelligence**.

### VirusTotal API Key
