Commit 1e0ea50
Add terraform examples for aws
1 parent cc91e4f commit 1e0ea50

13 files changed: +1312 / -0 lines

examples/terraform/README.md (+14)
@@ -0,0 +1,14 @@
# Terraform Playground

This repository contains a collection of Terraform configurations that we used to learn and experiment with Terraform.

## Install Terraform

Follow the [Install Terraform](https://developer.hashicorp.com/terraform/install) page to install Terraform on your machine.

## Setting up Terraform with Artifactory

The recommended way to manage Terraform state is to use a remote backend.
Some of the examples in this repository use JFrog Artifactory as the remote backend (the backend blocks are commented out).

To set up Terraform with Artifactory, follow the instructions in the [Terraform Artifactory Backend](https://jfrog.com/help/r/jfrog-artifactory-documentation/terraform-backend-repository-structure) documentation.
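For reference, a remote backend block in Terraform follows the standard `backend` syntax. The sketch below is hypothetical (the hostname, organization, and workspace prefix are placeholders, not values from this repository) and would normally stay commented out until Artifactory is configured per the linked documentation:

```hcl
terraform {
  backend "remote" {
    # Placeholder values -- replace with your Artifactory instance details
    hostname     = "myinstance.jfrog.io"
    organization = "my-org"

    workspaces {
      prefix = "jfrog-platform-"
    }
  }
}
```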
## Examples

1. Create the needed [AWS infrastructure for running JFrog Artifactory and Xray in AWS](jfrog-platform-aws-install) using RDS, S3, and EKS. This uses the [JFrog Platform Helm Chart](https://github.com/jfrog/charts/tree/master/stable/jfrog-platform) to install Artifactory and Xray.
@@ -0,0 +1,80 @@
# JFrog Platform Installation in AWS with Terraform

This example prepares the AWS infrastructure and services required to run Artifactory and Xray (installed with the [jfrog-platform Helm Chart](https://github.com/jfrog/charts/tree/master/stable/jfrog-platform)) using Terraform:

1. The AWS VPC
2. RDS (PostgreSQL) as the database for each application
3. S3 as the Artifactory object storage
4. EKS as the Kubernetes cluster for running Artifactory and Xray with pre-defined node groups for the different services

The resources are split between individual files for easy and clear separation.

## Prepare the JFrog Platform Configurations

Ensure that the AWS CLI is set up and properly configured before starting with Terraform.
A configured AWS account with the necessary permissions is required to provision and manage resources successfully.

The [jfrog-values.yaml](jfrog-values.yaml) file has the values that Helm will use to configure the JFrog Platform installation.

The [artifactory-license-template.yaml](artifactory-license-template.yaml) file has the license key(s) template that you will need to copy to an `artifactory-license.yaml` file.
```shell
cp artifactory-license-template.yaml artifactory-license.yaml
```

If you plan on skipping the license key(s) for now, you can leave the `artifactory-license.yaml` file empty. Terraform will create an empty one for you if you don't create it.

## JFrog Platform Sizing

Artifactory and Xray have pre-defined sizing templates that you can use to deploy them. The supported sizing templates in this project are `small`, `medium`, `large`, `xlarge`, and `2xlarge`.

The sizing templates are pulled from the [official Helm Charts](https://github.com/jfrog/charts) during the execution of the Terraform configuration.
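The `sizing` value passed on the command line is a Terraform input variable. A minimal sketch of how such a variable could be declared with validation (the exact declaration in this repo may differ):

```hcl
variable "sizing" {
  description = "JFrog Platform sizing template"
  type        = string
  default     = "small"

  validation {
    condition     = contains(["small", "medium", "large", "xlarge", "2xlarge"], var.sizing)
    error_message = "sizing must be one of: small, medium, large, xlarge, 2xlarge."
  }
}
```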
## Terraform

1. Initialize the Terraform configuration by running the following command:
```shell
terraform init
```

2. Plan the Terraform configuration by running the following command:
```shell
terraform plan -var 'sizing=small'
```

3. Apply the Terraform configuration by running the following command:
```shell
terraform apply -var 'sizing=small'
```

4. When you are done, you can destroy the resources by running the following command:
```shell
terraform destroy
```

## Accessing the EKS Cluster and Artifactory Installation

To get the `kubectl` configuration for the EKS cluster, run the following command:
```shell
aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)
```

### Add JFrog Helm repository

Before installing the JFrog Helm charts, you need to add the [JFrog Helm repository](https://charts.jfrog.io) to your Helm client:

```shell
helm repo add jfrog https://charts.jfrog.io
helm repo update
```

### Install JFrog Platform

Once done, install the JFrog Platform (Artifactory and Xray) using the Helm Chart with the following command.

Terraform will create the needed configuration files to be used with the `helm install` command.
This command is auto-generated and written to the console when you run the `terraform apply` command.
```shell
helm upgrade --install jfrog jfrog/jfrog-platform \
  --version <version> \
  --namespace <namespace> --create-namespace \
  -f ./jfrog-values.yaml \
  -f ./artifactory-license.yaml \
  -f ./jfrog-artifactory-<sizing>-adjusted.yaml \
  -f ./jfrog-xray-<sizing>-adjusted.yaml \
  -f ./jfrog-custom.yaml \
  --timeout 600s
```
@@ -0,0 +1,11 @@
```yaml
## A template for the Artifactory license as a helm value.
## Copy this file to artifactory-license.yaml and fill in the full license key(s).
artifactory:
  artifactory:
    license:
      licenseKey: |
        cHJvZHVjdHM6CiAgYXJ1aWZhY3Rvcnk6CiAgICBwcm9kdWN0OiBaWGh3YVhKbGN6b2dNakF5TlMx
        TFRGaFpXTmlNRGs1T0dRMVpncHZkMjVsY2p...

        cHJvZHVjdHM6CiAgYXJ0aWZhY3Rvcnk6CiAgIBBwcm9kdWN0OiBaWGh3YVhKbGN6b2dNakF5TlMv
        d05DMHdObFF5TURvMU9UbzFPVm9LYVdRNkl...
```
@@ -0,0 +1,237 @@
```hcl
# This file is used to create an AWS EKS cluster and the managed node group(s)

locals {
  cluster_name = "${var.cluster_name}-${random_pet.unique_name.id}"
}
```
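`local.cluster_name` interpolates a `random_pet` resource that is defined elsewhere in the configuration (it is not part of this file). A minimal sketch of what that resource likely looks like, with assumed attributes:

```hcl
# Hypothetical: generates the random suffix used in local.cluster_name
resource "random_pet" "unique_name" {
  length = 2
}
```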
```hcl
resource "aws_security_group_rule" "allow_management_from_my_ip" {
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "-1"
  cidr_blocks       = var.cluster_public_access_cidrs
  security_group_id = module.eks.cluster_security_group_id
  description       = "Allow all traffic from my public IP for management"
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = local.cluster_name
  cluster_version = "1.31"

  enable_cluster_creator_admin_permissions = true
  cluster_endpoint_public_access           = true
  cluster_endpoint_public_access_cidrs     = var.cluster_public_access_cidrs

  cluster_addons = {
    aws-ebs-csi-driver = {
      most_recent              = true
      service_account_role_arn = module.ebs_csi_irsa_role.iam_role_arn
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_group_defaults = {
    ami_type = "AL2_ARM_64"
    iam_role_additional_policies = {
      AmazonS3FullAccess       = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
      AmazonEBSCSIDriverPolicy = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
    }
    pre_bootstrap_user_data = <<-EOF
      # This script will run on all nodes before the kubelet starts
      echo "It works!" > /tmp/pre_bootstrap_user_data.txt
    EOF
    block_device_mappings = {
      xvda = {
        device_name = "/dev/xvda"
        ebs = {
          volume_type           = "gp3"
          volume_size           = 50
          throughput            = 125
          delete_on_termination = true
        }
      }
    }
    tags = {
      Group = var.common_tag
    }
  }

  eks_managed_node_groups = {
    artifactory = {
      name = "artifactory-node-group"

      instance_types = [(
        var.sizing == "large" ? var.artifactory_node_size_large :
        var.sizing == "xlarge" ? var.artifactory_node_size_large :
        var.sizing == "2xlarge" ? var.artifactory_node_size_large :
        var.artifactory_node_size_default
      )]
      min_size     = 1
      max_size     = 10
      desired_size = (
        var.sizing == "medium" ? 2 :
        var.sizing == "large" ? 3 :
        var.sizing == "xlarge" ? 4 :
        var.sizing == "2xlarge" ? 6 :
        1
      )
      block_device_mappings = {
        xvda = {
          device_name = "/dev/xvda"
          ebs = {
            volume_type = "gp3"
            volume_size = (
              var.sizing == "large" ? var.artifactory_disk_size_large :
              var.sizing == "xlarge" ? var.artifactory_disk_size_large :
              var.sizing == "2xlarge" ? var.artifactory_disk_size_large :
              var.artifactory_disk_size_default
            )
            iops = (
              var.sizing == "large" ? var.artifactory_disk_iops_large :
              var.sizing == "xlarge" ? var.artifactory_disk_iops_large :
              var.sizing == "2xlarge" ? var.artifactory_disk_iops_large :
              var.artifactory_disk_iops_default
            )
            throughput = (
              var.sizing == "large" ? var.artifactory_disk_throughput_large :
              var.sizing == "xlarge" ? var.artifactory_disk_throughput_large :
              var.sizing == "2xlarge" ? var.artifactory_disk_throughput_large :
              var.artifactory_disk_throughput_default
            )
            delete_on_termination = true
          }
        }
      }
      labels = {
        "group" = "artifactory"
      }
    }

    nginx = {
      name = "nginx-node-group"

      instance_types = [(
        var.sizing == "xlarge" ? var.nginx_node_size_large :
        var.sizing == "2xlarge" ? var.nginx_node_size_large :
        var.nginx_node_size_default
      )]

      min_size     = 1
      max_size     = 10
      desired_size = (
        var.sizing == "medium" ? 2 :
        var.sizing == "large" ? 2 :
        var.sizing == "xlarge" ? 2 :
        var.sizing == "2xlarge" ? 3 :
        1
      )

      labels = {
        "group" = "nginx"
      }
    }

    xray = {
      name = "xray-node-group"

      instance_types = [(
        var.sizing == "xlarge" ? var.xray_node_size_xlarge :
        var.sizing == "2xlarge" ? var.xray_node_size_xlarge :
        var.xray_node_size_default
      )]
      min_size     = 1
      max_size     = 10
      desired_size = (
        var.sizing == "medium" ? 2 :
        var.sizing == "large" ? 3 :
        var.sizing == "xlarge" ? 4 :
        var.sizing == "2xlarge" ? 6 :
        1
      )
      block_device_mappings = {
        xvda = {
          device_name = "/dev/xvda"
          ebs = {
            volume_type = "gp3"
            volume_size = (
              var.sizing == "large" ? var.xray_disk_size_large :
              var.sizing == "xlarge" ? var.xray_disk_size_large :
              var.sizing == "2xlarge" ? var.xray_disk_size_large :
              var.xray_disk_size_default
            )
            iops = (
              var.sizing == "large" ? var.xray_disk_iops_large :
              var.sizing == "xlarge" ? var.xray_disk_iops_large :
              var.sizing == "2xlarge" ? var.xray_disk_iops_large :
              var.xray_disk_iops_default
            )
            throughput = (
              var.sizing == "large" ? var.xray_disk_throughput_large :
              var.sizing == "xlarge" ? var.xray_disk_throughput_large :
              var.sizing == "2xlarge" ? var.xray_disk_throughput_large :
              var.xray_disk_throughput_default
            )
            delete_on_termination = true
          }
        }
      }
      labels = {
        "group" = "xray"
      }
    }

    ## Create an extra node group for testing
    extra = {
      name = "extra-node-group"

      instance_types = [var.extra_node_size]

      min_size     = 1
      max_size     = 3
      desired_size = var.extra_node_count

      labels = {
        "group" = "extra"
      }
    }
  }

  tags = {
    Group = var.common_tag
  }
}
```
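The repeated conditional chains above map the `sizing` value onto node counts and disk parameters. The same mapping could be expressed more compactly with `lookup` over a map; a sketch (not part of this repo) for the Artifactory node count:

```hcl
locals {
  # Equivalent to the chained conditionals in the artifactory node group:
  # medium -> 2, large -> 3, xlarge -> 4, 2xlarge -> 6, anything else -> 1
  artifactory_desired_size = lookup(
    {
      medium    = 2
      large     = 3
      xlarge    = 4
      "2xlarge" = 6
    },
    var.sizing,
    1
  )
}
```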
```hcl
# Create the gp3 storage class and make it the default
resource "kubernetes_storage_class" "gp3_storage_class" {
  metadata {
    name = "gp3"
    annotations = {
      "storageclass.kubernetes.io/is-default-class" = "true"
    }
  }
  storage_provisioner    = "ebs.csi.aws.com"
  volume_binding_mode    = "WaitForFirstConsumer"
  allow_volume_expansion = true
  parameters = {
    "fsType" = "ext4"
    "type"   = "gp3"
  }
}

module "ebs_csi_irsa_role" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  role_name             = "ebs-csi-${module.eks.cluster_name}-${var.region}"
  attach_ebs_csi_policy = true

  oidc_providers = {
    ex = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:ebs-csi-controller-sa"]
    }
  }
}
```
