We rely on Terraform workspaces to create separate environments. This requires a corresponding naming convention for resources: resource names should have the prefix `${var.project_name}-${terraform.workspace}-`. For example:
resource "aws_lambda_layer_version" "lambda_layer" {
layer_name = "${var.project_name}-${terraform.workspace}-lambda-layer"
filename = data.archive_file.lambda_layer_archive.output_path
source_code_hash = data.archive_file.lambda_layer_archive.output_base64sha256
compatible_runtimes = ["python3.7", "python3.8"]
}
Generally, I suggest using kebab case, since it makes it easier to distinguish project resources from resources created by AWS (especially policies and roles), which use Pascal case.
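The same convention applies to IAM resources. As a hypothetical sketch (this role is not part of the snippet above), an execution role for the Lambdas might look like this:

```hcl
resource "aws_iam_role" "lambda_role" {
  # Kebab-case, prefixed name, e.g. "my-project-dev-lambda-role".
  name = "${var.project_name}-${terraform.workspace}-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}
```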
To deploy, initialize Terraform and list the existing workspaces:
```sh
$ cd terraform/
$ terraform init -backend-config=backend.tfvars
$ terraform workspace list
```
If workspaces such as `dev`, `stage`, or `prod` already exist, select the one you need:
```sh
$ terraform workspace select YOUR_ENV
```
If there is only the `default` workspace, create a new one:
```sh
$ terraform workspace new YOUR_ENV
```
Finally, plan and apply:
```sh
$ terraform plan -var-file=prod.tfvars
$ terraform apply -var-file=prod.tfvars
```
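The contents of these variable files are not shown here. Purely as an illustration (assuming an S3 backend and the `project_name` input variable used above; the other names are made up), they might look like this:

```hcl
# backend.tfvars -- partial backend configuration (assumes an S3 backend)
bucket = "my-project-terraform-state"
key    = "terraform.tfstate"
region = "us-east-1"

# prod.tfvars -- per-environment input variables (illustrative names)
project_name = "my-project"
aws_region   = "us-east-1"
```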
We use GitHub Actions to run CI/CD workflows. To test and deploy Lambdas together with the rest of the infrastructure, we need just one workflow with two jobs:
- Test the Lambdas and make sure that not only do all tests pass, but the required test coverage is also achieved.
- Plan and apply changes to the infrastructure with Terraform. The main idea is to only run `terraform plan` and present the upcoming changes when it is a PR, and to run `terraform apply` only when that PR is approved and merged to the `stage` or `master` (`main`) branch.
For this setup to work, you should have a corresponding branch for each environment. For example:
- master -> prod
- staging -> stage
- dev -> dev
Here we assume that you have `prod` and `stage` environments, but you can add `dev` in the same way. The main idea is to map each environment to the corresponding Terraform workspace described above.
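On the Terraform side, one common (but here hypothetical) way to express this mapping is a lookup keyed by `terraform.workspace`, so each workspace picks up its own settings:

```hcl
locals {
  # Hypothetical per-environment settings keyed by workspace name.
  env_config = {
    prod  = { lambda_memory_size = 512, log_retention_days = 90 }
    stage = { lambda_memory_size = 256, log_retention_days = 30 }
    dev   = { lambda_memory_size = 128, log_retention_days = 7 }
  }

  # Fails at plan time if the selected workspace has no entry above.
  config = local.env_config[terraform.workspace]
}
```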
Our CI/CD process assumes that you are using a self-hosted runner on an EC2 instance. This approach has several benefits compared to GitHub-hosted runners. First, you are not limited by the free minutes that GitHub provides; usually a `t3.micro` instance is enough, and it won't cost much, or will even be free since it is eligible for the free tier. Second, this approach is more secure: you do not need to store access secrets or other sensitive information in GitHub Secrets. You only need to attach a corresponding role to the EC2 instance, which should allow it to assume a more powerful role.
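As a minimal Terraform sketch of that setup (all names are hypothetical, and `aws_iam_role.deployer` is assumed to be the more powerful deployment role defined elsewhere):

```hcl
# Role attached to the runner EC2 instance; it holds no deployment permissions itself.
resource "aws_iam_role" "runner" {
  name = "${var.project_name}-${terraform.workspace}-runner-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Allow the runner role to assume the more powerful deployment role.
resource "aws_iam_role_policy" "runner_assume_deployer" {
  name = "${var.project_name}-${terraform.workspace}-runner-assume-deployer"
  role = aws_iam_role.runner.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "sts:AssumeRole"
      Resource = aws_iam_role.deployer.arn # hypothetical role with deployment permissions
    }]
  })
}

resource "aws_iam_instance_profile" "runner" {
  name = "${var.project_name}-${terraform.workspace}-runner-profile"
  role = aws_iam_role.runner.name
}
```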
Self-hosted runners are easy to set up. Just follow these guides:
- https://docs.github.com/en/actions/hosting-your-own-runners/adding-self-hosted-runners
- https://docs.github.com/en/actions/hosting-your-own-runners/configuring-the-self-hosted-runner-application-as-a-service
The Lambda module has the following structure:
```
lambda
|-- src
|   `-- lambda.py
|-- tests
|   `-- test_lambda.py
|-- .gitignore
`-- requirements.txt
```
`lambda.py` and `test_lambda.py` are placeholders for the actual Lambdas and their tests.
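As a hedged sketch (the function name, handler, runtime, source path, and role reference below are assumptions rather than this repo's actual values), the module's source could be packaged and deployed like this:

```hcl
# Package the Lambda source directory into a zip archive.
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "${path.module}/lambda/src"
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "lambda" {
  function_name    = "${var.project_name}-${terraform.workspace}-lambda"
  filename         = data.archive_file.lambda.output_path
  source_code_hash = data.archive_file.lambda.output_base64sha256
  handler          = "lambda.handler" # assumes src/lambda.py defines handler(event, context)
  runtime          = "python3.8"
  role             = aws_iam_role.lambda_role.arn # hypothetical execution role

  # Reuse the shared layer defined earlier in this section.
  layers = [aws_lambda_layer_version.lambda_layer.arn]
}
```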