Releases · cloudposse-archives/terraform-aws-codefresh-backing-services
Added S3 bucket for backups
0.7.0 Disable TLS by default
What
- Disable TLS for DocumentDB
Why
- Codefresh does not support TLS connections to MongoDB
Migration note
Note that this change requires a reboot of the DocumentDB instances:
after `terraform apply`, the change takes effect at the next maintenance window,
unless you reboot the instances manually.
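For context, DocumentDB controls TLS through a cluster parameter group rather than on the cluster itself. A minimal sketch of the relevant Terraform, with illustrative names rather than this module's actual internals:
```
# Hypothetical sketch: the "tls" parameter of a DocumentDB cluster
# parameter group controls TLS; the names here are illustrative.
resource "aws_docdb_cluster_parameter_group" "default" {
  family = "docdb3.6"
  name   = "codefresh-docdb"

  parameter {
    name  = "tls"
    value = "disabled"
  }
}
```
Because `tls` is a static parameter, DocumentDB only picks up the new value when the instances reboot, which is why the migration note above applies.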
0.5.0 Fix DocumentDB cluster and reader DNS names
what
- Fix DocumentDB cluster and reader DNS names
0.4.0 Add Amazon DocumentDB
what
- Add Amazon DocumentDB
why
- Provision Amazon DocumentDB for Codefresh Enterprise and use it instead of MongoDB
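For a rough picture of what gets provisioned, a minimal DocumentDB cluster in Terraform looks like the sketch below; the identifiers, credentials, and instance class are illustrative assumptions, not the module's defaults:
```
# Hypothetical sketch of a minimal DocumentDB cluster; all values
# are illustrative assumptions.
resource "aws_docdb_cluster" "default" {
  cluster_identifier  = "codefresh-docdb"
  master_username     = "masteruser"
  master_password     = "CHANGE-ME"
  skip_final_snapshot = true
}

resource "aws_docdb_cluster_instance" "default" {
  count              = 2
  identifier         = "codefresh-docdb-${count.index}"
  cluster_identifier = "${aws_docdb_cluster.default.id}"
  instance_class     = "db.r4.large"
}
```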
0.3.0: Added ACM (#3)
* Added ACM
0.2.0
0.1.0: Initial Implementation (#1)
* Add basic main.tf
This will hold our global config to be used in other resources
* Add default global variables
* Add Postgres Aurora
This commit has largely been lifted from our backing services module for
Postgres Aurora. It creates a Postgres Aurora cluster and writes the
outputs of that to SSM. If no master username / password is given, these
are generated as random strings and saved to SSM, using SecureString
where necessary. If no postgres_name is given, it defaults to the
terraform-null-label outputted id.
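A sketch of that generate-and-store pattern, with hypothetical variable and parameter names (the module's real internals may differ):
```
# Hypothetical sketch: generate a master password when none is given
# and store it in SSM as a SecureString; names are illustrative.
variable "postgres_admin_password" {
  default = ""
}

resource "random_string" "master_password" {
  count   = "${length(var.postgres_admin_password) == 0 ? 1 : 0}"
  length  = 16
  special = false
}

resource "aws_ssm_parameter" "master_password" {
  name  = "/codefresh/postgres/master_password"
  type  = "SecureString"
  value = "${length(var.postgres_admin_password) == 0 ? join("", random_string.master_password.*.result) : var.postgres_admin_password}"
}
```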
* Add VPC backing-services
This is part of terraform-root-modules backing service but I think we
will likely want to create the VPC outside of this module and pass it
in. I’ve left it in for now but we can take it out later.
* Add Postgres read replica
* Add Elasticache Redis backing service
* Add Kops metadata module
This is used to pull information from our kops cluster by doing a load of
data lookups based on some tags. We can then pass things like the kops node
security groups around to be used by other modules.
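A rough sketch of the tag-based lookup idea; the tag values follow kops naming conventions but are assumptions, not the metadata module's actual filters:
```
# Hypothetical sketch: find the kops nodes security group by the tags
# kops applies; "cluster_name" is an assumed variable.
variable "cluster_name" {}

data "aws_security_group" "kops_nodes" {
  tags = {
    KubernetesCluster = "${var.cluster_name}"
    Name              = "nodes.${var.cluster_name}"
  }
}

output "nodes_security_group_id" {
  value = "${data.aws_security_group.kops_nodes.id}"
}
```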
* Add AmazonMQ backing service for CodeFresh
This commit adds ActiveMQ broker and config for CodeFresh Enterprise. An
admin user is created and credentials stored encrypted in SSM. A DNS
hostname is created for the ActiveMQ endpoints.
Note that unlike the other backing modules in here, AmazonMQ resources
are not currently their own module. There are only a handful of
resources for this AmazonMQ stuff but we can pull it out into a module
if we so wish.
The default ActiveMQ config [1] has been added. This is optional in
`aws_mq_broker`, but because Terraform does not support conditional blocks
beyond a basic count, it is a pain to add it conditionally (a sketch
follows below the links). The schema can be found at [2].
[1] http://svn.apache.org/repos/asf/activemq/trunk/assembly/src/release/conf/activemq.xml
[2] https://s3-us-west-2.amazonaws.com/amazon-mq-docs/XML/amazon-mq-active-mq-5.15.0.xsd
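A stripped-down sketch of how the broker and configuration pair up; the engine version, instance type, and file path are assumptions:
```
# Hypothetical sketch: an ActiveMQ broker wired to a configuration
# holding the default activemq.xml (assumed to sit in this module).
variable "security_group_id" {}
variable "subnet_id" {}

resource "aws_mq_configuration" "default" {
  name           = "codefresh-activemq"
  engine_type    = "ActiveMQ"
  engine_version = "5.15.0"
  data           = "${file("${path.module}/activemq.xml")}"
}

resource "aws_mq_broker" "default" {
  broker_name        = "codefresh"
  engine_type        = "ActiveMQ"
  engine_version     = "5.15.0"
  host_instance_type = "mq.t2.micro"
  security_groups    = ["${var.security_group_id}"]
  subnet_ids         = ["${var.subnet_id}"]

  configuration {
    id       = "${aws_mq_configuration.default.id}"
    revision = "${aws_mq_configuration.default.latest_revision}"
  }

  user {
    username = "admin"
    password = "CHANGE-ME-12chars"  # AmazonMQ requires 12+ characters
  }
}
```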
* Add docs
* Remove RDS Aurora Postgres replica
* Remove VPC and subnet modules
We will be injecting these resources into an existing VPC
* Remove need for kops metadata
* Move AmazonMQ into own module
* Add EFS
* Drop Redis Elasticache version to 3.2.6
3.2.10 doesn’t support encryption, see below:
```
Error creating Elasticache Replication Group: InvalidParameterCombination: Encryption features are not supported for engine version 3.2.10. Please use engine version 3.2.6
```
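In Terraform terms, the failing combination and the fix look roughly like this; the identifiers and node type are illustrative:
```
# Hypothetical sketch: the encryption flags require engine 3.2.6;
# 3.2.10 rejects them with the error above. Values are illustrative.
resource "aws_elasticache_replication_group" "default" {
  replication_group_id          = "codefresh-redis"
  replication_group_description = "Redis for Codefresh backing services"
  node_type                     = "cache.t2.micro"
  number_cache_clusters         = 1
  engine_version                = "3.2.6"
  at_rest_encryption_enabled    = true
  transit_encryption_enabled    = true
}
```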
* Move aws_mq_broker users into module
These should live in the module that calls this module; however, there is
a Terraform bug [1] which means that passing the list of user maps fails
when a local value is used.
From the calling module, the below does work:
```
users = [{
  "username"       = "admin"
  "password"       = "defaultpassword"
  "groups"         = ["admin"]
  "console_access" = true
}]
```
however, using locals as we want to, it does not:
```
users = [{
  "username"       = "${local.admin_user}"
  "password"       = "${local.admin_password}"
  "groups"         = ["admin"]
  "console_access" = true
}]
```
[1] https://github.com/hashicorp/terraform/issues/12263
* Update docs
* Remove deprecated mq_broker_name variable
* Pin aws-mq-broker module to 0.1.0 release
* Add global enabled variable for whole module
We can toggle individual components of the backing services by enabling
a specific service, or using the global flag to enable/disable all.
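A sketch of the flag pattern in Terraform 0.11-era syntax; the variable names are assumptions approximating the module's:
```
# Hypothetical sketch: a service is created only when both the global
# flag and its own flag are "true" (0.11-era string booleans).
variable "enabled" {
  default = "true"
}

variable "redis_enabled" {
  default = "true"
}

locals {
  redis_enabled = "${var.enabled == "true" && var.redis_enabled == "true" ? "true" : "false"}"
}
```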
* Add S3 bucket to CodeFresh backing services.
* Rename node_security_groups to security_groups
These may be the security groups of your kops nodes, or they may not, so
be more generic.
* Add usage to README
* Pass only 1 or 2 subnets to mq.tf
Running ActiveMQ in ACTIVE_STANDBY_MULTI_AZ mode requires that you supply
exactly 2 subnets; any more and the resource will complain. Similarly, you
must pass a single subnet if running in SINGLE_INSTANCE mode.
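One way to express that constraint, sketched under the assumption that the module exposes a `deployment_mode` variable mirroring the AWS values:
```
# Hypothetical sketch: trim the subnet list to what the deployment
# mode allows before handing it to the broker (0.11-era syntax).
variable "subnet_ids" {
  type = "list"
}

variable "deployment_mode" {
  default = "SINGLE_INSTANCE"
}

locals {
  mq_subnet_ids = "${slice(var.subnet_ids, 0, var.deployment_mode == "ACTIVE_STANDBY_MULTI_AZ" ? 2 : 1)}"
}
```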
* Actually use postgres_db_name if we pass it in
* Add full example
* Remove postgres_name variable
This should just be var.name as the postgres name is computed in the
RDS cluster module using labels.
* Pin mq broker module to latest 0.2.0 release
* Remove redis_name as this is calculated in module
* Update Redis variable descriptions
* The SSM parameter `overwrite` flag is expected to be a boolean
* Bump AmazonMQ default instance type
mq.t2.micro is likely not what you want in production.
* Remove null-label since not being used anymore
It is still in use in all the submodules we call, and we delegate naming
down to them.
* Bump aws-efs module
To ensure we can use enabled flags on it
* Bump aws-s3-bucket to 0.1.0
Since we have the initial PR merged and have cut a release
* Remove aws-mq-broker enabled flags
Bump aws-mq-broker to 0.3.0, which does not feature enabled flags, so
remove them.
In its current incarnation, terraform-aws-mq-broker does not support an
`enabled` variable allowing for conditional creation of the resources in
the module; see [1] for more context.
[1] https://github.com/cloudposse/terraform-aws-mq-broker/pull/4
* Add optional EFS VPC and subnet_id variables
This commit adds variables to override which VPC and subnets EFS runs in.
If you don’t provide them, it defaults back to the `var.vpc_id` and
`var.subnet_ids` values (a sketch of the fallback follows at the end of
this note).
During testing of this module, we found something interesting. We were
deploying EFS to the backing services VPC, which is a different VPC to
the k8s cluster. Our pods were unable to resolve the DNS endpoint of the
EFS cluster, despite there being VPC peering between the two, with DNS
lookups between them enabled. AWS documentation [1] states that in this
scenario you have a couple of options:
1) Use the EFS IP address directly (no thanks)
2) Create a DNS entry in Route53 CNAME’ing to the EFS DNS endpoint in
a private hosted zone
The underlying EFS module does already create a Route53 DNS entry CNAME
to the EFS DNS endpoint, but it isn’t in a private zone. The pod could
not resolve the Route53 DNS. Deploying EFS into the _same_ VPC as the
k8s nodes worked a treat and finally the pods were able to resolve and
mount EFS volumes.
[1] https://docs.aws.amazon.com/efs/latest/ug/manage-fs-access-vpc-peering.html
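A sketch of the fallback mentioned at the top of this note; the `efs_vpc_id` / `efs_subnet_ids` names are assumptions:
```
# Hypothetical sketch: fall back to the module-wide VPC and subnets
# when no EFS-specific overrides are given. The join/split dance is
# the usual 0.11 workaround for conditionals that return lists.
variable "vpc_id" {}

variable "subnet_ids" {
  type = "list"
}

variable "efs_vpc_id" {
  default = ""
}

variable "efs_subnet_ids" {
  type    = "list"
  default = []
}

locals {
  efs_vpc_id     = "${length(var.efs_vpc_id) > 0 ? var.efs_vpc_id : var.vpc_id}"
  efs_subnet_ids = "${split(",", length(var.efs_subnet_ids) > 0 ? join(",", var.efs_subnet_ids) : join(",", var.subnet_ids))}"
}
```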
* Fix typos
* Fix typos
* Remove EFS + AmazonMQ from CodeFresh services
CodeFresh Enterprise is not compatible with AmazonMQ at the protocol
level, and EFS will be moved into a general Kops backing service.
* Remove Terraform glue variables
These should be in the caller module, not this generic module.
* Update docs and pin example modules
* Update docs to remove TODO and add note on enabled