---
title: Atlassian Apps - Jira, Confluence, Crowd and GlusterFS
last_reviewed_on: 2025-01-17
review_in: 12 months
weight: 159
---

# <%= current_page.data.title %>

We have deployed the Atlassian apps Jira, Confluence and Crowd on the staging environment. The [repository](https://github.com/hmcts/atlassian-infrastructure) contains all the IaC as well as the automation scripts.

Our goal was to create a replica of the production Atlassian environment on the staging environment.

This documentation provides a detailed outline of the procedure, offering guidance for anyone tasked with rolling out the Atlassian apps from scratch.

## Prerequisites

The Atlassian production environment is deployed on the [MOJ DCD Atlassian LVE](https://portal.azure.com/#@HMCTS.NET/resource/subscriptions/79898897-729c-41a0-a5ca-53c764839d95/overview) subscription and the staging environment is deployed on the [MOJ DCD Atlassian NLE](https://portal.azure.com/#@HMCTS.NET/resource/subscriptions/b7d2bd5f-b744-4acc-9c73-e068cec2e8d8/overview) subscription, so make sure you have Contributor access to both subscriptions.
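
To confirm that you can see both subscriptions before starting, you can list them by ID with the Azure CLI (the IDs are the ones linked above; the name column will show whatever the portal shows):

```shell
# List the two Atlassian subscriptions referenced above by their IDs
az account list -o table \
  --query "[?id=='79898897-729c-41a0-a5ca-53c764839d95' || id=='b7d2bd5f-b744-4acc-9c73-e068cec2e8d8'].{Name:name, Id:id}"
```

Both subscriptions should appear in the output; if one is missing, request access before continuing.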

## Existing Production Setup

Please take a look at this [documentation](https://tools.hmcts.net/confluence/pages/viewpage.action?spaceKey=DTSPO&title=ODP+0002+-+Atlassian+Interim+Hosting+Deployment+Approach) on Confluence, which has a detailed list of the VMs, the existing setup and a diagram.

There are 9 VMs in total in the production environment: 3 for Jira, 2 for Confluence, 1 for Crowd and 3 for GlusterFS.

All the VMs are in the resource group [RG-PRD-ATL-INT-01](https://portal.azure.com/#@HMCTS.NET/resource/subscriptions/79898897-729c-41a0-a5ca-53c764839d95/resourceGroups/RG-PRD-ATL-INT-01/overview).

#### Jira VMs
* PRDATL01AJRA01.cp.cjs.hmcts.net
* PRDATL01AJRA02.cp.cjs.hmcts.net
* PRDATL01AJRA03.cp.cjs.hmcts.net

#### Confluence VMs
* PRDATL01ACNF02.cp.cjs.hmcts.net
* PRDATL01ACNF04.cp.cjs.hmcts.net

#### Crowd VM
* PRDATL01ACRD01.cp.cjs.hmcts.net

#### GlusterFS VMs
* PRDATL01DGST01.cp.cjs.hmcts.net
* PRDATL01DGST02.cp.cjs.hmcts.net
* PRDATL01DGST03.cp.cjs.hmcts.net

The database is in the [RG-PRD-ATL-01](https://portal.azure.com/#@HMCTS.NET/resource/subscriptions/79898897-729c-41a0-a5ca-53c764839d95/resourceGroups/RG-PRD-ATL-01) resource group.

#### Postgres Single Server
ps-prd-atl-dpdb

Please note that there are other important resources, such as Application Gateways, Load Balancers, NSGs and VNETs, which are not listed here.
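
To get a quick inventory of everything in the production resource group, you can list the resources with the Azure CLI (assuming you are logged in with access to the LVE subscription):

```shell
az account set --subscription 79898897-729c-41a0-a5ca-53c764839d95

# List every resource in the production resource group, sorted by type
az resource list --resource-group RG-PRD-ATL-INT-01 \
  --query "sort_by([].{Name:name, Type:type}, &Type)" -o table
```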

## Approach for Staging deployment

We decided to use the same setup as production for the staging environment. We first used backup copies of the VMs above to deploy the VMs on staging, then restored a backup of the production database to a new staging single server.

To restore the VMs from backup and give them new names, we used the following script:

```bash
az account set --subscription 79898897-729c-41a0-a5ca-53c764839d95

TargetSubnetName=<TargetSubnetName> # e.g. "atlassian-int-subnet-dat"
TargetVmName=<TargetVmName>         # e.g. "atlassian-nonprod-gluster-03"
ItemName=<ItemName>                 # e.g. "PRDATL01DGST03.cp.cjs.hmcts.net"

VaultName="BK-PRD-ATL-INT-01"
SourceResourceGroup="RG-PRD-ATL-INT-01"
SourceSubscription="79898897-729c-41a0-a5ca-53c764839d95"
TargetResourceGroup="atlassian-nonprod-rg"
TargetSubscription="b7d2bd5f-b744-4acc-9c73-e068cec2e8d8"
StorageAccountName="atlassiannonprod"
TargetVNetName="atlassian-int-nonprod-vnet"

# To list the backup containers and their friendly names:
# az backup container list --resource-group $SourceResourceGroup --vault-name $VaultName --backup-management-type AzureIaasVM --query '[].{Name:name, ItemName:properties.friendlyName}' -o table

ContainerName=$(az backup container list --resource-group "$SourceResourceGroup" --vault-name "$VaultName" --backup-management-type AzureIaasVM --query "[?properties.friendlyName=='$ItemName'].{Name:name}" -o tsv)

# To list all recovery points for the item:
# az backup recoverypoint list --vault-name $VaultName --resource-group $SourceResourceGroup --container-name $ContainerName --item-name $ItemName --query '[].{Name:properties.recoveryPointTime, ID:name}' -o table

# Pick the most recent recovery point
RecoveryPointName=$(az backup recoverypoint list --vault-name "$VaultName" --resource-group "$SourceResourceGroup" --container-name "$ContainerName" --item-name "$ItemName" --query '[0].name' -o tsv)

echo "$RecoveryPointName"

az backup restore restore-disks \
  --resource-group "$SourceResourceGroup" \
  --vault-name "$VaultName" \
  --item-name "$ItemName" \
  --rp-name "$RecoveryPointName" \
  --storage-account "$StorageAccountName" \
  --restore-to-staging-storage-account true \
  --target-resource-group "$TargetResourceGroup" \
  --target-subscription-id "$TargetSubscription" \
  --target-vm-name "$TargetVmName" \
  --target-vnet-name "$TargetVNetName" \
  --target-subnet-name "$TargetSubnetName" \
  --target-vnet-resource-group "$TargetResourceGroup" \
  --container-name "$ContainerName" \
  --subscription "$SourceSubscription" \
  --storage-account-resource-group "$TargetResourceGroup"
```

The script above triggers a restore of the VM from its backup and creates a new VM in the staging environment. You can see the progress of the restore in the [backup vault](https://portal.azure.com/#view/Microsoft_Azure_DataProtection/V1JobsListBlade/vaultId/%2Fsubscriptions%2F79898897-729c-41a0-a5ca-53c764839d95%2FresourceGroups%2FRG-PRD-ATL-INT-01%2Fproviders%2FMicrosoft.RecoveryServices%2Fvaults%2FBK-PRD-ATL-INT-01/status/InProgress).

Please note that each restore is a separate job, so it is fine to run restores for multiple VMs at the same time.
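
The same job status is also available from the CLI, which is handy when several restores are running at once; for example:

```shell
# Show in-progress jobs in the production Recovery Services vault
az backup job list \
  --resource-group RG-PRD-ATL-INT-01 \
  --vault-name BK-PRD-ATL-INT-01 \
  --status InProgress \
  --query '[].{Operation:properties.operation, Item:properties.entityFriendlyName, Started:properties.startTime}' \
  -o table
```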

### Set new public key for access

Once the VMs are restored, you need to set a public key for access. You can use the following script to set the public key on the VMs:

```bash
TargetVmName=<TargetVmName> # e.g. "atlassian-nonprod-gluster-03"

az account set --subscription b7d2bd5f-b744-4acc-9c73-e068cec2e8d8
TargetSubscription="b7d2bd5f-b744-4acc-9c73-e068cec2e8d8"
KeyvaultName="atlasssian-nonprod-kv"
SecretName="public-key"
TargetResourceGroup="atlassian-nonprod-rg"

PublicKey=$(az keyvault secret show \
  --vault-name "$KeyvaultName" \
  --name "$SecretName" \
  --subscription "$TargetSubscription" \
  --query value -o tsv)

username="atlassian-admin"

# Quote the key value: it contains spaces
az vm user update \
  --resource-group "$TargetResourceGroup" \
  --name "$TargetVmName" \
  --username "$username" \
  --ssh-key-value "$PublicKey" \
  --subscription "$TargetSubscription"
```

These are the times the VM restores took when we last ran them:

<img src="images/AtlassianVMRestore.png" width="600">

### Accessing the VMs

Download the private key from the key vault and save it to a file. There are two private-key secrets in the `atlasssian-nonprod-kv` key vault, `private-key` and `test-private-key`, but they are the same key. There was a formatting issue with `private-key`, so we created the new secret `test-private-key`.
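
As a sketch, the key can be fetched straight from the key vault with the Azure CLI (vault and secret names as above; the output file name is arbitrary):

```shell
# Download the working private key and restrict its permissions
az keyvault secret show \
  --vault-name atlasssian-nonprod-kv \
  --name test-private-key \
  --query value -o tsv > atlassian-nonprod.pem

chmod 600 atlassian-nonprod.pem
```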

```bash
chmod 600 <privatekeyfilename>

ssh -i <privatekeyfilename> atlassian-admin@<VM-IP>
```

### Restore the databases

We then have to restore the databases from production to the staging environment. We used Azure's backup and restore-from-vault feature.

The production database backups are held in this Backup Vault: [backups from prod](https://portal.azure.com/#@HMCTS.NET/resource/subscriptions/79898897-729c-41a0-a5ca-53c764839d95/resourceGroups/RG-PRD-ATL-01/providers/Microsoft.DataProtection/BackupVaults/ATL-Backup-Vault/overview).

We restored the latest backup via the portal to the target server atlassian-nonprod-server.

Jobs can be [viewed here](https://portal.azure.com/#@HMCTS.NET/resource/subscriptions/79898897-729c-41a0-a5ca-53c764839d95/resourceGroups/RG-PRD-ATL-01/providers/Microsoft.DataProtection/BackupVaults/ATL-Backup-Vault/backupJobs).

Restore times from the last run:

* Crowd: 2 min
* Confluence: 1 hr 30 min
* Jira: 2 hr 5 min
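
If you prefer the CLI to the portal, the same jobs can be listed with the `dataprotection` extension (the Azure CLI prompts to install it on first use):

```shell
# List jobs in the Backup Vault that holds the Postgres backups
az dataprotection job list \
  --resource-group RG-PRD-ATL-01 \
  --vault-name ATL-Backup-Vault \
  --subscription 79898897-729c-41a0-a5ca-53c764839d95 \
  -o table
```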

### Import VMs into Terraform

Once the VMs are restored and you have access to them, you can import them into Terraform. The following PR shows the kind of changes you may have to make to import the VMs:

[Pull Request](https://github.com/hmcts/atlassian-infrastructure/pull/48)

Terraform already has the automation script which makes the config changes on the VMs. Please note that this script only runs when the script itself has changed, so make a change to the script (even a trivial one) to trigger a run.
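
The import itself follows the usual Terraform pattern. The resource address and VM name below are hypothetical; check the PR above and the codebase for the real ones:

```shell
# Hypothetical example only: the resource address and VM name must
# match what is defined in the atlassian-infrastructure code.
terraform import 'azurerm_linux_virtual_machine.gluster[2]' \
  "/subscriptions/b7d2bd5f-b744-4acc-9c73-e068cec2e8d8/resourceGroups/atlassian-nonprod-rg/providers/Microsoft.Compute/virtualMachines/atlassian-nonprod-gluster-03"
```

Run `terraform plan` afterwards to confirm the imported state matches the code.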

### Post Deployment Steps

**1.** After merging the PR above and letting the automation script run, set a recovery admin password on Crowd and make your account an admin account.

**2.** Add the password by editing `/opt/crowd/apache-tomcat/bin/setenv.sh` and appending ``` -Datlassian.recovery.password=<any password> ``` to the JVM arguments.
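
For example, the appended option might look like this (the JVM options variable name is an assumption; use whichever variable `setenv.sh` already sets):

```shell
# In /opt/crowd/apache-tomcat/bin/setenv.sh
# (CATALINA_OPTS is an assumption; match the variable the file already uses)
CATALINA_OPTS="${CATALINA_OPTS} -Datlassian.recovery.password=<any password>"
```

Restart Crowd after the change so the flag takes effect.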

**3.** Log in to Crowd with the `recovery_admin` username and the password above, and add yourself to the Administrator groups.

Your account should then sync with Jira and Confluence, and you will be able to make any administrative changes.

**4.** Change the Base URLs on all the apps to staging.tools.hmcts.net.

**5.** Change the colours of the environment to differentiate between Live and Staging.


## Troubleshooting

**1.** If you are unable to access the VMs, make sure you are connected to the F5 VPN, are using the correct private key, and that the private key is in the correct format.

**2.** If you see errors on the applications, make sure the GlusterFS shares are mounted correctly on the VMs.
e.g. `jira_shared` should be mounted at `/var/atlassian/application-data/jira/shared`

Use the `mount -a` command to mount them correctly.

There was a problem where the shares were not mounted correctly after the auto-shutdown, so we have a cron job on the staging VMs that runs the mounting script every day at 8am.
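
A minimal sketch of such a check for the Jira share (the same pattern applies to the Confluence and Crowd shares):

```shell
#!/usr/bin/env bash
# Remount the Jira shared directory from /etc/fstab if it is missing
MOUNT_POINT="/var/atlassian/application-data/jira/shared"

if ! grep -qs " $MOUNT_POINT " /proc/mounts; then
  echo "$MOUNT_POINT is not mounted; running mount -a"
  sudo mount -a
fi
```

Scheduled at 8am it would appear in the crontab as `0 8 * * * /path/to/mount-script.sh` (the script path here is hypothetical).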