
Implement CD Pipeline Using Argo for hyperswitch-helm #83

Open · wants to merge 3 commits into base: main
31 changes: 31 additions & 0 deletions .github/workflows/ArgoCD-Deploy.yml
@@ -0,0 +1,31 @@
# This is a basic workflow to help deploy an application to ArgoCD
name: ArgoCD-Deploy

# Controls when the workflow will run
on:
  workflow_dispatch:
Collaborator:
Hi @abhinavtyagiO
How are you planning to release a particular version and roll out updates via workflows? Will you be able to connect to your cluster from GitHub CI?

Author:

On changing the image in values.yml, Argo CD's auto-sync policy brings up pods with the new image based on the rolling strategy. And yes, we can connect to the cluster if it is public, or if the workflow runs on a runner that has access to the cluster.
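As a minimal sketch, the automated sync behavior referred to here is configured through the `syncPolicy` field of an Argo CD Application spec (fragment only):

```yaml
# Fragment of an Argo CD Application spec; with `automated` set,
# a changed image tag in values.yml is synced without manual action.
syncPolicy:
  automated:
    selfHeal: true   # revert drift introduced manually in the cluster
    prune: true      # remove resources that were deleted from git
```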

Collaborator:

Have you tested this before?

    inputs:
      logLevel:
        description: 'Application Name'
        required: true
        default: 'Please select application name'

jobs:
  infra-deploy:
    # The type of runner that the job will run on
    runs-on: runner
    env:
      GITHUB_USERNAME: '${{secrets.GITHUB_ID}}'
      GITHUB_PASSWORD: '${{secrets.GITHUB_SECRET}}'

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
        with:
          path: hyperswitch-helm

      # Runs a kubectl apply command to deploy the application to ArgoCD
      - name: ${{ github.event.inputs.logLevel }}
        run: |
          kubectl apply -f hyperswitch-helm/charts/incubator/hyperswitch-cd/${{github.event.inputs.logLevel}}-argo-application.yml
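The free-text `logLevel` input above (whose default is a placeholder string) could be declared as a `choice` input so that only valid application names are selectable; a sketch, assuming the chart names used in this repository:

```yaml
on:
  workflow_dispatch:
    inputs:
      logLevel:
        description: 'Application Name'
        required: true
        type: choice
        options:
          - hyperswitch-app
          - hyperswitch-card-vault
          - hyperswitch-sdk
          - hyperswitch-stack
```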
11 changes: 11 additions & 0 deletions charts/incubator/hyperswitch-cd/Chart.yml
@@ -0,0 +1,11 @@
apiVersion: v2
appVersion: 1.16.0
description: A Helm chart for Kubernetes to deploy hyperswitch application
name: hyperswitch-cd
type: application
version: 0.1.0
dependencies:
  - name: argocd
    version: 2.10.5
    repository: https://charts.bitnami.com/bitnami
    condition: argocd.enabled
66 changes: 66 additions & 0 deletions charts/incubator/hyperswitch-cd/README.md
@@ -0,0 +1,66 @@
# Implement CD Pipeline Using Argo: Proposal

This proposal attempts to implement a CD pipeline that automates the deployment process using Argo CD.

## Folder Tree

- The proposed setup adds a `hyperswitch-cd` folder under `charts/incubator`, keeping the repository structure consistent and deployment-ready.

- An Argo CD config file is created for each workload, following the naming convention `{app_name}-argo-application.yml`; generating these files can of course be automated through workflows.
- The tree layout would look like:

```text
.
├── charts/incubator
│   ├── hyperswitch-app
│   ├── hyperswitch-card-vault
│   ├── hyperswitch-cd
│   │   └── {app_name}-argo-application.yml
│   ├── hyperswitch-sdk
│   └── hyperswitch-stack
└── repo
```

## How It Works

- A workflow is created to deploy applications, reducing human intervention.
- Simply run the `ArgoCD-Deploy` workflow from the GitHub Actions tab, specifying the name of the application to deploy.
- As of now, the configuration is based on my local cluster setup.


## Getting Started

Inside the cluster, the following steps install the Argo CD CLI and the Argo CD API server:

- Install `kubectl` command line tool. [Install kubectl](https://kubernetes.io/docs/tasks/tools/)
- Have a `kubeconfig` file (default location is ~/.kube/config).
- Install ArgoCD.
```
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
- TLS certificates for the exposed endpoints must be configured explicitly.
- Install the Argo CD CLI. On macOS:
```
brew install argocd
```

Collaborator:

Instead of installing argocd manually, we can add it as a dependent chart (https://github.com/bitnami/charts/tree/main/bitnami/argo-cd/) and keep it within hyperswitch-cd.

Author:

Yes, better approach. I have added it in the latest commit.
- Change the argocd-server service type to LoadBalancer:
```
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
```
- Set up port forwarding:
```
kubectl port-forward svc/argocd-server -n argocd 8080:443
```

The ArgoCD API Server can then be accessed at https://localhost:8080
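To log in from the CLI, the initial `admin` password is stored base64-encoded in the `argocd-initial-admin-secret` Secret. A sketch of the decode step (no cluster is available here, so the `kubectl` output is simulated with a stand-in value):

```shell
# Against a live cluster, the encoded password would come from:
#   kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}'
encoded=$(printf 'stand-in-password' | base64)   # stand-in for the kubectl output
password=$(printf '%s' "$encoded" | base64 -d)   # decode it
echo "$password"
# Then log in through the port-forward:
#   argocd login localhost:8080 --username admin --password "$password" --insecure
```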

## Achievements

- I was able to deploy the applications in my local setup using Argo CD.
- However, my machine is AMD64-based while the image was built for the ARM architecture, so the pods did not come up; the Argo CD logs showed `exec /bin/sh: exec format error`.

## What's Next

- This setup can be leveraged for environment-specific configurations by maintaining respective branches, e.g. dev, stg, and prod.
- Pipelines can be set up to promote the Argo CD config files from one environment to another.
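One way to realize the branch-per-environment idea is to let the list generator carry a `targetRevision` per environment; a sketch with hypothetical branch names (fragment of an ApplicationSet spec):

```yaml
# Hypothetical fragment: each list element tracks its own branch
generators:
  - list:
      elements:
        - env: dev
          targetRevision: dev
        - env: prod
          targetRevision: prod
template:
  metadata:
    name: 'hyperswitch-app-{{env}}'
  spec:
    source:
      targetRevision: '{{targetRevision}}'
```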
@@ -0,0 +1,37 @@
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
Collaborator:

How do we use this ApplicationSet without a Chart.yaml?

Author:

Added the chart.yml in hyperswitch-cd.

metadata:
  name: hyperswitch-app-prod-app-set
  labels:
    app: hyperswitch-app-prod-set
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  generators:
    - list:
        elements:
          - cluster: local-as1
            url: https://kubernetes.default.svc
            helm_value: values.yaml
  template:
    metadata:
      name: 'hyperswitch-app-prod-app-{{cluster}}'
    spec:
      project: default
      source:
        path: charts/incubator/hyperswitch-app
        repoURL: 'https://github.com/juspay/hyperswitch-helm.git'
        targetRevision: HEAD
        helm:
          valueFiles:
            - '{{helm_value}}'
      destination:
        server: '{{url}}'
        namespace: hyperswitch-app-prod
      syncPolicy:
        syncOptions:
          - CreateNamespace=true
        automated:
          selfHeal: true
          prune: true
@@ -0,0 +1,37 @@
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: hyperswitch-card-vault-prod-app-set
  labels:
    app: hyperswitch-card-vault-prod-set
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  generators:
    - list:
        elements:
          - cluster: local-as1
            url: https://kubernetes.default.svc
            helm_value: values.yaml
  template:
    metadata:
      name: 'hyperswitch-card-vault-prod-app-{{cluster}}'
    spec:
      project: default
      source:
        path: charts/incubator/hyperswitch-card-vault
        repoURL: 'https://github.com/juspay/hyperswitch-helm.git'
        targetRevision: HEAD
        helm:
          valueFiles:
            - '{{helm_value}}'
      destination:
        server: '{{url}}'
        namespace: hyperswitch-card-vault-prod
      syncPolicy:
        syncOptions:
          - CreateNamespace=true
        automated:
          selfHeal: true
          prune: true
@@ -0,0 +1,37 @@
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
Collaborator:

The expectation is to have the individual hyperswitch services running with the help of Argo CD. The ApplicationSet defined here is in no way connected to the Helm chart; how will this be added to Kubernetes when running helm install inside the charts/incubator/hyperswitch-cd folder?

metadata:
  name: hyperswitch-sdk-prod-app-set
  labels:
    app: hyperswitch-sdk-prod-set
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  generators:
    - list:
        elements:
          - cluster: local-as1
            url: https://kubernetes.default.svc
            helm_value: values.yaml
  template:
    metadata:
      name: 'hyperswitch-sdk-prod-app-{{cluster}}'
    spec:
      project: default
      source:
        path: charts/incubator/hyperswitch-sdk
        repoURL: 'https://github.com/juspay/hyperswitch-helm.git'
        targetRevision: HEAD
        helm:
          valueFiles:
            - '{{helm_value}}'
      destination:
        server: '{{url}}'
        namespace: hyperswitch-sdk-prod
      syncPolicy:
        syncOptions:
          - CreateNamespace=true
        automated:
          selfHeal: true
          prune: true
@@ -0,0 +1,37 @@
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: hyperswitch-stack-prod-app-set
  labels:
    app: hyperswitch-stack-prod-set
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  generators:
    - list:
        elements:
          - cluster: local-as1
            url: https://kubernetes.default.svc
            helm_value: values.yaml
  template:
    metadata:
      name: 'hyperswitch-stack-prod-app-{{cluster}}'
    spec:
      project: default
      source:
        path: charts/incubator/hyperswitch-stack
        repoURL: 'https://github.com/juspay/hyperswitch-helm.git'
        targetRevision: HEAD
        helm:
          valueFiles:
            - '{{helm_value}}'
      destination:
        server: '{{url}}'
        namespace: hyperswitch-stack-prod
      syncPolicy:
        syncOptions:
          - CreateNamespace=true
        automated:
          selfHeal: true
          prune: true