# Helm

The values.yaml files for deploying various pre-defined Helm charts can be found in `k8s/helm/`.

First off, let's assume that Helm has already been deployed.

Installing a Helm chart is quite simple:

```
helm install --name deployment_name --namespace somenamespace chart/name
```

If we want to specify some overrides to that chart, we can pass them as CLI arguments,
or, better yet, specify them in a values.yaml file.

```
helm install -f path/to/values.yaml --name deployment_name --namespace somenamespace chart/name
```

## Elasticsearch

First off, let's deploy Elasticsearch using the
[official helm chart](https://github.com/elastic/helm-charts/tree/master/elasticsearch).

If you haven't added the elastic helm repo yet, you should do that first:

```
helm repo add elastic https://helm.elastic.co
```

If you want to practice installing Elasticsearch, you can specify a new namespace and delete it when you're done.

Next, let's assume we want to deploy Elasticsearch to a testing namespace. Let's call it `test-es`.

```
$ helm install --name elasticsearch --namespace test-es elastic/elasticsearch
```

Now you can monitor the pods to see if Elasticsearch is up and ready.
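
For example, a simple way to watch the pods come up (no chart-specific assumptions needed) is:

```
# Watch the pods in the test-es namespace until they all report Ready
$ kubectl get pods --namespace test-es -w
```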

Now let's say we want to make some changes to the options for deploying Elasticsearch.
You can create a file in `values/` called `elasticsearch.yaml`, and to deploy these changes
we just need to run the command:

```
$ helm upgrade -f values/elasticsearch.yaml elasticsearch elastic/elasticsearch --namespace test-es
```

The upgrade process will, one by one, add a new Elasticsearch node to the cluster, wait until the cluster is green,
then remove an old node from the cluster, wait until green, and so on.
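
If you want to follow the rolling upgrade as it happens, you can watch the StatefulSet roll out; assuming the
chart's default `clusterName`/`nodeGroup` of `elasticsearch-master`, that would be something like:

```
$ kubectl rollout status statefulset/elasticsearch-master --namespace test-es
```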

The upgrade of Elasticsearch can be done with zero downtime using this rolling upgrade procedure.

This process can also be used to upgrade Elasticsearch to newer versions in the future.
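
For example, assuming the chart exposes the Elasticsearch version through its `imageTag` value (check the chart's
values.yaml first; the version below is only illustrative), a version bump could be rolled out like this:

```
$ helm upgrade -f values/elasticsearch.yaml --set imageTag=7.6.2 elasticsearch elastic/elasticsearch --namespace test-es
```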

## Kibana

Deploying Kibana is just as simple as deploying Elasticsearch.

Skipping the initial install step like we did with Elasticsearch, let's assume that we already have the values file
for Kibana we want to use.

So deploying Kibana using a custom values file can be done using:

```
helm upgrade -i -f values/kibana.yaml kibana elastic/kibana --namespace test-es
```

Note that this command is slightly different: here we run `upgrade` with the `-i` flag,
which means upgrade the release if it already exists, otherwise install it. This makes the command idempotent,
unlike the plain `helm install` command we saw in the [Elasticsearch](./README.md#elasticsearch) section.
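
If you want to check whether a release already exists before (or after) running the command, `helm ls` with a
name filter works well:

```
$ helm ls kibana
```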

### Ingress

Please note that by default the Ingress for Kibana is disabled.
If you'd like to enable the ingress for Kibana, you must do so explicitly.

The default ingress configuration for Kibana can be found [here](https://github.com/elastic/helm-charts/blob/master/kibana/values.yaml#L105-L116).

In the `values/kibana.yaml` file you must override the Ingress settings to enable an ingress for Kibana.
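
If you prefer doing it from the command line while testing, the same override can be passed with `--set`
(the hostname below is a placeholder, and the exact shape of the `ingress.hosts` value depends on the chart version,
so double-check it against the linked values.yaml):

```
helm upgrade -i -f values/kibana.yaml kibana elastic/kibana --namespace test-es \
  --set ingress.enabled=true \
  --set "ingress.hosts[0]=kibana.example.com"
```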

### Port Forwarding

In the meantime, after Kibana has been deployed you can use kubectl's port forwarding to access the Kibana
instance via localhost.

```
$ kubectl port-forward deployment/kibana-kibana 5601 -n test-es
```

Now you can access Kibana at `http://localhost:5601`.

## APM Server

Deploying the APM server using a custom values.yaml file would look like:

```
helm upgrade -i -f values/apm-server.yaml apm-server elastic/apm-server --namespace test-es
```

Like Kibana, the APM server is configured without an ingress. It should not be exposed publicly, except in the case
where you want to collect APM data from an application that's running outside of the k8s cluster.
Even in that event, I would suggest deploying that app to k8s instead.
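
Apps running inside the cluster don't need an ingress at all; they can talk to the APM server through its ClusterIP
service. The exact service name depends on the release and chart, so list the services and point your APM agent at
the apm-server one (port 8200 is the APM server's default):

```
$ kubectl get svc --namespace test-es
```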

## Beats

Deploying Metricbeat and Filebeat using custom values.yaml files would look like:

```
helm upgrade -i -f values/filebeat.yaml filebeat elastic/filebeat --namespace test-es
helm upgrade -i -f values/metricbeat.yaml metricbeat elastic/metricbeat --namespace test-es
```

Metricbeat and Filebeat are both configured by default to start pulling metrics/logs from the k8s cluster and ship them to the local Elasticsearch instance.
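
Both charts deploy DaemonSets, so a quick way to confirm the beats are running on every node is:

```
$ kubectl get daemonsets --namespace test-es
```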