- Prerequisites
- Deployment
- Configuration
Kubelet configuration must contain these flags:
- --authentication-token-webhook=true
  This flag enables a ServiceAccount token to be used to authenticate against the kubelet(s). This can also be enabled by setting the kubelet configuration value authentication.webhook.enabled to true.
- --authorization-mode=Webhook
  This flag makes the kubelet perform an RBAC request against the API server to determine whether the requesting entity (Prometheus in this case) is allowed to access a resource, specifically the /metrics endpoint for this project. This can also be enabled by setting the kubelet configuration value authorization.mode to Webhook.
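If the kubelet is driven by a configuration file instead of flags, the equivalent settings look like this (a minimal sketch; the file path varies by distribution and only the two relevant keys are shown):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  webhook:
    enabled: true   # same effect as --authentication-token-webhook=true
authorization:
  mode: Webhook     # same effect as --authorization-mode=Webhook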
This stack provides resource metrics by deploying the Prometheus Adapter. The adapter is an Extension API Server, and Kubernetes needs to have this feature enabled; otherwise the adapter has no effect but is still deployed.
Prerequisites:
- swap disabled
- AppArmor/SELinux disabled
- kubectl and kubelet are installed at version 1.21.2
- Docker installed
Start minikube with the following parameters:
minikube start --kubernetes-version=v1.21.2 --memory=6g --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0
The kube-prometheus stack includes a resource metrics API server, so the metrics-server addon is not necessary. Ensure the metrics-server addon is disabled on minikube:
$ minikube addons disable metrics-server
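Before continuing, you can verify that the cluster came up and that the control-plane pods are healthy:

$ minikube status
$ kubectl get nodes
$ kubectl get pods -n kube-system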
Install Helm (on Windows, or from source) following the official documentation. Note that the stable chart repository has moved; see:
https://helm.sh/blog/new-location-stable-incubator-charts/
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://charts.helm.sh/stable
helm repo update
- Install custom resource definitions and the operator with its RBAC rules:
$ kubectl apply -f https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml
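The ECK operator is deployed into the elastic-system namespace; it is worth confirming it is running (and optionally following its logs) before creating any Elastic resources:

$ kubectl -n elastic-system get pods
$ kubectl -n elastic-system logs -f statefulset.apps/elastic-operator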
- Namespace for Elastic:
$ kubectl apply -f ./namespace-kube-logging.yaml
- Elastic cluster:
$ kubectl apply -f ./elastic-cluster.yaml
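The Elasticsearch custom resource reports health and phase, so you can wait until the cluster turns green before deploying Kibana:

$ kubectl get elasticsearch -n kube-logging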
- Kibana:
$ kubectl apply -f ./kibana.yaml
- Filebeat DaemonSet:
$ kubectl apply -f ./filebeat-kubernetes.yaml
NOTE: retrieve the password for the "elastic" user:
kubectl get secret -n kube-logging quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode
Then replace the password in the configuration file filebeat-kubernetes.yaml (line 93).
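If you prefer to script the substitution, a sketch like the following works (ELASTIC_PASSWORD_PLACEHOLDER is hypothetical; use whatever value currently sits on line 93 of your copy of the file):

PW=$(kubectl get secret -n kube-logging quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
# replace the placeholder with the real password; adjust the pattern to match your file
sed -i "s/ELASTIC_PASSWORD_PLACEHOLDER/${PW}/" filebeat-kubernetes.yaml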
- Expose ports for Elasticsearch and Kibana:
kubectl -n kube-logging port-forward service/quickstart-es-http --address 0.0.0.0 9200 &
kubectl -n kube-logging port-forward service/quickstart-kb-http --address 0.0.0.0 5601 &
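With the port-forwards in place, you can confirm that Elasticsearch answers (the -k flag skips verification of the self-signed certificate; ${PW} is the password retrieved above):

curl -k -u "elastic:${PW}" https://localhost:9200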
- Create the namespace for Prometheus and Grafana:
$ kubectl apply -f ./namespace-kube-graph.yaml
- Run the installation:
$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace kube-graph
**NOTE**: You can ignore warnings like this:
W1028 17:37:10.523912 36088 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
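Once the release is installed, check that all of its components come up:

$ kubectl get pods -n kube-graph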
- Expose ports for the Prometheus and Grafana web UIs:
NOTE: Keep in mind that the pod name is unique per installation. To find the name of the kube-prometheus-stack-grafana pod in your cluster, run kubectl get pods -n kube-graph
kubectl port-forward -n kube-graph prometheus-kube-prometheus-stack-prometheus-0 --address 0.0.0.0 9090 &
kubectl port-forward -n kube-graph kube-prometheus-stack-grafana-77f995c9c-m48gx --address 0.0.0.0 3000 &
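Alternatively, to avoid looking up the generated pod name, you can forward the Grafana service instead (assuming the default service name and port created by the kube-prometheus-stack chart):

kubectl port-forward -n kube-graph service/kube-prometheus-stack-grafana --address 0.0.0.0 3000:80 &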
- Open Kibana UI
https://127.0.0.1:5601/
- List indices and create an index pattern:
To see the list, go to Management: Stack Management, Data: Index Management. To create a pattern, go to Management: Stack Management, Kibana: Index Patterns.
- Visualize:
Kibana: Visualize, Pie chart.
- Aggregation: Count
- Add bucket: Split slices: Aggregation: Terms, Field: kubernetes.pod.name, Size: 10
https://www.elastic.co/guide/en/kibana/7.2/tutorial-build-dashboard.html
Grafana default credentials
USERNAME: admin
PASSWORD: prom-operator
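If the password was changed at install time, it can be read back from the secret the chart creates (assuming the default secret name for this release):

$ kubectl get secret -n kube-graph kube-prometheus-stack-grafana -o jsonpath='{.data.admin-password}' | base64 --decode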
Prometheus configuration file (open a shell in the Prometheus pod to inspect it):
kubectl exec -it prometheus-kube-prometheus-stack-prometheus-0 -n kube-graph -- /bin/sh
- Log in to the Grafana web UI:
http://127.0.0.1:3000/
- On the left pane click "+" and select Create dashboard.
- Convert to Row and name it "PODs".
- Repeat the same (or use "Add panel" in the top right corner), naming the rows "Nodes", "Namespaces", and "Clusters".
- Open "Dashboard settings" in the top right corner and name the dashboard "Kubernetes".
- Add two more panels; in the settings tab set the Panel title to "CPU" and "Memory" accordingly.
- For CPU set parameters and "Apply" (top right corner):
- Data source: Prometheus
- Metrics: sum(rate(container_cpu_usage_seconds_total{container!="POD",pod!=""}[5m])) by (pod)
- Legend: {{pod}}
- Duplicate the "CPU" panel (CPU > More > Duplicate) and name it "CPU requests":
- Metrics: sum(kube_pod_container_resource_requests_cpu_cores) by (pod)
- Legend: {{pod}}
- For Memory set parameters and "Apply" (top right corner):
- Data source: Prometheus
- Metrics: sum(container_memory_usage_bytes{container!="POD",container!=""}) by (pod)
- Legend: {{pod}}
- Duplicate the "Memory" panel (Memory > More > Duplicate) and name it "Memory requests":
- Metrics: sum(kube_pod_container_resource_requests_memory_bytes) by (pod)
To rebuild the cluster from scratch:
minikube delete && minikube start --kubernetes-version=v1.21.2 --memory=6g --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0
minikube addons disable metrics-server
- Grant users the required privileges:
$ kubectl apply -f ./filebeat_setup.yaml
- Client machine (CentOS):
$ kubectl apply -f ./client-01.yaml
Install Filebeat. First import the Elastic GPG key:
sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
Then create the repository file /etc/yum.repos.d/elastic.repo:
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
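With the repository file in place, install and enable the package:

sudo yum install filebeat
sudo systemctl enable filebeat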
Filebeat configuration (/etc/filebeat/filebeat.yml):
# =================================== Kibana ===================================
setup.kibana:
  host: "https://172.17.0.5:5601"
  ssl.verification_mode: none
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  hosts: ["172.17.0.4:9200"]
  protocol: "https"
  ssl.verification_mode: none
  username: "elastic"
  password: "7r4gJKMl3VRm806i0V0Y6Dg1"
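Before starting Filebeat, you can sanity-check the configuration and the connection to Elasticsearch with Filebeat's built-in test subcommands:

$ filebeat test config
$ filebeat test output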
- Install nginx to collect logs from:
$ yum install nginx
- Enable modules:
$ filebeat modules enable system nginx
- Load assets:
$ filebeat setup -e
- Start Filebeat:
$ filebeat -e
- Start nginx and start generating logs:
[root@client-01 /]# nginx
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
(the two lines above repeat several times)
nginx: [emerg] still could not bind()
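The "Address already in use" errors mean another process is already listening on port 80 (for example an nginx instance that was started during package installation). Find the listener and stop it, or simply reuse it:

ss -tlnp | grep ':80'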
/prometheus $ cat /etc/prometheus/prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['localhost:9090']
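NOTE: with the Prometheus Operator the live configuration is generated by the operator and mounted separately, so the file above may simply be the image default. Either way, the promtool binary shipped in the Prometheus image can validate a configuration file:

/prometheus $ promtool check config /etc/prometheus/prometheus.yml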