Update Logging Screenshots #337


Merged · 4 commits · Aug 10, 2025
46 changes: 28 additions & 18 deletions docs/4-return-of-the-monitoring/1-enable-monitoring.md
@@ -1,24 +1,29 @@
## User Workload Monitoring

> OpenShift's has monitoring capabilities built in. It deploys the prometheus stack and integrates into the OpenShift UI for consuming cluster metrics.
OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. A cluster administrator can configure the monitoring stack with the supported configurations. OpenShift Container Platform delivers monitoring best practices out of the box.

### OCP Developer view Monitoring (pods etc)
A set of alerts are included by default that immediately notify administrators about issues with a cluster. Default dashboards in the OpenShift Container Platform web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster. With the OpenShift Container Platform web console, you can access metrics and manage alerts.

Additionally, you have the option to enable **monitoring for User-Defined Projects**, also known as **User Workload Monitoring**. With this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects, collecting metrics and generating custom alerts.

> Out of the box monitoring in OpenShift - this gives us the Kubernetes metrics for our apps such as Memory usage & CPU etc.
Please find more information about User-Defined Projects in the following <span style="color:blue;">[link](https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring/configuring-user-workload-monitoring).</span>
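
For illustration, the kind of custom alert this enables could look like the following `PrometheusRule` sketch. This is only an example, assuming the Prometheus Operator CRDs that ship with OpenShift monitoring; the resource name, metric, and threshold are placeholders:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pet-battle-example-alerts   # hypothetical name, for illustration only
  namespace: <TEAM_NAME>-test       # a user-defined project
spec:
  groups:
    - name: pet-battle.rules
      rules:
        - alert: PetBattlePodMemoryHigh
          # fires when any pod in the namespace uses more than ~1 GiB for 5 minutes
          expr: sum(container_memory_working_set_bytes{container!='',namespace='<TEAM_NAME>-test'}) by (pod) > 1073741824
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Pod {{ $labels.pod }} is using more than 1GiB of memory"
```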

### OCP Developer view Monitoring (pods etc)

1. To enable the User Workload Monitoring, a one line change has to be made to a config map. This is cluster wide so it's already been done for you, but if you're interested how the <span style="color:blue;">[docs are here](https://docs.openshift.com/container-platform/4.12/monitoring/enabling-monitoring-for-user-defined-projects.html).</span>
1. To enable User Workload Monitoring, a one-line change has to be made to a config map; it has already been done for you on this cluster (a minimal sketch of the config map follows below).

On the OpenShift UI, go to *Observe*; it should show basic health indicators.

![petbattle-default-metrics](images/petbattle-default-metrics.png)
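
For reference, the change lives in the `cluster-monitoring-config` ConfigMap in the `openshift-monitoring` namespace. A minimal sketch of that ConfigMap, assuming no other monitoring settings are present, looks like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # the one-line change: enable monitoring for user-defined projects
    enableUserWorkload: true
```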

2. You can run queries across the namesapce easily with `promql`, a query language for Prometheus. Run a `promql` query to get some info about the memory consumed by the pods in your test namespace
2. You can run queries across the namespace easily with `promql`, a query language for Prometheus. Run a `promql` query to get some info about the memory consumed by the pods in your test namespace

```bash
sum(container_memory_working_set_bytes{container!='',namespace='<TEAM_NAME>-test'}) by (pod)
```

![petbattle-promql](images/petbattle-promql.png)
![petbattle-promql](images/petbattle-promql-I.png)
![petbattle-promql](images/petbattle-promql-II.png)

### Add Grafana & Service Monitor

@@ -70,6 +75,14 @@
oc get servicemonitor -n ${TEAM_NAME}-test -o yaml
```
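
The object returned is a `ServiceMonitor` from the Prometheus Operator, which tells Prometheus which Service to scrape and on which port and path. As a rough sketch only, with illustrative names and labels rather than the exact resource deployed by the chart, it looks something like this:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pet-battle-api          # illustrative name
  namespace: <TEAM_NAME>-test
spec:
  endpoints:
    - interval: 30s
      path: /metrics            # where the application exposes Prometheus metrics
      port: http                # must match a named port on the Service
  selector:
    matchLabels:
      app.kubernetes.io/name: pet-battle-api   # labels of the Service to scrape
```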

Additionally, after some seconds you will be able to run queries using the new metrics scraped by Prometheus:

```bash
jvm_buffer_count_buffers{id="direct"}
```

![petbattle-promql](images/petbattle-promql-III.png)

2. We can create our own application specific dashboards to display live data for ops use or efficiency or A/B test results. We will use Grafana to create dashboards and since it will be another tool, we need to install it through `ubiquitous-journey/values-tooling.yaml`

```yaml
@@ -117,29 +130,26 @@

> Let's extend the Pet Battle Dashboard with a new `panel` to capture some metrics in a visual way for us. Configuring dashboards is easy through the Grafana UI. Then Dashboards are easily shared as they can be exported as a `JSON` document.

1. OpenShift users have a read-only view on Grafana by default - get the `admin` user details from your cluster:

```bash
oc get secret grafana-admin-credentials -o=jsonpath='{.data.GF_SECURITY_ADMIN_PASSWORD}' -n ${TEAM_NAME}-ci-cd \
| base64 -d; echo -n
```
1. Once you've signed in, open the dashboard for editing by clicking **Edit**:

2. Back on Grafana, `login` with these creds after you've signed in using the OpenShift Auth (yes we know this is silly but so are Operators):
![grafana-edit](./images/grafana-add-panel.png)

![grafana-login-admin](./images/grafana-login-admin.png)
2. Once you are able to edit the dashboard, add a new panel by clicking **Add -> Visualisation**:

3. Once you've signed in, add a new panel:
![grafana-add-panel](./images/grafana-add-panel-I.png)

![grafana-add-panel](./images/grafana-add-panel.png)

4. On the new panel, let's configure it to query for some information about our projects. We're going to use a very simple query to count the number of pods running in the namespace (feel free to use any other query). On the Panel settings, set the title to something sensible and add the query below. Hit save!
3. On the new panel, let's configure it to query for some information about our projects. We're going to use a very simple query to count the number of pods running in the namespace (feel free to use any other query). On the Panel settings, set the title to something sensible and add the query below.

```bash
sum(kube_pod_status_ready{namespace="<TEAM_NAME>-test",condition="true"})
```

![new-panel](./images/new-panel.png)

4. Hit **Save** and review the final dashboard:

![final-dashboard](./images/final-dashboard.png)

5. With the new panel on our dashboard, let's see it in action by killing off some pods in our namespace

14 changes: 9 additions & 5 deletions docs/4-return-of-the-monitoring/3-logging.md
@@ -12,20 +12,24 @@

By default, these logs are not stored in a database, but there are a number of reasons to store them (e.g. troubleshooting, legal obligations...).

2. OpenShift magic provides a great way to collect logs across services, anything that's pumped to `STDOUT` or `STDERR` is collected and added to LokiStack. This makes indexing and querrying logs very easy. Let's take a look at OpenShift Logs UI now.
2. OpenShift magic provides a great way to collect logs across services, anything that's pumped to `STDOUT` or `STDERR` is collected and added to LokiStack. This makes indexing and querying logs very easy. Let's take a look at OpenShift Logs UI now.

![logs-test.png](./images/logs-test.png)


7. Let's filter the information, look for the logs specifically for pet-battle apps running in the test nameaspace by adding this to the query bar. Click `Show Query`, paste the below and then hit `Run Query`.
3. Let's filter the information and look specifically for the logs of the pet-battle apps running in the test namespace by adding this to the query bar. Click `Show Query`.

![example-query](./images/show-query.png)

4. Paste the below and then hit `Run Query`.

```bash
{ log_type="application", kubernetes_pod_name=~"pet-battle-.*", kubernetes_namespace_name="<TEAM_NAME>-test" } | json
{ log_type="application", kubernetes_pod_name=~"pet-battle-.*", kubernetes_namespace_name="<TEAM_NAME>-test" }
```

![example-query](./images/example-query.png)

8. Container logs are ephemeral, so once they die you'd loose them unless they're aggregated and stored somewhere. Let's generate some messages and query them from the UI. Connect to pod via rsh and generate logs.
5. Container logs are ephemeral, so once a pod dies you'd lose its logs unless they're aggregated and stored somewhere. Let's generate some messages and query them from the UI. Connect to the pod via `rsh` and generate logs.

```bash
oc project ${TEAM_NAME}-test
@@ -44,7 +48,7 @@
exit
```

9. Back on Kibana we can filter and find these messages with another query:
6. Back in the Logs UI we can filter and find these messages with another query:

```yaml
{ log_type="application", kubernetes_pod_name=~".*mongodb.*", kubernetes_namespace_name="<TEAM_NAME>-test" } |= `🦄🦄🦄🦄` | json
Binary file modified docs/4-return-of-the-monitoring/images/example-query.png
Binary file modified docs/4-return-of-the-monitoring/images/grafana-add-panel.png
Binary file modified docs/4-return-of-the-monitoring/images/grafana-http-reqs.png
Binary file modified docs/4-return-of-the-monitoring/images/logs-test.png
Binary file modified docs/4-return-of-the-monitoring/images/mongodb-unicorn.png
Binary file modified docs/4-return-of-the-monitoring/images/new-panel.png
2 changes: 1 addition & 1 deletion tekton/templates/triggers/gitlab-trigger-template.yaml
@@ -56,7 +56,7 @@ spec:
    persistentVolumeClaim:
      claimName: build-images
  - name: maven-settings
    configmap:
    configMap:
      name: maven-settings
  - name: maven-m2
    persistentVolumeClaim: