
Additional config options are needed to configure ServiceMonitor #25

Open
apu1111 opened this issue Dec 11, 2019 · 10 comments
@apu1111

apu1111 commented Dec 11, 2019

Currently, all database CRDs support only minimal configuration under spec.monitor.prometheus. There are other fields in the ServiceMonitor CRD that users will want to configure, such as relabelings and metricRelabelings. This issue is to add fields to PrometheusSpec to support these, probably via a generic field called additionalConfig.

https://github.com/coreos/prometheus-operator/blob/master/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
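For context, this is the kind of ServiceMonitor configuration in question. relabelings and metricRelabelings live under spec.endpoints[] in the ServiceMonitor CRD; the names, namespace, and service labels below are made up for illustration:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-db-stats        # hypothetical name
  namespace: demo
spec:
  selector:
    matchLabels:
      app: example-db-stats     # hypothetical stats-service label
  endpoints:
    - port: prom-http
      # Rewrite target labels before scraping.
      relabelings:
        - sourceLabels: [__meta_kubernetes_pod_name]
          targetLabel: pod
      # Drop noisy series after scraping.
      metricRelabelings:
        - sourceLabels: [__name__]
          regex: go_gc_.*
          action: drop
```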

@tamalsaha
Contributor

Why not directly edit the generated ServiceMonitors?

@apu1111
Author

apu1111 commented Dec 11, 2019

That would be manual, right? Or are you saying there is a way to programmatically edit it from the operator after it gets created?

@tamalsaha
Contributor

@apu1111, sorry for the slow response. Before you spend too much time on a PR, I think it will be worthwhile to work out the design.

My main concerns are:

  1. I don't want KubeDB CRDs to become an aggregate of the ServiceMonitor CRD + KubeDB CRD. I think this is just bad design in general. It is hard to draw the line on which ServiceMonitor fields to support. Also, what if we want to support another monitoring system like Datadog?

  2. One technical problem with large CRDs is that there is a limit on their byte size. With the introduction of structural schemas / OpenAPI validation, it is fairly easy to cross that limit. We have the DormantDatabase CRD, which is an aggregate of all the CRDs and has been a problem; we have decided to remove it in a future release. You also see the issues this causes with various code generators.

  3. We want to support the GitOps model. In that model, we expect users to provide their own CRs for service monitors.

  4. We are looking to design the monitoring stack for KubeDB. So, there are unknowns that we have not yet decided.

In general, I would like to go with a model of loose coupling when integrating with external projects. One idea I am considering is providing a Helm chart for the Postgres CRD; this has been requested for some time. In that chart we can bundle a Postgres CR, a ServiceMonitor CR, and any additional CRs (for, say, Grafana, Datadog, etc.) and use the chart's values file to pass in any parameters. What do you think?
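As a rough illustration of that chart idea, the values file might expose the monitoring resources behind feature flags. All keys below are hypothetical, not an existing chart's schema:

```yaml
# values.yaml (illustrative keys only)
monitoring:
  serviceMonitor:
    enabled: true          # render the bundled ServiceMonitor template
    interval: 30s          # scrape interval passed through to spec.endpoints
    relabelings: []        # pass-through lists for advanced users
    metricRelabelings: []
  grafana:
    dashboard:
      enabled: false       # optionally ship a dashboard ConfigMap
```

Only the fields users actually need would surface in values.yaml; everything else stays fixed in the templates.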

@apu1111
Author

apu1111 commented Dec 20, 2019

Hi @tamalsaha
I agree. Currently the ServiceMonitor CRD is tightly coupled with the database CRDs because the ServiceMonitor object is created implicitly. Letting users create it explicitly is a very welcome idea. The KubeDB operator can start the exporter, create the stats service, and stop there, without creating the ServiceMonitor. Database Helm charts can include a default ServiceMonitor manifest matching the stats service, deployed by Helm based on values.yaml flags and configs. Many operators follow this model. I believe we could then completely remove the coreos-operator association/code from monitoring-agent-api and simplify/unify the database CRD fields under monitor.

I think this should be possible even now by setting spec.monitor.agent: prometheus.io/builtin and creating a ServiceMonitor explicitly that matches the builtin-prom-mgo-stats service. Let me try this.
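A minimal sketch of that setup (the name mgo, the demo namespace, the MongoDB version, the port name, and the stats-service label are all illustrative; the actual labels on the builtin-prom-mgo-stats service may differ):

```yaml
# MongoDB CR: enable only the builtin agent, so no ServiceMonitor is created.
apiVersion: kubedb.com/v1alpha1
kind: MongoDB
metadata:
  name: mgo
  namespace: demo
spec:
  version: "3.6"                  # illustrative version
  monitor:
    agent: prometheus.io/builtin
---
# User-managed ServiceMonitor selecting the stats service by label.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mgo-stats
  namespace: demo
spec:
  selector:
    matchLabels:
      kubedb.com/name: mgo        # assumed label on the stats service
  endpoints:
    - port: prom-http             # assumed metrics port name
      interval: 30s
```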

@tamalsaha
Contributor

👍

@apu1111
Author

apu1111 commented Dec 20, 2019

I created a ServiceMonitor explicitly and set the MongoDB CRD's spec.monitor.agent to prometheus.io/builtin. It worked absolutely fine. IMO, we can plan the following tasks for this issue next.

  1. Clean up the code and remove ServiceMonitor creation.
  2. Create only one Service for both builtin and operator types. (Will we still need two types under prometheus.io, given that the ServiceMonitor is no longer created?)
  3. Include a default ServiceMonitor manifest and the corresponding values.yaml changes in the Helm chart.
  4. Rearrange the spec.monitor section of the database CRD to clearly differentiate exporter and service options.
  5. Document the current behavior: how to create a customized ServiceMonitor by setting the agent type to builtin and deploying the ServiceMonitor explicitly.

tamalsaha added a commit that referenced this issue Dec 23, 2019
Going forward, AppsCode operators will only add the Prometheus exporter sidecar and create a stats service. They will not apply Prometheus annotations on the stats service or create a ServiceMonitor.
xref: #25

Signed-off-by: Tamal Saha <[email protected]>
@tamalsaha
Contributor

@apu1111, I have updated the exporter API accordingly. I have also updated the KubeDB APIs (not implemented yet): kubedb/apimachinery#476

@tamalsaha
Contributor

@apu1111, I wonder if kustomize would be a better solution for something like this. One of the issues with charts is that almost every field tends to end up in the values file.
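A kustomize flow could keep that surface small by letting users patch only the fields they care about on top of a shipped base. File and resource names below are illustrative, using the classic patchesStrategicMerge mechanism:

```yaml
# kustomization.yaml (illustrative)
resources:
  - servicemonitor.yaml           # the base manifest shipped upstream
patchesStrategicMerge:
  - servicemonitor-patch.yaml     # user-owned overrides
---
# servicemonitor-patch.yaml: override only the fields you care about.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mgo-stats                 # must match the base resource's name
spec:
  endpoints:
    - port: prom-http
      interval: 15s               # tighter scrape interval than the base
```

Nothing else from the base needs to be re-declared; unpatched fields keep their defaults.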

@apu1111
Author

apu1111 commented Jan 7, 2020

Hi @tamalsaha, kustomize is a very good tool, but won't it force people to use kustomize? How would we handle helm release/rollback/list commands? Does it play well with Helm?

Unless someone wants to customize values, the default Helm charts provided by the upstream repo should work just fine. The default ServiceMonitor manifest would contain only the necessary and mandatory parameters.

@tamalsaha
Contributor

I think we are going to do both: first kustomize, then auto-generate charts from those.

@the-redback the-redback removed their assignment Feb 20, 2020