## Version compatibility
We recommend Grafana v10.X.
Quickwit 0.6 is compatible with 0.2.x versions only.
Quickwit 0.7 is compatible with 0.3.x versions only.
Quickwit 0.8 is compatible with 0.4.x versions only.
## Installation
You can either download the plugin manually and unzip it into the plugin directory or use the env variable `GF_INSTALL_PLUGINS` to install it.
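For example, the `GF_INSTALL_PLUGINS` approach can be expressed as a Docker Compose fragment. This is a sketch, not taken from the project's documentation; the service name and port mapping are assumptions to adjust to your setup.

```yaml
# Sketch of a docker-compose service (service name and ports are assumptions).
# GF_INSTALL_PLUGINS tells Grafana to download and install the plugin at startup.
services:
  grafana:
    image: grafana/grafana-oss
    ports:
      - "3000:3000"
    environment:
      GF_INSTALL_PLUGINS: "https://github.com/quickwit-oss/quickwit-datasource/releases/download/v0.4.0/quickwit-quickwit-datasource-0.4.0.zip;quickwit-quickwit-datasource"
```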
### 0.4.0 for Quickwit 0.8
Run `grafana-oss` container with the env variable:
```bash
docker run -p 3000:3000 -e GF_INSTALL_PLUGINS="https://github.com/quickwit-oss/quickwit-datasource/releases/download/v0.4.0/quickwit-quickwit-datasource-0.4.0.zip;quickwit-quickwit-datasource" grafana/grafana-oss run
```
If you’re sure your query is correct and the results are fetched, then you’re fine! The query linting feature is still quite rough around the edges and will improve in future versions of the plugin.
If results are not fetched, make sure you are using a recent version of Quickwit, as some improvements have been made to the query parser.
### The older logs button stops working
This is probably due to a bug in Grafana versions up to 10.3; the next release, Grafana v10.4, should fix the issue.
### There are holes in my logs between pages
This may be due to a limitation of the pagination scheme. To avoid fetching data without controlling the size of the response, we set a limit on how many records each query fetches. The pagination scheme then requests the next chunk of results based on the timestamps already collected, and may skip some logs when more records share a given timestamp than the limit allows.
To avoid that: use timestamps with a finer resolution if possible, raise the query limit, or refine your query.
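The skipping behavior can be illustrated with a toy Python sketch. This is not the plugin's actual code; it assumes a cursor that resumes strictly after the last timestamp seen, with a fixed per-page limit, which is enough to show how records sharing a timestamp can fall into a hole.

```python
# Toy model of timestamp-cursor pagination (not the plugin's real implementation).
records = [
    {"ts": 1, "msg": "a"},
    {"ts": 2, "msg": "b"},
    {"ts": 2, "msg": "c"},  # same timestamp as "b" and "d"
    {"ts": 2, "msg": "d"},
    {"ts": 3, "msg": "e"},
]

LIMIT = 2  # max records fetched per query

def fetch_page(after_ts):
    """Return up to LIMIT records with a timestamp strictly greater than after_ts."""
    hits = [r for r in records if r["ts"] > after_ts]
    return hits[:LIMIT]

collected = []
cursor = 0
while True:
    page = fetch_page(cursor)
    if not page:
        break
    collected.extend(page)
    # The next page resumes strictly after the last timestamp seen, so any
    # remaining records sharing that timestamp are skipped -- the "hole".
    cursor = page[-1]["ts"]

print([r["msg"] for r in collected])  # "c" and "d" are never fetched
```

With a finer timestamp resolution the ties disappear and every record is reachable; a higher limit makes it more likely that all records with a shared timestamp fit in one page.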