
Commit 0bb36bd

pipelines: outputs: kafka: Document the raw_log_key (#1397)
* pipelines: outputs: kafka: Document the raw_log_key

  This is similar to the log_key in other outputs, e.g. the cloudwatch one.

  Signed-off-by: Holger Hans Peter Freyther <[email protected]>

* Update pipeline/outputs/kafka.md

  Co-authored-by: Adam Locke <[email protected]>
  Signed-off-by: Holger Freyther <[email protected]>

* Update pipeline/outputs/kafka.md

  Co-authored-by: Adam Locke <[email protected]>
  Signed-off-by: Holger Freyther <[email protected]>

* Make language more inclusive

  Replace "dummy" with "example."

  Signed-off-by: Adam Locke <[email protected]>

---------

Signed-off-by: Holger Hans Peter Freyther <[email protected]>
Signed-off-by: Holger Freyther <[email protected]>
Signed-off-by: Adam Locke <[email protected]>
Co-authored-by: Adam Locke <[email protected]>
1 parent a394ae7 commit 0bb36bd

File tree

1 file changed (+27, -1 lines changed)


pipeline/outputs/kafka.md

+27 -1
```diff
@@ -6,7 +6,7 @@
 
 | Key | Description | default |
 | :--- | :--- | :--- |
-| format | Specify data format, options available: json, msgpack. | json |
+| format | Specify data format, options available: json, msgpack, raw. | json |
 | message\_key | Optional key to store the message | |
 | message\_key\_field | If set, the value of Message\_Key\_Field in the record will indicate the message key. If not set nor found in the record, Message\_Key will be used \(if set\). | |
 | timestamp\_key | Set the key to store the record timestamp | @timestamp |
@@ -17,6 +17,7 @@
 | dynamic\_topic | adds unknown topics \(found in Topic\_Key\) to Topics. So in Topics only a default topic needs to be configured | Off |
 | queue\_full\_retries | Fluent Bit queues data into rdkafka library, if for some reason the underlying library cannot flush the records the queue might fills up blocking new addition of records. The `queue_full_retries` option set the number of local retries to enqueue the data. The default value is 10 times, the interval between each retry is 1 second. Setting the `queue_full_retries` value to `0` set's an unlimited number of retries. | 10 |
 | rdkafka.{property} | `{property}` can be any [librdkafka properties](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md) | |
+| raw\_log\_key | When using the raw format and set, the value of raw\_log\_key in the record will be send to kafka as the payload. | |
 
 > Setting `rdkafka.log.connection.close` to `false` and `rdkafka.request.required.acks` to 1 are examples of recommended settings of librdfkafka properties.
```
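The key and payload selection rules documented in the table above can be sketched in a few lines of Python. This is an illustrative model of the documented behavior only, not the plugin's actual C implementation; the function names are hypothetical and the record is a plain dict standing in for a Fluent Bit record:

```python
import json

def select_message_key(record, message_key_field=None, message_key=None):
    """Documented fallback: if Message_Key_Field is set and present in the
    record, its value becomes the Kafka message key; otherwise Message_Key
    (if set) is used."""
    if message_key_field and message_key_field in record:
        return record[message_key_field]
    return message_key

def select_payload(record, fmt="json", raw_log_key=None):
    """With Format raw and raw_log_key set, only that field's value is sent
    to Kafka as the payload; otherwise the whole record is serialized
    (json format shown here)."""
    if fmt == "raw" and raw_log_key and raw_log_key in record:
        return record[raw_log_key]
    return json.dumps(record)

record = {"payloadkey": "Data to send to kafka",
          "msgkey": "Key to use in the message"}
print(select_message_key(record, message_key_field="msgkey"))       # Key to use in the message
print(select_payload(record, fmt="raw", raw_log_key="payloadkey"))  # Data to send to kafka
```

Note that when the named field is absent from a record, the sketch falls back to the whole-record serialization, mirroring the "if not set nor found in the record" wording in the `message_key_field` row.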
````diff
@@ -114,3 +115,28 @@ specific avro schema.
     rdkafka.log_level 7
     rdkafka.metadata.broker.list 192.168.1.3:9092
 ```
+
+#### Kafka Configuration File with Raw format
+
+This example Fluent Bit configuration file creates example records with the
+_payloadkey_ and _msgkey_ keys. The _msgkey_ value is used as the Kafka message
+key, and the _payloadkey_ value as the payload.
+
+```text
+[INPUT]
+    Name example
+    Tag example.data
+    Dummy {"payloadkey":"Data to send to kafka", "msgkey": "Key to use in the message"}
+
+[OUTPUT]
+    Name kafka
+    Match *
+    Brokers 192.168.1.3:9092
+    Topics test
+    Format raw
+
+    Raw_Log_Key payloadkey
+    Message_Key_Field msgkey
+```
````
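For the example record in the added configuration, the difference between `Format json` and `Format raw` with `Raw_Log_Key` can be sketched as follows. This is an illustrative Python model of what a consumer would receive, not the plugin's actual serializer:

```python
import json

# The record produced by the example [INPUT] section in the diff above.
record = {"payloadkey": "Data to send to kafka",
          "msgkey": "Key to use in the message"}

# Format json: the entire record is serialized as the message value.
json_value = json.dumps(record)

# Format raw with Raw_Log_Key payloadkey: only this field's value becomes the
# message value, and Message_Key_Field msgkey supplies the message key.
raw_value = record["payloadkey"]
message_key = record["msgkey"]

print(raw_value)      # Data to send to kafka
print(message_key)    # Key to use in the message
```

So with the raw format a consumer on topic `test` sees only the bare string `Data to send to kafka` as the payload, rather than the full JSON document.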

0 commit comments
