@@ -21,21 +21,26 @@
class FirehoseClient extends AbstractApi
{
/**
- * Writes a single data record into an Amazon Firehose delivery stream. To write multiple data records into a delivery
- * stream, use PutRecordBatch. Applications using these operations are referred to as producers.
+ * Writes a single data record into a Firehose stream. To write multiple data records into a Firehose stream, use
+ * PutRecordBatch. Applications using these operations are referred to as producers.
*
- * By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB
+ * By default, each Firehose stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB
* per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each
- * delivery stream. For more information about limits and how to request an increase, see Amazon Firehose Limits [^1].
+ * Firehose stream. For more information about limits and how to request an increase, see Amazon Firehose Limits [^1].
*
* Firehose accumulates and publishes a particular metric for a customer account in one-minute intervals. It is possible
- * that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+ * that the bursts of incoming bytes/records ingested to a Firehose stream last only for a few seconds. Due to this, the
* actual spikes in the traffic might not be fully visible in the customer's 1-minute CloudWatch metrics.
*
- * You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
+ * You must specify the name of the Firehose stream and the data record when using PutRecord. The data record consists
* of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log
* file, geographic location data, website clickstream data, and so on.
*
+ * For multi-record de-aggregation, you cannot put more than 500 records even if the data blob length is less than 1,000
+ * KiB. If you include more than 500 records, the request succeeds but the record de-aggregation doesn't work as
+ * expected and the transformation Lambda is invoked with the complete base64-encoded data blob instead of de-aggregated,
+ * base64-decoded records.
+ *
* Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
* destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
* unique within the data. This allows the consumer application to parse individual data items when reading the data
@@ -45,13 +50,13 @@ class FirehoseClient extends AbstractApi
* applications can use this ID for purposes such as auditability and investigation.
*
* If the `PutRecord` operation throws a `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3
- * times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery
+ * times. If the exception persists, it is possible that the throughput limits have been exceeded for the Firehose
* stream.
*
* Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
* larger data assets, allow for a longer timeout before retrying Put API operations.
*
- * Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it tries
+ * Data records sent to Firehose are stored for 24 hours from the time they are added to a Firehose stream as it tries
* to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no
* longer available.
*
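To make the producer flow documented above concrete, here is a minimal sketch against this client. It assumes the array-shaped input accepted by async-aws clients (mirroring the AWS PutRecord parameters); the stream name is a placeholder, and the newline delimiter follows the recommendation in the docblock.

```php
<?php

use AsyncAws\Firehose\FirehoseClient;

$firehose = new FirehoseClient();

// Write one newline-delimited record so the consumer application can
// split individual items out of the buffered data at the destination.
$result = $firehose->putRecord([
    'DeliveryStreamName' => 'my-firehose-stream', // placeholder name
    'Record' => [
        'Data' => json_encode(['event' => 'page_view', 'ts' => time()]) . "\n",
    ],
]);

// The returned RecordId can be logged for auditability and investigation.
$recordId = $result->getRecordId();
```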
@@ -90,23 +95,28 @@ public function putRecord($input): PutRecordOutput
}

/**
- * Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per
- * producer than when writing single records. To write single data records into a delivery stream, use PutRecord.
+ * Writes multiple data records into a Firehose stream in a single call, which can achieve higher throughput per
+ * producer than when writing single records. To write single data records into a Firehose stream, use PutRecord.
* Applications using these operations are referred to as producers.
*
* Firehose accumulates and publishes a particular metric for a customer account in one-minute intervals. It is possible
- * that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+ * that the bursts of incoming bytes/records ingested to a Firehose stream last only for a few seconds. Due to this, the
* actual spikes in the traffic might not be fully visible in the customer's 1-minute CloudWatch metrics.
*
* For information about service quota, see Amazon Firehose Quota [^1].
*
* Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB
* (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed.
*
- * You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
+ * You must specify the name of the Firehose stream and the data record when using PutRecord. The data record consists
* of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a segment from a
* log file, geographic location data, website clickstream data, and so on.
*
+ * For multi-record de-aggregation, you cannot put more than 500 records even if the data blob length is less than 1,000
+ * KiB. If you include more than 500 records, the request succeeds but the record de-aggregation doesn't work as
+ * expected and the transformation Lambda is invoked with the complete base64-encoded data blob instead of de-aggregated,
+ * base64-decoded records.
+ *
* Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
* destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
* unique within the data. This allows the consumer application to parse individual data items when reading the data
@@ -132,12 +142,12 @@ public function putRecord($input): PutRecordOutput
* charges). We recommend that you handle any duplicates at the destination.
*
* If PutRecordBatch throws `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3 times. If the
- * exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.
+ * exception persists, it is possible that the throughput limits have been exceeded for the Firehose stream.
*
* Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
* larger data assets, allow for a longer timeout before retrying Put API operations.
*
- * Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it
+ * Data records sent to Firehose are stored for 24 hours from the time they are added to a Firehose stream as it
* attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data
* is no longer available.
*
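As a companion to the single-record sketch above, here is a hedged batch-producer sketch. It again assumes the array-shaped async-aws input; `FailedPutCount` and the per-record response entries come from the AWS PutRecordBatch API, and the retry handling shown is illustrative only.

```php
<?php

use AsyncAws\Firehose\FirehoseClient;

$firehose = new FirehoseClient();

// Up to 500 records per request, 1,000 KB per record, 4 MB per request.
$records = [];
foreach (['alpha', 'beta', 'gamma'] as $event) {
    $records[] = ['Data' => $event . "\n"]; // newline-delimited, as above
}

$result = $firehose->putRecordBatch([
    'DeliveryStreamName' => 'my-firehose-stream', // placeholder name
    'Records' => $records,
]);

// PutRecordBatch can succeed partially: a non-zero FailedPutCount means
// some entries were rejected and may be resent. Resending can create
// duplicates, so handle de-duplication at the destination as recommended.
if ($result->getFailedPutCount() > 0) {
    foreach ($result->getRequestResponses() as $i => $entry) {
        if (null !== $entry->getErrorCode()) {
            // For example, re-queue $records[$i] with a backoff (illustrative).
        }
    }
}
```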