@@ -28,6 +28,11 @@ class FirehoseClient extends AbstractApi
 * delivery stream. For more information about limits and how to request an increase, see Amazon Kinesis Data Firehose
 * Limits [^1].
 *
+ * Kinesis Data Firehose accumulates and publishes a particular metric for a customer account in one minute intervals.
+ * It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds.
+ * Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch
+ * metrics.
+ *
 * You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
 * of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log
 * file, geographic location data, website clickstream data, and so on.
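For illustration, a minimal sketch of the single-record path the docblock above describes. It assumes the async-aws convention of array input shaped like the AWS `PutRecord` request (`DeliveryStreamName`, `Record.Data`) and a `getRecordId()` getter on `PutRecordOutput`; the stream name is a placeholder.

```php
<?php

use AsyncAws\Firehose\FirehoseClient;

$firehose = new FirehoseClient();

$result = $firehose->putRecord([
    'DeliveryStreamName' => 'my-delivery-stream', // placeholder stream name
    'Record' => [
        // Any kind of data up to 1,000 KiB, e.g. a log line or a JSON document.
        'Data' => json_encode(['event' => 'page_view', 'ts' => time()]) . "\n",
    ],
]);

// The returned RecordId uniquely identifies the record (assumed getter name).
echo $result->getRecordId(), "\n";
```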
@@ -40,8 +45,12 @@ class FirehoseClient extends AbstractApi
 * The `PutRecord` operation returns a `RecordId`, which is a unique string assigned to each record. Producer
 * applications can use this ID for purposes such as auditability and investigation.
 *
- * If the `PutRecord` operation throws a `ServiceUnavailableException`, back off and retry. If the exception persists,
- * it is possible that the throughput limits have been exceeded for the delivery stream.
+ * If the `PutRecord` operation throws a `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3
+ * times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery
+ * stream.
+ *
+ * Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
+ * larger data assets, allow for a longer time out before retrying Put API operations.
 *
 * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery stream
 * as it tries to send the records to the destination. If the destination is unreachable for more than 24 hours, the
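The retry note above can be complemented by caller-side backoff. A rough sketch, assuming the lazy result object exposes `resolve()` (the usual async-aws way to force the request and surface errors); the broad `catch` stands in for the specific unavailability exception class, which the diff does not name.

```php
<?php

use AsyncAws\Firehose\FirehoseClient;

// Caller-side backoff on top of the client's built-in retries. In a real
// producer, narrow the catch to whatever exception your client version throws
// for ServiceUnavailableException (class name is an assumption here).
function putRecordWithBackoff(FirehoseClient $firehose, array $input, int $maxAttempts = 5): void
{
    for ($attempt = 1; $attempt <= $maxAttempts; ++$attempt) {
        try {
            // resolve() forces the lazy request and throws if it failed (assumed API).
            $firehose->putRecord($input)->resolve();

            return;
        } catch (\Throwable $e) {
            if ($attempt === $maxAttempts) {
                // Persistent failure: the stream's throughput limits may be exceeded.
                throw $e;
            }
            // Exponential backoff with jitter: ~0.2s, 0.4s, 0.8s, ...
            usleep((2 ** $attempt) * 100000 + random_int(0, 100000));
        }
    }
}
```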
@@ -84,6 +93,11 @@ public function putRecord($input): PutRecordOutput
 * producer than when writing single records. To write single data records into a delivery stream, use PutRecord.
 * Applications using these operations are referred to as producers.
 *
+ * Kinesis Data Firehose accumulates and publishes a particular metric for a customer account in one minute intervals.
+ * It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds.
+ * Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch
+ * metrics.
+ *
 * For information about service quota, see Amazon Kinesis Data Firehose Quota [^1].
 *
 * Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB
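A sketch of the batching described above, chunking records to stay within the 500-records-per-request limit. It assumes array input mirroring the AWS `PutRecordBatch` shape (`DeliveryStreamName`, `Records[].Data`) and a `getFailedPutCount()` getter on the output; the stream name and payloads are placeholders.

```php
<?php

use AsyncAws\Firehose\FirehoseClient;

$firehose = new FirehoseClient();
$events = ['{"event":"a"}', '{"event":"b"}']; // placeholder payloads, each <= 1,000 KB

// Stay within the documented limit of 500 records per PutRecordBatch request.
foreach (array_chunk($events, 500) as $chunk) {
    $records = array_map(
        static fn (string $event): array => ['Data' => $event . "\n"],
        $chunk
    );

    $result = $firehose->putRecordBatch([
        'DeliveryStreamName' => 'my-delivery-stream', // placeholder stream name
        'Records' => $records,
    ]);

    if ($result->getFailedPutCount() > 0) {
        // Resend only the failed entries (see the docblock's note on duplicates),
        // ideally after a short backoff.
    }
}
```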
@@ -117,8 +131,11 @@ public function putRecord($input): PutRecordOutput
 * processing. This minimizes the possible duplicate records and also reduces the total bytes sent (and corresponding
 * charges). We recommend that you handle any duplicates at the destination.
 *
- * If PutRecordBatch throws `ServiceUnavailableException`, back off and retry. If the exception persists, it is possible
- * that the throughput limits have been exceeded for the delivery stream.
+ * If PutRecordBatch throws `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3 times. If the
+ * exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.
+ *
+ * Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
+ * larger data assets, allow for a longer time out before retrying Put API operations.
 *
 * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery stream
 * as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the
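To follow the "longer time out for larger data assets" guidance above, one option is a more generous HTTP timeout plus a longer pause before the caller's own retry. A sketch, assuming the async-aws constructor order (configuration, credential provider, HTTP client) and Symfony's `HttpClient`; adjust to the client version in use.

```php
<?php

use AsyncAws\Firehose\FirehoseClient;
use Symfony\Component\HttpClient\HttpClient;

// Larger payloads: give the HTTP client a more generous timeout and wait longer
// before retrying. The constructor argument order is an assumption about the
// async-aws client; adjust to your version.
$firehose = new FirehoseClient(
    ['region' => 'eu-west-1'],
    null, // default credential provider chain
    HttpClient::create(['timeout' => 60.0])
);

$largeRecords = [/* placeholder: ['Data' => $bigBlob] entries */];
$input = [
    'DeliveryStreamName' => 'my-delivery-stream', // placeholder stream name
    'Records' => $largeRecords,
];

try {
    $firehose->putRecordBatch($input)->resolve();
} catch (\Throwable $e) {
    sleep(5); // allow a longer pause before retrying; duplicates are possible
    $firehose->putRecordBatch($input)->resolve();
}
```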