
Commit 12dd4eb

Update generated code (#1577)
* update generated code
* Update src/Service/Ses/CHANGELOG.md

Co-authored-by: Jérémy Derussé <[email protected]>
1 parent e4deb74 commit 12dd4eb

File tree

2 files changed (+25, -4 lines changed)


CHANGELOG.md (+4)
```diff
@@ -2,6 +2,10 @@
 
 ## NOT RELEASED
 
+### Changed
+
+- AWS enhancement: Documentation updates.
+
 ## 1.1.0
 
 ### Added
```

src/FirehoseClient.php (+21, -4)
```diff
@@ -28,6 +28,11 @@ class FirehoseClient extends AbstractApi
      * delivery stream. For more information about limits and how to request an increase, see Amazon Kinesis Data Firehose
      * Limits [^1].
      *
+     * Kinesis Data Firehose accumulates and publishes a particular metric for a customer account in one minute intervals.
+     * It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds.
+     * Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch
+     * metrics.
+     *
      * You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
      * of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log
      * file, geographic location data, website clickstream data, and so on.
```
```diff
@@ -40,8 +45,12 @@ class FirehoseClient extends AbstractApi
      * The `PutRecord` operation returns a `RecordId`, which is a unique string assigned to each record. Producer
      * applications can use this ID for purposes such as auditability and investigation.
      *
-     * If the `PutRecord` operation throws a `ServiceUnavailableException`, back off and retry. If the exception persists,
-     * it is possible that the throughput limits have been exceeded for the delivery stream.
+     * If the `PutRecord` operation throws a `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3
+     * times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery
+     * stream.
+     *
+     * Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
+     * larger data assets, allow for a longer time out before retrying Put API operations.
      *
      * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery stream
      * as it tries to send the records to the destination. If the destination is unreachable for more than 24 hours, the
```
```diff
@@ -84,6 +93,11 @@ public function putRecord($input): PutRecordOutput
      * producer than when writing single records. To write single data records into a delivery stream, use PutRecord.
      * Applications using these operations are referred to as producers.
      *
+     * Kinesis Data Firehose accumulates and publishes a particular metric for a customer account in one minute intervals.
+     * It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds.
+     * Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch
+     * metrics.
+     *
      * For information about service quota, see Amazon Kinesis Data Firehose Quota [^1].
      *
      * Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB
```
```diff
@@ -117,8 +131,11 @@ public function putRecord($input): PutRecordOutput
      * processing. This minimizes the possible duplicate records and also reduces the total bytes sent (and corresponding
      * charges). We recommend that you handle any duplicates at the destination.
      *
-     * If PutRecordBatch throws `ServiceUnavailableException`, back off and retry. If the exception persists, it is possible
-     * that the throughput limits have been exceeded for the delivery stream.
+     * If PutRecordBatch throws `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3 times. If the
+     * exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.
+     *
+     * Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
+     * larger data assets, allow for a longer time out before retrying Put API operations.
      *
      * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery stream
      * as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the
```
