Commit 83906a4

Update generated code (#1668)
update generated code
1 parent 5024f87 commit 83906a4

File tree

4 files changed: +31 -30 lines changed


Diff for: CHANGELOG.md (+4)

@@ -2,6 +2,10 @@
 
 ## NOT RELEASED
 
+### Changed
+
+- AWS enhancement: Documentation updates.
+
 ## 1.2.0
 
 ### Added

Diff for: src/Exception/InvalidKMSResourceException.php (+3 -3)

@@ -5,9 +5,9 @@
 use AsyncAws\Core\Exception\Http\ClientException;
 
 /**
- * Kinesis Data Firehose throws this exception when an attempt to put records or to start or stop delivery stream
- * encryption fails. This happens when the KMS service throws one of the following exception types:
- * `AccessDeniedException`, `InvalidStateException`, `DisabledException`, or `NotFoundException`.
+ * Firehose throws this exception when an attempt to put records or to start or stop delivery stream encryption fails.
+ * This happens when the KMS service throws one of the following exception types: `AccessDeniedException`,
+ * `InvalidStateException`, `DisabledException`, or `NotFoundException`.
 */
 final class InvalidKMSResourceException extends ClientException
 {

Diff for: src/Exception/ServiceUnavailableException.php (+1 -1)

@@ -7,7 +7,7 @@
 /**
  * The service is unavailable. Back off and retry the operation. If you continue to see the exception, throughput limits
  * for the delivery stream may have been exceeded. For more information about limits and how to request an increase, see
- * Amazon Kinesis Data Firehose Limits [^1].
+ * Amazon Firehose Limits [^1].
  *
  * [^1]: https://docs.aws.amazon.com/firehose/latest/dev/limits.html
  */

Diff for: src/FirehoseClient.php (+23 -26)

@@ -21,25 +21,23 @@
 class FirehoseClient extends AbstractApi
 {
     /**
-     * Writes a single data record into an Amazon Kinesis Data Firehose delivery stream. To write multiple data records into
-     * a delivery stream, use PutRecordBatch. Applications using these operations are referred to as producers.
+     * Writes a single data record into an Amazon Firehose delivery stream. To write multiple data records into a delivery
+     * stream, use PutRecordBatch. Applications using these operations are referred to as producers.
      *
      * By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB
      * per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each
-     * delivery stream. For more information about limits and how to request an increase, see Amazon Kinesis Data Firehose
-     * Limits [^1].
+     * delivery stream. For more information about limits and how to request an increase, see Amazon Firehose Limits [^1].
      *
-     * Kinesis Data Firehose accumulates and publishes a particular metric for a customer account in one minute intervals.
-     * It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds.
-     * Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch
-     * metrics.
+     * Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible
+     * that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+     * actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics.
      *
      * You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
      * of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log
      * file, geographic location data, website clickstream data, and so on.
      *
-     * Kinesis Data Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at
-     * the destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
+     * Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
+     * destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
      * unique within the data. This allows the consumer application to parse individual data items when reading the data
      * from the destination.
      *
@@ -53,9 +51,9 @@ class FirehoseClient extends AbstractApi
      * Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
      * larger data assets, allow for a longer time out before retrying Put API operations.
      *
-     * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery stream
-     * as it tries to send the records to the destination. If the destination is unreachable for more than 24 hours, the
-     * data is no longer available.
+     * Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it tries
+     * to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no
+     * longer available.
      *
      * ! Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw
      * ! data, then perform base64 encoding.
@@ -96,12 +94,11 @@ public function putRecord($input): PutRecordOutput
      * producer than when writing single records. To write single data records into a delivery stream, use PutRecord.
      * Applications using these operations are referred to as producers.
      *
-     * Kinesis Data Firehose accumulates and publishes a particular metric for a customer account in one minute intervals.
-     * It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds.
-     * Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch
-     * metrics.
+     * Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible
+     * that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+     * actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics.
      *
-     * For information about service quota, see Amazon Kinesis Data Firehose Quota [^1].
+     * For information about service quota, see Amazon Firehose Quota [^1].
      *
      * Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB
      * (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed.
@@ -110,8 +107,8 @@ public function putRecord($input): PutRecordOutput
      * of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a segment from a
      * log file, geographic location data, website clickstream data, and so on.
      *
-     * Kinesis Data Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at
-     * the destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
+     * Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
+     * destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
      * unique within the data. This allows the consumer application to parse individual data items when reading the data
      * from the destination.
      *
@@ -120,9 +117,9 @@ public function putRecord($input): PutRecordOutput
      * indicating that there are records for which the operation didn't succeed. Each entry in the `RequestResponses` array
      * provides additional information about the processed record. It directly correlates with a record in the request array
      * using the same ordering, from the top to the bottom. The response array always includes the same number of records as
-     * the request array. `RequestResponses` includes both successfully and unsuccessfully processed records. Kinesis Data
-     * Firehose tries to process all records in each PutRecordBatch request. A single record failure does not stop the
-     * processing of subsequent records.
+     * the request array. `RequestResponses` includes both successfully and unsuccessfully processed records. Firehose tries
+     * to process all records in each PutRecordBatch request. A single record failure does not stop the processing of
+     * subsequent records.
      *
      * A successfully processed record includes a `RecordId` value, which is unique for the record. An unsuccessfully
      * processed record includes `ErrorCode` and `ErrorMessage` values. `ErrorCode` reflects the type of error, and is one
@@ -140,9 +137,9 @@ public function putRecord($input): PutRecordOutput
      * Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
      * larger data assets, allow for a longer time out before retrying Put API operations.
      *
-     * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery stream
-     * as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the
-     * data is no longer available.
+     * Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it
+     * attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data
+     * is no longer available.
      *
      * ! Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw
      * ! data, then perform base64 encoding.
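The docblocks in this diff describe two generic producer techniques: delimiting records with a newline so the consumer can split the buffered blob back apart, and base64-encoding concatenated raw data rather than concatenating base64 strings. A minimal illustration in plain Python (no AWS SDK involved; the record contents are invented for the example):

```python
import base64

# Frame each record with a trailing newline so a consumer reading the
# buffered blob from the destination can split it back into records.
records = ['{"id": 1}', '{"id": 2}', '{"id": 3}']
blob = "".join(r + "\n" for r in records).encode()

# The consumer recovers the individual records by splitting on the delimiter.
parsed = blob.decode().splitlines()
assert parsed == records

# Concatenating base64 strings is NOT the same as base64-encoding the
# concatenated raw data: padding characters end up mid-stream and the
# result no longer decodes to the joined payload.
a, b = b"hello", b"world"
wrong = base64.b64encode(a) + base64.b64encode(b)
right = base64.b64encode(a + b)
assert wrong != right
assert base64.b64decode(right) == b"helloworld"
```

This is why the docblock warns to concatenate the raw data first and perform base64 encoding once, on the result.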
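Per the PutRecordBatch description above, `RequestResponses` mirrors the request array one-to-one: successful entries carry a `RecordId`, failed ones carry `ErrorCode` and `ErrorMessage`, and a single failure does not stop processing of subsequent records. A sketch of the usual collect-and-retry pattern in plain Python; the `failed_records` helper and the sample response dicts are hypothetical, not part of the AsyncAws client:

```python
def failed_records(records, request_responses):
    """Pair each submitted record with its response entry (same ordering,
    per the PutRecordBatch contract) and keep only the entries that carry
    an ErrorCode, so they can be resubmitted."""
    assert len(records) == len(request_responses)
    return [rec for rec, resp in zip(records, request_responses)
            if "ErrorCode" in resp]

# Hypothetical response: record 2 was throttled, the others succeeded.
records = [b"r1", b"r2", b"r3"]
responses = [
    {"RecordId": "id-1"},
    {"ErrorCode": "ServiceUnavailableException", "ErrorMessage": "throttled"},
    {"RecordId": "id-3"},
]
assert failed_records(records, responses) == [b"r2"]
```

Resubmitting only the failed subset (with backoff, per the ServiceUnavailableException docblock) avoids duplicating the records that already succeeded.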
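Given the stated limits (up to 500 records and up to 4 MB per PutRecordBatch request, with each record capped around 1,000 KB), a producer typically chunks its records before sending. A greedy-chunking sketch under those assumptions; the docblock mixes "KB" and "KiB", so the exact byte constants here are illustrative:

```python
MAX_RECORDS = 500                   # records per PutRecordBatch request
MAX_RECORD_BYTES = 1000 * 1024      # ~1,000 KB per record, before base64
MAX_BATCH_BYTES = 4 * 1024 * 1024   # ~4 MB per request

def chunk_records(records):
    """Greedily split records into batches that respect the documented
    PutRecordBatch limits on record count and total request size."""
    batches, batch, size = [], [], 0
    for rec in records:
        if len(rec) > MAX_RECORD_BYTES:
            raise ValueError("record exceeds the per-record size limit")
        if len(batch) == MAX_RECORDS or size + len(rec) > MAX_BATCH_BYTES:
            batches.append(batch)
            batch, size = [], 0
        batch.append(rec)
        size += len(rec)
    if batch:
        batches.append(batch)
    return batches

# 1,200 small records split into batches of 500, 500, and 200.
batches = chunk_records([b"x" * 100] * 1200)
assert [len(b) for b in batches] == [500, 500, 200]
```

Since the limits cannot be raised per the docblock, chunking on the client side is the only way to submit more than one request's worth of records.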
