Commit 0d31e74

Update generated code (#1790)
update generated code
1 parent 1348024 commit 0d31e74

8 files changed (+34 −20 lines changed)

CHANGELOG.md (+4)

@@ -2,6 +2,10 @@

 ## NOT RELEASED

+### Changed
+
+- AWS enhancement: Documentation updates.
+
 ## 1.3.2

 ### Changed

src/Exception/InvalidKMSResourceException.php (+1 −1)

@@ -5,7 +5,7 @@
 use AsyncAws\Core\Exception\Http\ClientException;

 /**
- * Firehose throws this exception when an attempt to put records or to start or stop delivery stream encryption fails.
+ * Firehose throws this exception when an attempt to put records or to start or stop Firehose stream encryption fails.
  * This happens when the KMS service throws one of the following exception types: `AccessDeniedException`,
  * `InvalidStateException`, `DisabledException`, or `NotFoundException`.
  */

src/Exception/ServiceUnavailableException.php (+1 −1)

@@ -6,7 +6,7 @@

 /**
  * The service is unavailable. Back off and retry the operation. If you continue to see the exception, throughput limits
- * for the delivery stream may have been exceeded. For more information about limits and how to request an increase, see
+ * for the Firehose stream may have been exceeded. For more information about limits and how to request an increase, see
  * Amazon Firehose Limits [^1].
  *
  * [^1]: https://docs.aws.amazon.com/firehose/latest/dev/limits.html

src/FirehoseClient.php (+24 −14)

@@ -21,21 +21,26 @@
 class FirehoseClient extends AbstractApi
 {
     /**
-     * Writes a single data record into an Amazon Firehose delivery stream. To write multiple data records into a delivery
-     * stream, use PutRecordBatch. Applications using these operations are referred to as producers.
+     * Writes a single data record into an Firehose stream. To write multiple data records into a Firehose stream, use
+     * PutRecordBatch. Applications using these operations are referred to as producers.
      *
-     * By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB
+     * By default, each Firehose stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB
      * per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each
-     * delivery stream. For more information about limits and how to request an increase, see Amazon Firehose Limits [^1].
+     * Firehose stream. For more information about limits and how to request an increase, see Amazon Firehose Limits [^1].
      *
      * Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible
-     * that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+     * that the bursts of incoming bytes/records ingested to a Firehose stream last only for a few seconds. Due to this, the
      * actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics.
      *
-     * You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
+     * You must specify the name of the Firehose stream and the data record when using PutRecord. The data record consists
      * of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log
      * file, geographic location data, website clickstream data, and so on.
      *
+     * For multi record de-aggregation, you can not put more than 500 records even if the data blob length is less than 1000
+     * KiB. If you include more than 500 records, the request succeeds but the record de-aggregation doesn't work as
+     * expected and transformation lambda is invoked with the complete base64 encoded data blob instead of de-aggregated
+     * base64 decoded records.
+     *
      * Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
      * destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
      * unique within the data. This allows the consumer application to parse individual data items when reading the data
@@ -45,13 +50,13 @@ class FirehoseClient extends AbstractApi
      * applications can use this ID for purposes such as auditability and investigation.
      *
      * If the `PutRecord` operation throws a `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3
-     * times. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery
+     * times. If the exception persists, it is possible that the throughput limits have been exceeded for the Firehose
      * stream.
      *
      * Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
      * larger data assets, allow for a longer time out before retrying Put API operations.
      *
-     * Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it tries
+     * Data records sent to Firehose are stored for 24 hours from the time they are added to a Firehose stream as it tries
      * to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no
      * longer available.
      *
@@ -90,23 +95,28 @@ public function putRecord($input): PutRecordOutput
     }

     /**
-     * Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per
-     * producer than when writing single records. To write single data records into a delivery stream, use PutRecord.
+     * Writes multiple data records into a Firehose stream in a single call, which can achieve higher throughput per
+     * producer than when writing single records. To write single data records into a Firehose stream, use PutRecord.
      * Applications using these operations are referred to as producers.
      *
      * Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible
-     * that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+     * that the bursts of incoming bytes/records ingested to a Firehose stream last only for a few seconds. Due to this, the
      * actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics.
      *
      * For information about service quota, see Amazon Firehose Quota [^1].
      *
      * Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB
      * (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed.
      *
-     * You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
+     * You must specify the name of the Firehose stream and the data record when using PutRecord. The data record consists
      * of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a segment from a
      * log file, geographic location data, website clickstream data, and so on.
      *
+     * For multi record de-aggregation, you can not put more than 500 records even if the data blob length is less than 1000
+     * KiB. If you include more than 500 records, the request succeeds but the record de-aggregation doesn't work as
+     * expected and transformation lambda is invoked with the complete base64 encoded data blob instead of de-aggregated
+     * base64 decoded records.
+     *
      * Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
      * destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
      * unique within the data. This allows the consumer application to parse individual data items when reading the data
@@ -132,12 +142,12 @@ public function putRecord($input): PutRecordOutput
      * charges). We recommend that you handle any duplicates at the destination.
      *
      * If PutRecordBatch throws `ServiceUnavailableException`, the API is automatically reinvoked (retried) 3 times. If the
-     * exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.
+     * exception persists, it is possible that the throughput limits have been exceeded for the Firehose stream.
      *
      * Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
      * larger data assets, allow for a longer time out before retrying Put API operations.
      *
-     * Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it
+     * Data records sent to Firehose are stored for 24 hours from the time they are added to a Firehose stream as it
      * attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data
      * is no longer available.
      *
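The docblocks above describe producer usage of PutRecord, including the newline-delimiter convention for disambiguating buffered blobs. A minimal producer sketch against this client (the stream name and payload are hypothetical; assumes async-aws/firehose is installed and AWS credentials are resolvable):

```php
<?php

use AsyncAws\Firehose\FirehoseClient;
use AsyncAws\Firehose\Input\PutRecordInput;
use AsyncAws\Firehose\ValueObject\Record;

$firehose = new FirehoseClient();

// Note: the wire-level parameter is still named DeliveryStreamName even
// though the documentation now says "Firehose stream".
$result = $firehose->putRecord(new PutRecordInput([
    'DeliveryStreamName' => 'my-stream', // hypothetical stream name
    // Append a newline delimiter so consumers can split the buffered
    // blobs at the destination, as the docblock recommends.
    'Record' => new Record(['Data' => json_encode(['event' => 'click']) . "\n"]),
]));

// Each accepted record receives a unique ID usable for auditing.
echo $result->getRecordId(), "\n";
```

AsyncAws results resolve lazily, so the HTTP call completes when the first getter is invoked.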

src/Input/PutRecordBatchInput.php (+1 −1)

@@ -11,7 +11,7 @@
 final class PutRecordBatchInput extends Input
 {
     /**
-     * The name of the delivery stream.
+     * The name of the Firehose stream.
      *
      * @required
      *

src/Input/PutRecordInput.php (+1 −1)

@@ -11,7 +11,7 @@
 final class PutRecordInput extends Input
 {
     /**
-     * The name of the delivery stream.
+     * The name of the Firehose stream.
      *
      * @required
      *

src/ValueObject/PutRecordBatchResponseEntry.php (+1 −1)

@@ -4,7 +4,7 @@

 /**
  * Contains the result for an individual record from a PutRecordBatch request. If the record is successfully added to
- * your delivery stream, it receives a record ID. If the record fails to be added to your delivery stream, the result
+ * your Firehose stream, it receives a record ID. If the record fails to be added to your Firehose stream, the result
  * includes an error code and an error message.
  */
 final class PutRecordBatchResponseEntry
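Because each PutRecordBatchResponseEntry carries either a record ID or an error code, a PutRecordBatch caller typically walks the entries to retry only the failed records. A hedged sketch (getter names follow the generated AsyncAws client; the stream name and records are hypothetical):

```php
<?php

use AsyncAws\Firehose\FirehoseClient;

$firehose = new FirehoseClient();

$result = $firehose->putRecordBatch([
    'DeliveryStreamName' => 'my-stream', // hypothetical stream name
    'Records' => [
        ['Data' => "first\n"],
        ['Data' => "second\n"],
    ],
]);

// Entries map 1:1 (by index) to the records that were sent.
if ($result->getFailedPutCount() > 0) {
    foreach ($result->getRequestResponses() as $i => $entry) {
        if (null !== $entry->getErrorCode()) {
            // Re-send only this record; retrying the whole batch
            // would duplicate the records that already succeeded.
            echo "record {$i} failed: {$entry->getErrorMessage()}\n";
        }
    }
}
```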

src/ValueObject/Record.php (+1 −1)

@@ -5,7 +5,7 @@
 use AsyncAws\Core\Exception\InvalidArgument;

 /**
- * The unit of data in a delivery stream.
+ * The unit of data in a Firehose stream.
  */
 final class Record
 {
