class FirehoseClient extends AbstractApi
{
/**
- * Writes a single data record into an Amazon Kinesis Data Firehose delivery stream. To write multiple data records into
- * a delivery stream, use PutRecordBatch. Applications using these operations are referred to as producers.
+ * Writes a single data record into an Amazon Firehose delivery stream. To write multiple data records into a delivery
+ * stream, use PutRecordBatch. Applications using these operations are referred to as producers.
*
* By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB
* per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each
- * delivery stream. For more information about limits and how to request an increase, see Amazon Kinesis Data Firehose
- * Limits [^1].
+ * delivery stream. For more information about limits and how to request an increase, see Amazon Firehose Limits [^1].
*
- * Kinesis Data Firehose accumulates and publishes a particular metric for a customer account in one minute intervals.
- * It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds.
- * Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch
- * metrics.
+ * Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible
+ * that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+ * actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics.
*
* You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists
* of a data blob that can be up to 1,000 KiB in size, and any kind of data. For example, it can be a segment from a log
* file, geographic location data, website clickstream data, and so on.
*
- * Kinesis Data Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at
- * the destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
+ * Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
+ * destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
* unique within the data. This allows the consumer application to parse individual data items when reading the data
* from the destination.
*
@@ -53,9 +51,9 @@ class FirehoseClient extends AbstractApi
* Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
* larger data assets, allow for a longer time out before retrying Put API operations.
*
- * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery stream
- * as it tries to send the records to the destination. If the destination is unreachable for more than 24 hours, the
- * data is no longer available.
+ * Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it tries
+ * to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no
+ * longer available.
*
* ! Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw
* ! data, then perform base64 encoding.
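Illustration (not part of the diff): a minimal PutRecord sketch in the spirit of the doc block above. The stream name and payload are hypothetical, the `AsyncAws\Firehose` namespace is assumed from the package layout, and the trailing newline is the delimiter the docs recommend for disambiguating blobs at the destination.

```php
use AsyncAws\Firehose\FirehoseClient;

$firehose = new FirehoseClient();

$result = $firehose->putRecord([
    'DeliveryStreamName' => 'my-delivery-stream', // hypothetical stream name
    'Record' => [
        // Append a newline so the consumer application can split individual
        // records when reading from the destination, as the doc block suggests.
        'Data' => json_encode(['event' => 'click', 'ts' => time()]) . "\n",
    ],
]);

// Every successfully ingested record is assigned a unique RecordId.
echo $result->getRecordId();
```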
@@ -96,12 +94,11 @@ public function putRecord($input): PutRecordOutput
* producer than when writing single records. To write single data records into a delivery stream, use PutRecord.
* Applications using these operations are referred to as producers.
*
- * Kinesis Data Firehose accumulates and publishes a particular metric for a customer account in one minute intervals.
- * It is possible that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds.
- * Due to this, the actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch
- * metrics.
+ * Firehose accumulates and publishes a particular metric for a customer account in one minute intervals. It is possible
+ * that the bursts of incoming bytes/records ingested to a delivery stream last only for a few seconds. Due to this, the
+ * actual spikes in the traffic might not be fully visible in the customer's 1 minute CloudWatch metrics.
*
- * For information about service quota, see Amazon Kinesis Data Firehose Quota [^1].
+ * For information about service quota, see Amazon Firehose Quota [^1].
*
* Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB
* (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed.
@@ -110,8 +107,8 @@ public function putRecord($input): PutRecordOutput
* of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a segment from a
* log file, geographic location data, website clickstream data, and so on.
*
- * Kinesis Data Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at
- * the destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
+ * Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the
+ * destination, a common solution is to use delimiters in the data, such as a newline (`\n`) or some other character
* unique within the data. This allows the consumer application to parse individual data items when reading the data
* from the destination.
*
@@ -120,9 +117,9 @@ public function putRecord($input): PutRecordOutput
* indicating that there are records for which the operation didn't succeed. Each entry in the `RequestResponses` array
* provides additional information about the processed record. It directly correlates with a record in the request array
* using the same ordering, from the top to the bottom. The response array always includes the same number of records as
- * the request array. `RequestResponses` includes both successfully and unsuccessfully processed records. Kinesis Data
- * Firehose tries to process all records in each PutRecordBatch request. A single record failure does not stop the
- * processing of subsequent records.
+ * the request array. `RequestResponses` includes both successfully and unsuccessfully processed records. Firehose tries
+ * to process all records in each PutRecordBatch request. A single record failure does not stop the processing of
+ * subsequent records.
*
* A successfully processed record includes a `RecordId` value, which is unique for the record. An unsuccessfully
* processed record includes `ErrorCode` and `ErrorMessage` values. `ErrorCode` reflects the type of error, and is one
@@ -140,9 +137,9 @@ public function putRecord($input): PutRecordOutput
* Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For
* larger data assets, allow for a longer time out before retrying Put API operations.
*
- * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery stream
- * as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the
- * data is no longer available.
+ * Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it
+ * attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data
+ * is no longer available.
*
* ! Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw
* ! data, then perform base64 encoding.
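Illustration (not part of the diff): a PutRecordBatch sketch showing the partial-failure handling the doc block describes. The stream name and payloads are hypothetical; the getter names are assumed from the AWS response shape (`FailedPutCount`, `RequestResponses`, `ErrorCode`), which async-aws mirrors as `getX()` accessors.

```php
use AsyncAws\Firehose\FirehoseClient;

// Hypothetical payloads; each record carries its own newline delimiter.
$records = [];
foreach (['a', 'b', 'c'] as $payload) {
    $records[] = ['Data' => $payload . "\n"];
}

$firehose = new FirehoseClient();
$result = $firehose->putRecordBatch([
    'DeliveryStreamName' => 'my-delivery-stream',
    'Records' => $records,
]);

// RequestResponses mirrors the request array one-to-one, so failed entries
// can be collected by index and resent in a follow-up batch.
if ($result->getFailedPutCount() > 0) {
    $retry = [];
    foreach ($result->getRequestResponses() as $i => $entry) {
        if (null !== $entry->getErrorCode()) {
            // ErrorCode/ErrorMessage describe why this record failed.
            $retry[] = $records[$i];
        }
    }
    // Note: resending records can introduce duplicates, as the doc block
    // warns, so consumers should tolerate re-delivery.
}
```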