
Commit 44c046c

Doc only update for API descriptions.
Parent: 2af3aa9

14 files changed (+348, -195 lines)

generator/ServiceModels/dynamodb/dynamodb-2011-12-05.endpoint-tests.json

Lines changed: 90 additions & 90 deletions
Large diffs are not rendered by default.

generator/ServiceModels/dynamodb/dynamodb-2012-08-10.docs.json

Lines changed: 5 additions & 5 deletions
Large diffs are not rendered by default.

generator/ServiceModels/dynamodb/dynamodb-2012-08-10.endpoint-tests.json

Lines changed: 90 additions & 90 deletions
Large diffs are not rendered by default.

generator/ServiceModels/dynamodb/dynamodb-2012-08-10.normal.json

Lines changed: 5 additions & 5 deletions
@@ -43,7 +43,7 @@
         {"shape":"RequestLimitExceeded"},
         {"shape":"InternalServerError"}
       ],
-      "documentation":"<p>The <code>BatchGetItem</code> operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.</p> <p>A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. <code>BatchGetItem</code> returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, more than 1MB per partition is requested, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for <code>UnprocessedKeys</code>. You can use this value to retry the operation starting with the next item to get.</p> <important> <p>If you request more than 100 items, <code>BatchGetItem</code> returns a <code>ValidationException</code> with the message \"Too many items requested for the BatchGetItem call.\"</p> </important> <p>For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate <code>UnprocessedKeys</code> value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.</p> <p>If <i>none</i> of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then <code>BatchGetItem</code> returns a <code>ProvisionedThroughputExceededException</code>. If <i>at least one</i> of the items is successfully processed, then <code>BatchGetItem</code> completes successfully, while returning the keys of the unread items in <code>UnprocessedKeys</code>.</p> <important> <p>If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, <i>we strongly recommend that you use an exponential backoff algorithm</i>. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#BatchOperations\">Batch Operations and Error Handling</a> in the <i>Amazon DynamoDB Developer Guide</i>.</p> </important> <p>By default, <code>BatchGetItem</code> performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set <code>ConsistentRead</code> to <code>true</code> for any or all tables.</p> <p>In order to minimize response latency, <code>BatchGetItem</code> may retrieve items in parallel.</p> <p>When designing your application, keep in mind that DynamoDB does not return items in any particular order. To help parse the response by item, include the primary key values for the items in your request in the <code>ProjectionExpression</code> parameter.</p> <p>If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see <a href=\"https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#CapacityUnitCalculations\">Working with Tables</a> in the <i>Amazon DynamoDB Developer Guide</i>.</p>",
+      "documentation":"<p>The <code>BatchGetItem</code> operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.</p> <p>A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. <code>BatchGetItem</code> returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, more than 1MB per partition is requested, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for <code>UnprocessedKeys</code>. You can use this value to retry the operation starting with the next item to get.</p> <important> <p>If you request more than 100 items, <code>BatchGetItem</code> returns a <code>ValidationException</code> with the message \"Too many items requested for the BatchGetItem call.\"</p> </important> <p>For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate <code>UnprocessedKeys</code> value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.</p> <p>If <i>none</i> of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then <code>BatchGetItem</code> returns a <code>ProvisionedThroughputExceededException</code>. If <i>at least one</i> of the items is successfully processed, then <code>BatchGetItem</code> completes successfully, while returning the keys of the unread items in <code>UnprocessedKeys</code>.</p> <important> <p>If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, <i>we strongly recommend that you use an exponential backoff algorithm</i>. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html#BatchOperations\">Batch Operations and Error Handling</a> in the <i>Amazon DynamoDB Developer Guide</i>.</p> </important> <p>By default, <code>BatchGetItem</code> performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set <code>ConsistentRead</code> to <code>true</code> for any or all tables.</p> <p>In order to minimize response latency, <code>BatchGetItem</code> may retrieve items in parallel.</p> <p>When designing your application, keep in mind that DynamoDB does not return items in any particular order. To help parse the response by item, include the primary key values for the items in your request in the <code>ProjectionExpression</code> parameter.</p> <p>If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see <a href=\"https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#CapacityUnitCalculations\">Working with Tables</a> in the <i>Amazon DynamoDB Developer Guide</i>.</p> <note> <p> <code>BatchGetItem</code> will result in a <code>ValidationException</code> if the same key is specified multiple times.</p> </note>",
       "endpointdiscovery":{
       },
       "operationContextParams":{
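The retry guidance above (retry `UnprocessedKeys` with exponential backoff) can be sketched in Python. This is a minimal illustration, not SDK code: `call` stands in for any client's `BatchGetItem`-style operation, and both helper names are invented for the example.

```python
def backoff_delays(max_retries, base=0.05, cap=2.0):
    """Delays for exponential backoff: base * 2**attempt, capped at `cap`.

    Real code would usually add random jitter on top of these values.
    """
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

def batch_get_with_retries(call, request_items, max_retries=5):
    """Retry unprocessed keys, as the BatchGetItem docs recommend.

    `call` stands in for a client's BatchGetItem operation: it takes a
    RequestItems mapping and returns a dict with "Responses" and
    "UnprocessedKeys". A real implementation would sleep for `delay`
    seconds between attempts; the sleep is omitted to keep the sketch pure.
    """
    results = {}
    pending = request_items
    for delay in [0.0] + backoff_delays(max_retries):
        # time.sleep(delay) would go here in real code
        resp = call(pending)
        for table, items in resp.get("Responses", {}).items():
            results.setdefault(table, []).extend(items)
        pending = resp.get("UnprocessedKeys") or {}
        if not pending:
            break
    return results, pending
```

The loop treats a non-empty `UnprocessedKeys` as the next request body, exactly the "retry starting with the next item to get" pattern the documentation describes.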
@@ -1992,7 +1992,7 @@
         },
         "OnDemandThroughput":{
           "shape":"OnDemandThroughput",
-          "documentation":"<p>The maximum number of read and write units for the global secondary index being created. If you use this parameter, you must specify <code>MaxReadRequestUnits</code>, <code>MaxWriteRequestUnits</code>, or both.</p>"
+          "documentation":"<p>The maximum number of read and write units for the global secondary index being created. If you use this parameter, you must specify <code>MaxReadRequestUnits</code>, <code>MaxWriteRequestUnits</code>, or both. You must use either <code>OnDemandThroughput</code> or <code>ProvisionedThroughput</code> based on your table's capacity mode.</p>"
         },
         "WarmThroughput":{
           "shape":"WarmThroughput",
@@ -4604,7 +4604,7 @@
           "documentation":"<p>The maximum number of writes consumed per second before DynamoDB returns a <code>ThrottlingException</code>. For more information, see <a href=\"https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughput.html\">Specifying Read and Write Requirements</a> in the <i>Amazon DynamoDB Developer Guide</i>.</p> <p>If read/write capacity mode is <code>PAY_PER_REQUEST</code> the value is set to 0.</p>"
         }
       },
-      "documentation":"<p>Represents the provisioned throughput settings for a specified table or index. The settings can be modified using the <code>UpdateTable</code> operation.</p> <p>For current minimum and maximum provisioned throughput values, see <a href=\"https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html\">Service, Account, and Table Quotas</a> in the <i>Amazon DynamoDB Developer Guide</i>.</p>"
+      "documentation":"<p>Represents the provisioned throughput settings for the specified global secondary index. You must use <code>ProvisionedThroughput</code> or <code>OnDemandThroughput</code> based on your table's capacity mode.</p> <p>For current minimum and maximum provisioned throughput values, see <a href=\"https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html\">Service, Account, and Table Quotas</a> in the <i>Amazon DynamoDB Developer Guide</i>.</p>"
     },
     "ProvisionedThroughputDescription":{
       "type":"structure",
@@ -6118,10 +6118,10 @@
         },
         "Status":{
           "shape":"TableStatus",
-          "documentation":"<p>Represents warm throughput value of the base table..</p>"
+          "documentation":"<p>Represents the warm throughput value of the base table.</p>"
         }
       },
-      "documentation":"<p>Represents the warm throughput value (in read units per second and write units per second) of the base table.</p>"
+      "documentation":"<p>Represents the warm throughput value (in read units per second and write units per second) of the table. Warm throughput is applicable for DynamoDB Standard-IA tables and specifies the minimum provisioned capacity maintained for immediate data access.</p>"
     },
     "Tag":{
       "type":"structure",

sdk/src/Services/DynamoDBv2/Generated/Model/BatchGetItemRequest.cs

Lines changed: 6 additions & 0 deletions
@@ -100,6 +100,12 @@ namespace Amazon.DynamoDBv2.Model
     /// read. For more information, see <a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#CapacityUnitCalculations">Working
     /// with Tables</a> in the <i>Amazon DynamoDB Developer Guide</i>.
     /// </para>
+    /// <note>
+    /// <para>
+    /// <c>BatchGetItem</c> will result in a <c>ValidationException</c> if the same key is
+    /// specified multiple times.
+    /// </para>
+    /// </note>
     /// </summary>
     public partial class BatchGetItemRequest : AmazonDynamoDBRequest
     {
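The note added here says that `BatchGetItem` rejects a request listing the same key more than once with a `ValidationException`. A defensive caller can deduplicate keys before building the request. A minimal Python sketch (the helper name is invented; keys are shown in DynamoDB attribute-value form):

```python
def dedupe_keys(keys):
    """Drop duplicate primary keys, preserving first-seen order.

    BatchGetItem raises a ValidationException if the same key appears
    more than once in a request, so deduplicate before calling it.
    Key dicts are unhashable, so each key is reduced to a canonical
    tuple of sorted (name, repr(value)) pairs for the seen-set.
    """
    seen = set()
    out = []
    for key in keys:
        marker = tuple(sorted((k, repr(v)) for k, v in key.items()))
        if marker not in seen:
            seen.add(marker)
            out.append(key)
    return out
```

The deduplicated list can then be placed under `RequestItems[table]["Keys"]` as usual.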

sdk/src/Services/DynamoDBv2/Generated/Model/CreateGlobalSecondaryIndexAction.cs

Lines changed: 2 additions & 1 deletion
@@ -84,7 +84,8 @@ internal bool IsSetKeySchema()
     /// <para>
     /// The maximum number of read and write units for the global secondary index being created.
     /// If you use this parameter, you must specify <c>MaxReadRequestUnits</c>, <c>MaxWriteRequestUnits</c>,
-    /// or both.
+    /// or both. You must use either <c>OnDemandThroughput</c> or <c>ProvisionedThroughput</c>
+    /// based on your table's capacity mode.
     /// </para>
     /// </summary>
     public OnDemandThroughput OnDemandThroughput
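The clarified wording ties the throughput parameter to the table's capacity mode: `OnDemandThroughput` for on-demand (`PAY_PER_REQUEST`) tables, `ProvisionedThroughput` for provisioned ones. A Python sketch of building the matching request fragment, assuming only the member names documented above; the helper itself is hypothetical:

```python
def gsi_throughput_settings(billing_mode, *, max_read_units=None,
                            max_write_units=None,
                            read_capacity=None, write_capacity=None):
    """Build the throughput part of a CreateGlobalSecondaryIndexAction.

    Picks OnDemandThroughput or ProvisionedThroughput based on the
    table's billing mode, as the updated documentation requires.
    """
    if billing_mode == "PAY_PER_REQUEST":
        # On-demand mode: MaxReadRequestUnits, MaxWriteRequestUnits, or both.
        od = {}
        if max_read_units is not None:
            od["MaxReadRequestUnits"] = max_read_units
        if max_write_units is not None:
            od["MaxWriteRequestUnits"] = max_write_units
        if not od:
            raise ValueError(
                "specify MaxReadRequestUnits, MaxWriteRequestUnits, or both")
        return {"OnDemandThroughput": od}
    # Provisioned mode: classic read/write capacity units.
    return {"ProvisionedThroughput": {"ReadCapacityUnits": read_capacity,
                                      "WriteCapacityUnits": write_capacity}}
```

Sending the wrong member for the table's mode is what the new doc sentence warns against.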

sdk/src/Services/DynamoDBv2/Generated/Model/ProvisionedThroughput.cs

Lines changed: 3 additions & 2 deletions
@@ -30,8 +30,9 @@
 namespace Amazon.DynamoDBv2.Model
 {
     /// <summary>
-    /// Represents the provisioned throughput settings for a specified table or index. The
-    /// settings can be modified using the <c>UpdateTable</c> operation.
+    /// Represents the provisioned throughput settings for the specified global secondary
+    /// index. You must use <c>ProvisionedThroughput</c> or <c>OnDemandThroughput</c> based
+    /// on your table's capacity mode.
     ///
     ///
     /// <para>

sdk/src/Services/DynamoDBv2/Generated/Model/TableWarmThroughputDescription.cs

Lines changed: 3 additions & 2 deletions
@@ -31,7 +31,8 @@ namespace Amazon.DynamoDBv2.Model
 {
     /// <summary>
     /// Represents the warm throughput value (in read units per second and write units per
-    /// second) of the base table.
+    /// second) of the table. Warm throughput is applicable for DynamoDB Standard-IA tables
+    /// and specifies the minimum provisioned capacity maintained for immediate data access.
     /// </summary>
     public partial class TableWarmThroughputDescription
     {
@@ -61,7 +62,7 @@ internal bool IsSetReadUnitsPerSecond()
     /// <summary>
     /// Gets and sets the property Status.
     /// <para>
-    /// Represents warm throughput value of the base table..
+    /// Represents the warm throughput value of the base table.
     /// </para>
     /// </summary>
     public TableStatus Status
