Updates SDK to v2.1055.0
awstools committed Jan 11, 2022
1 parent 909fbcf commit ecbb9d2
Showing 26 changed files with 776 additions and 172 deletions.
22 changes: 22 additions & 0 deletions .changes/2.1055.0.json
@@ -0,0 +1,22 @@
[
{
"type": "feature",
"category": "EC2",
"description": "EC2 Capacity Reservations now supports RHEL instance platforms (RHEL with SQL Server Standard, RHEL with SQL Server Enterprise, RHEL with SQL Server Web, RHEL with HA, RHEL with HA and SQL Server Standard, RHEL with HA and SQL Server Enterprise)"
},
{
"type": "feature",
"category": "Kendra",
"description": "Amazon Kendra now supports advanced query language and query-less search."
},
{
"type": "feature",
"category": "RDS",
"description": "This release adds the db-proxy event type to support subscribing to RDS Proxy events."
},
{
"type": "feature",
"category": "WorkSpaces",
"description": "Introducing new APIs for Workspaces audio optimization with Amazon Connect: CreateConnectClientAddIn, DescribeConnectClientAddIns, UpdateConnectClientAddIn and DeleteConnectClientAddIn."
}
]
8 changes: 7 additions & 1 deletion CHANGELOG.md
@@ -1,7 +1,13 @@
# Changelog for AWS SDK for JavaScript
<!--LATEST=2.1054.0-->
<!--LATEST=2.1055.0-->
<!--ENTRYINSERT-->

## 2.1055.0
* feature: EC2: EC2 Capacity Reservations now supports RHEL instance platforms (RHEL with SQL Server Standard, RHEL with SQL Server Enterprise, RHEL with SQL Server Web, RHEL with HA, RHEL with HA and SQL Server Standard, RHEL with HA and SQL Server Enterprise)
* feature: Kendra: Amazon Kendra now supports advanced query language and query-less search.
* feature: RDS: This release adds the db-proxy event type to support subscribing to RDS Proxy events.
* feature: WorkSpaces: Introducing new APIs for Workspaces audio optimization with Amazon Connect: CreateConnectClientAddIn, DescribeConnectClientAddIns, UpdateConnectClientAddIn and DeleteConnectClientAddIn.
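The WorkSpaces entry above introduces four Connect client add-in APIs. A minimal sketch of what a request might look like with the updated SDK — the `ResourceId` and `URL` values are placeholders, and the exact parameter names are assumptions based on the API names in the changelog:

```javascript
// Hypothetical request shape for the new CreateConnectClientAddIn API
// shipped in v2.1055.0. All values below are placeholders.
const createAddInParams = {
  ResourceId: 'd-0123456789',                 // placeholder WorkSpaces directory ID
  Name: 'Amazon Connect audio optimization',  // display name for the add-in
  URL: 'https://connect.example.com/ccp-v2'   // placeholder Amazon Connect URL
};

// With the SDK installed (npm install aws-sdk), the call would look like:
//   const AWS = require('aws-sdk');
//   const workspaces = new AWS.WorkSpaces({ region: 'us-east-1' });
//   workspaces.createConnectClientAddIn(createAddInParams, (err, data) => {
//     if (err) console.error(err);
//     else console.log(data);
//   });

console.log(createAddInParams.Name);
```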

## 2.1054.0
* feature: ComputeOptimizer: Adds support for new Compute Optimizer capability that makes it easier for customers to optimize their EC2 instances by leveraging multiple CPU architectures.
* feature: DataBrew: This SDK release adds support for specifying a Bucket Owner for an S3 location.
2 changes: 1 addition & 1 deletion README.md
@@ -29,7 +29,7 @@ For release notes, see the [CHANGELOG](https://github.com/aws/aws-sdk-js/blob/ma
To use the SDK in the browser, simply add the following script tag to your
HTML pages:

<script src="https://sdk.amazonaws.com/js/aws-sdk-2.1054.0.min.js"></script>
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.1055.0.min.js"></script>

You can also build a custom browser SDK with your specified set of AWS services.
This can allow you to reduce the SDK's size, specify different API versions of
26 changes: 13 additions & 13 deletions apis/ce-2017-10-25.normal.json

Large diffs are not rendered by default.

8 changes: 7 additions & 1 deletion apis/ec2-2016-11-15.normal.json
@@ -10716,7 +10716,13 @@
"Windows with SQL Server Web",
"Linux with SQL Server Standard",
"Linux with SQL Server Web",
"Linux with SQL Server Enterprise"
"Linux with SQL Server Enterprise",
"RHEL with SQL Server Standard",
"RHEL with SQL Server Enterprise",
"RHEL with SQL Server Web",
"RHEL with HA",
"RHEL with HA and SQL Server Standard",
"RHEL with HA and SQL Server Enterprise"
]
},
"CapacityReservationOptions": {
34 changes: 19 additions & 15 deletions apis/finspace-data-2020-07-13.normal.json
@@ -575,6 +575,7 @@
},
"activeFromTimestamp": {
"shape": "TimestampEpoch",
"documentation": "<p>Beginning time from which the Changeset is active. The value is determined as Epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.</p>",
"box": true
},
"updatesChangesetId": {
@@ -668,7 +669,7 @@
"members": {
"clientToken": {
"shape": "ClientToken",
"documentation": "<p>A token used to ensure idempotency.</p>",
"documentation": "<p>A token that ensures idempotency. This token expires in 10 minutes.</p>",
"idempotencyToken": true
},
"datasetId": {
@@ -683,11 +684,11 @@
},
"sourceParams": {
"shape": "SourceParams",
"documentation": "<p>Options that define the location of the data being ingested.</p>"
"documentation": "<p>Options that define the location of the data being ingested (<code>s3SourcePath</code>) and the source of the changeset (<code>sourceType</code>).</p> <p>Both <code>s3SourcePath</code> and <code>sourceType</code> are required attributes.</p> <p>Here is an example of how you could specify the <code>sourceParams</code>:</p> <p> <code> \"sourceParams\": { \"s3SourcePath\": \"s3://finspace-landing-us-east-2-bk7gcfvitndqa6ebnvys4d/scratch/wr5hh8pwkpqqkxa4sxrmcw/ingestion/equity.csv\", \"sourceType\": \"S3\" } </code> </p> <p>The S3 path that you specify must allow the FinSpace role access. To do that, you first need to configure the IAM policy on the S3 bucket. For more information, see the <a href=\"https://docs.aws.amazon.com/finspace/latest/data-api/fs-using-the-finspace-api.html#access-s3-buckets\">Loading data from an Amazon S3 Bucket using the FinSpace API</a> section.</p>"
},
"formatParams": {
"shape": "FormatParams",
"documentation": "<p>Options that define the structure of the source file(s) including the format type (<code>formatType</code>), header row (<code>withHeader</code>), data separation character (<code>separator</code>) and the type of compression (<code>compression</code>). </p> <p> <code>formatType</code> is a required attribute and can have the following values: </p> <ul> <li> <p> <code>PARQUET</code> - Parquet source file format.</p> </li> <li> <p> <code>CSV</code> - CSV source file format.</p> </li> <li> <p> <code>JSON</code> - JSON source file format.</p> </li> <li> <p> <code>XML</code> - XML source file format.</p> </li> </ul> <p> For example, you could specify the following for <code>formatParams</code>: <code> \"formatParams\": { \"formatType\": \"CSV\", \"withHeader\": \"true\", \"separator\": \",\", \"compression\":\"None\" } </code> </p>"
"documentation": "<p>Options that define the structure of the source file(s) including the format type (<code>formatType</code>), header row (<code>withHeader</code>), data separation character (<code>separator</code>) and the type of compression (<code>compression</code>). </p> <p> <code>formatType</code> is a required attribute and can have the following values: </p> <ul> <li> <p> <code>PARQUET</code> - Parquet source file format.</p> </li> <li> <p> <code>CSV</code> - CSV source file format.</p> </li> <li> <p> <code>JSON</code> - JSON source file format.</p> </li> <li> <p> <code>XML</code> - XML source file format.</p> </li> </ul> <p>Here is an example of how you could specify the <code>formatParams</code>:</p> <p> <code> \"formatParams\": { \"formatType\": \"CSV\", \"withHeader\": \"true\", \"separator\": \",\", \"compression\":\"None\" } </code> </p> <p>Note that if you only provide <code>formatType</code> as <code>CSV</code>, the rest of the attributes will automatically default to the following CSV values:</p> <p> <code> { \"withHeader\": \"true\", \"separator\": \",\" } </code> </p> <p> For more information about supported file formats, see <a href=\"https://docs.aws.amazon.com/finspace/latest/userguide/supported-data-types.html\">Supported Data Types and File Formats</a> in the FinSpace User Guide.</p>"
}
},
"documentation": "The request for a CreateChangeset operation."
@@ -715,7 +716,7 @@
"members": {
"clientToken": {
"shape": "ClientToken",
"documentation": "<p>A token used to ensure idempotency.</p>",
"documentation": "<p>A token that ensures idempotency. This token expires in 10 minutes.</p>",
"idempotencyToken": true
},
"datasetId": {
@@ -772,7 +773,7 @@
"members": {
"clientToken": {
"shape": "ClientToken",
"documentation": "<p>A token used to ensure idempotency.</p>",
"documentation": "<p>A token that ensures idempotency. This token expires in 10 minutes.</p>",
"idempotencyToken": true
},
"datasetTitle": {
@@ -850,13 +851,15 @@
"members": {
"destinationType": {
"shape": "DataViewDestinationType",
"documentation": "<p>Destination type for a Dataview.</p> <ul> <li> <p> <code>GLUE_TABLE</code> - Glue table destination type.</p> </li> </ul>"
"documentation": "<p>Destination type for a Dataview.</p> <ul> <li> <p> <code>GLUE_TABLE</code> - Glue table destination type.</p> </li> <li> <p> <code>S3</code> - S3 destination type.</p> </li> </ul>"
},
"s3DestinationExportFileFormat": {
"shape": "ExportFileFormat"
"shape": "ExportFileFormat",
"documentation": "<p>Data view export file format.</p> <ul> <li> <p> <code>PARQUET</code> - Parquet export file format.</p> </li> <li> <p> <code>DELIMITED_TEXT</code> - Delimited text export file format.</p> </li> </ul>"
},
"s3DestinationExportFileFormatOptions": {
"shape": "S3DestinationFormatOptions"
"shape": "S3DestinationFormatOptions",
"documentation": "<p>Format Options for S3 Destination type.</p> <p>Here is an example of how you could specify the <code>s3DestinationExportFileFormatOptions</code> </p> <p> <code> { \"header\": \"true\", \"delimiter\": \",\", \"compression\": \"gzip\" }</code> </p>"
}
},
"documentation": "<p>Structure for the Dataview destination type parameters.</p>"
@@ -1078,7 +1081,7 @@
"members": {
"clientToken": {
"shape": "ClientToken",
"documentation": "<p>A token used to ensure idempotency.</p>",
"documentation": "<p>A token that ensures idempotency. This token expires in 10 minutes.</p>",
"idempotencyToken": true,
"location": "querystring",
"locationName": "clientToken"
@@ -1214,6 +1217,7 @@
},
"activeFromTimestamp": {
"shape": "TimestampEpoch",
"documentation": "<p>Beginning time from which the Changeset is active. The value is determined as Epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.</p>",
"box": true
},
"updatesChangesetId": {
@@ -1594,7 +1598,7 @@
"documentation": "<p>List of resource permissions.</p>"
}
},
"documentation": "<p>Permission group parameters for Dataset permissions.</p>"
"documentation": "<p>Permission group parameters for Dataset permissions.</p> <p>Here is an example of how you could specify the <code>PermissionGroupParams</code>:</p> <p> <code> { \"permissionGroupId\": \"0r6fCRtSTUk4XPfXQe3M0g\", \"datasetPermissions\": [ {\"permission\": \"ViewDatasetDetails\"}, {\"permission\": \"AddDatasetData\"}, {\"permission\": \"EditDatasetMetadata\"}, {\"permission\": \"DeleteDataset\"} ] } </code> </p>"
},
"PhoneNumber": {
"type": "string",
@@ -1611,7 +1615,7 @@
"documentation": "<p>Permission for a resource.</p>"
}
},
"documentation": "<p>Resource permission for a Dataset.</p>"
"documentation": "<p>Resource permission for a dataset. When you create a dataset, all the other members of the same user group inherit access to the dataset. You can only create a dataset if your user group has application permission for Create Datasets.</p> <p>The following is a list of valid dataset permissions that you can apply: </p> <ul> <li> <p> <code>ViewDatasetDetails</code> </p> </li> <li> <p> <code>ReadDatasetDetails</code> </p> </li> <li> <p> <code>AddDatasetData</code> </p> </li> <li> <p> <code>CreateSnapshot</code> </p> </li> <li> <p> <code>EditDatasetMetadata</code> </p> </li> <li> <p> <code>DeleteDataset</code> </p> </li> </ul> <p>For more information on dataset permissions, see <a href=\"https://docs.aws.amazon.com/finspace/latest/userguide/managing-user-permissions.html#supported-dataset-permissions\">Supported Dataset Permissions</a> in the FinSpace User Guide.</p>"
},
"ResourcePermissionsList": {
"type": "list",
@@ -1720,7 +1724,7 @@
"members": {
"clientToken": {
"shape": "ClientToken",
"documentation": "<p>A token used to ensure idempotency.</p>",
"documentation": "<p>A token that ensures idempotency. This token expires in 10 minutes.</p>",
"idempotencyToken": true
},
"datasetId": {
@@ -1737,11 +1741,11 @@
},
"sourceParams": {
"shape": "SourceParams",
"documentation": "<p>Options that define the location of the data being ingested.</p>"
"documentation": "<p>Options that define the location of the data being ingested (<code>s3SourcePath</code>) and the source of the changeset (<code>sourceType</code>).</p> <p>Both <code>s3SourcePath</code> and <code>sourceType</code> are required attributes.</p> <p>Here is an example of how you could specify the <code>sourceParams</code>:</p> <p> <code> \"sourceParams\": { \"s3SourcePath\": \"s3://finspace-landing-us-east-2-bk7gcfvitndqa6ebnvys4d/scratch/wr5hh8pwkpqqkxa4sxrmcw/ingestion/equity.csv\", \"sourceType\": \"S3\" } </code> </p> <p>The S3 path that you specify must allow the FinSpace role access. To do that, you first need to configure the IAM policy on the S3 bucket. For more information, see the <a href=\"https://docs.aws.amazon.com/finspace/latest/data-api/fs-using-the-finspace-api.html#access-s3-buckets\">Loading data from an Amazon S3 Bucket using the FinSpace API</a> section.</p>"
},
"formatParams": {
"shape": "FormatParams",
"documentation": "<p>Options that define the structure of the source file(s).</p>"
"documentation": "<p>Options that define the structure of the source file(s) including the format type (<code>formatType</code>), header row (<code>withHeader</code>), data separation character (<code>separator</code>) and the type of compression (<code>compression</code>). </p> <p> <code>formatType</code> is a required attribute and can have the following values: </p> <ul> <li> <p> <code>PARQUET</code> - Parquet source file format.</p> </li> <li> <p> <code>CSV</code> - CSV source file format.</p> </li> <li> <p> <code>JSON</code> - JSON source file format.</p> </li> <li> <p> <code>XML</code> - XML source file format.</p> </li> </ul> <p>Here is an example of how you could specify the <code>formatParams</code>:</p> <p> <code> \"formatParams\": { \"formatType\": \"CSV\", \"withHeader\": \"true\", \"separator\": \",\", \"compression\":\"None\" } </code> </p> <p>Note that if you only provide <code>formatType</code> as <code>CSV</code>, the rest of the attributes will automatically default to the following CSV values:</p> <p> <code> { \"withHeader\": \"true\", \"separator\": \",\" } </code> </p> <p> For more information about supported file formats, see <a href=\"https://docs.aws.amazon.com/finspace/latest/userguide/supported-data-types.html\">Supported Data Types and File Formats</a> in the FinSpace User Guide.</p>"
}
},
"documentation": "Request to update an existing changeset."
@@ -1770,7 +1774,7 @@
"members": {
"clientToken": {
"shape": "ClientToken",
"documentation": "<p>A token used to ensure idempotency.</p>",
"documentation": "<p>A token that ensures idempotency. This token expires in 10 minutes.</p>",
"idempotencyToken": true
},
"datasetId": {
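The `activeFromTimestamp` documentation added in this file specifies epoch time in milliseconds and quotes 1635768000000 for Monday, November 1, 2021 12:00:00 PM UTC. That value can be reproduced with plain `Date` arithmetic, which is a convenient way to build such timestamps client-side:

```javascript
// Date.UTC returns epoch milliseconds directly, matching the unit the
// FinSpace activeFromTimestamp field expects.
// Note: Date.UTC months are zero-based, so November is 10.
const activeFromTimestamp = Date.UTC(2021, 10, 1, 12, 0, 0);
console.log(activeFromTimestamp); // 1635768000000
```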
6 changes: 3 additions & 3 deletions apis/iotevents-data-2018-10-23.normal.json
@@ -125,7 +125,7 @@
"shape": "ThrottlingException"
}
],
"documentation": "<p>Sends a set of messages to the AWS IoT Events system. Each message payload is transformed into the input you specify (<code>\"inputName\"</code>) and ingested into any detectors that monitor that input. If multiple messages are sent, the order in which the messages are processed isn't guaranteed. To guarantee ordering, you must send messages one at a time and wait for a successful response.</p>"
"documentation": "<p>Sends a set of messages to the IoT Events system. Each message payload is transformed into the input you specify (<code>\"inputName\"</code>) and ingested into any detectors that monitor that input. If multiple messages are sent, the order in which the messages are processed isn't guaranteed. To guarantee ordering, you must send messages one at a time and wait for a successful response.</p>"
},
"BatchResetAlarm": {
"name": "BatchResetAlarm",
@@ -1408,7 +1408,7 @@
},
"timestamp": {
"shape": "Timestamp",
"documentation": "<p>The number of seconds which have elapsed on the timer.</p>"
"documentation": "<p>The expiration time for the timer.</p>"
}
},
"documentation": "<p>The current state of a timer.</p>"
@@ -1561,5 +1561,5 @@
}
}
},
"documentation": "<p>AWS IoT Events monitors your equipment or device fleets for failures or changes in operation, and triggers actions when such events occur. You can use AWS IoT Events Data API commands to send inputs to detectors, list detectors, and view or update a detector's status.</p> <p> For more information, see <a href=\"https://docs.aws.amazon.com/iotevents/latest/developerguide/what-is-iotevents.html\">What is AWS IoT Events?</a> in the <i>AWS IoT Events Developer Guide</i>.</p>"
"documentation": "<p>IoT Events monitors your equipment or device fleets for failures or changes in operation, and triggers actions when such events occur. You can use IoT Events Data API commands to send inputs to detectors, list detectors, and view or update a detector's status.</p> <p> For more information, see <a href=\"https://docs.aws.amazon.com/iotevents/latest/developerguide/what-is-iotevents.html\">What is IoT Events?</a> in the <i>IoT Events Developer Guide</i>.</p>"
}
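The BatchPutMessage documentation above notes that ordering across multiple messages is not guaranteed: to guarantee ordering, you must send messages one at a time and wait for a successful response. That pattern can be sketched as follows, with `sendMessage` standing in for the actual SDK call (e.g. a single-message `batchPutMessage` request):

```javascript
// Ordering sketch: await each send before starting the next, as the
// BatchPutMessage docs recommend. sendMessage is a stand-in for the real
// IoT Events Data SDK call.
async function sendInOrder(messages, sendMessage) {
  const results = [];
  for (const message of messages) {
    // Wait for a successful response before sending the next message.
    results.push(await sendMessage(message));
  }
  return results;
}

// Demo with a stub that records the order in which messages arrive.
const received = [];
const stub = async (m) => { received.push(m.messageId); return { ok: true }; };
sendInOrder([{ messageId: 'm1' }, { messageId: 'm2' }, { messageId: 'm3' }], stub)
  .then(() => console.log(received.join(','))); // m1,m2,m3
```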
13 changes: 11 additions & 2 deletions apis/kendra-2019-02-03.min.json
@@ -1563,8 +1563,7 @@
"input": {
"type": "structure",
"required": [
"IndexId",
"QueryText"
"IndexId"
],
"members": {
"IndexId": {},
@@ -1728,6 +1727,16 @@
},
"TotalNumberOfResults": {
"type": "integer"
},
"Warnings": {
"type": "list",
"member": {
"type": "structure",
"members": {
"Message": {},
"Code": {}
}
}
}
}
}
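The Kendra diff above drops `QueryText` from the required inputs and adds a `Warnings` list to the response, matching the query-less search feature in the changelog. A hedged sketch of what a query-less request might look like — the index ID is a placeholder:

```javascript
// With v2.1055.0, QueryText is no longer a required member of the Kendra
// Query request, enabling query-less search. The IndexId is a placeholder.
const queryParams = {
  IndexId: '12345678-1234-1234-1234-123456789012' // placeholder index ID
  // QueryText intentionally omitted (query-less search)
};

// With the SDK installed, the call would look like:
//   const AWS = require('aws-sdk');
//   const kendra = new AWS.Kendra({ region: 'us-east-1' });
//   kendra.query(queryParams, (err, data) => {
//     // data.Warnings (new in this release) may describe query issues
//   });

console.log(Object.keys(queryParams));
```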