Updates SDK to v2.788.0
awstools committed Nov 9, 2020
1 parent af5e87d commit 482cd91
Showing 45 changed files with 4,952 additions and 1,407 deletions.
52 changes: 52 additions & 0 deletions .changes/2.788.0.json
@@ -0,0 +1,52 @@
[
{
"type": "feature",
"category": "DataSync",
"description": "DataSync now enables customers to adjust the network bandwidth used by a running AWS DataSync task."
},
{
"type": "feature",
"category": "DynamoDB",
"description": "This release adds support for exporting Amazon DynamoDB table data to Amazon S3 to perform analytics at any scale."
},
{
"type": "feature",
"category": "ECS",
"description": "This release provides native support for specifying Amazon FSx for Windows File Server file systems as volumes in your Amazon ECS task definitions."
},
{
"type": "feature",
"category": "ES",
"description": "Adding support for package versioning in Amazon Elasticsearch Service"
},
{
"type": "feature",
"category": "FSx",
"description": "This release adds support for creating DNS aliases for Amazon FSx for Windows File Server, and using AWS Backup to automate scheduled, policy-driven backup plans for Amazon FSx file systems."
},
{
"type": "feature",
"category": "IoTAnalytics",
"description": "AWS IoT Analytics now supports Late Data Notifications for datasets, dataset content creation using previous version IDs, and includes the LastMessageArrivalTime attribute for channels and datastores."
},
{
"type": "feature",
"category": "Macie2",
"description": "Sensitive data findings in Amazon Macie now include enhanced location data for Apache Avro object containers and Apache Parquet files."
},
{
"type": "feature",
"category": "S3",
"description": "S3 Intelligent-Tiering adds support for Archive and Deep Archive Access tiers; S3 Replication adds replication metrics and failure notifications, brings feature parity for delete marker replication"
},
{
"type": "feature",
"category": "SSM",
"description": "Adds a new filter that allows customers to filter automation executions by the resource group used to execute the automation"
},
{
"type": "feature",
"category": "StorageGateway",
"description": "Added bandwidth rate limit schedule for Tape and Volume Gateways"
}
]
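The DataSync entry above corresponds to the new `UpdateTaskExecution` operation added to the API model in this commit. A minimal sketch of driving it from the v2 SDK, assuming the generated `updateTaskExecution` client method; the ARN below is a placeholder:

```javascript
// Sketch of the new DataSync UpdateTaskExecution operation introduced in this
// release. The parameter names (TaskExecutionArn, Options.BytesPerSecond) come
// from the API model in this commit; the ARN values are placeholders.
function buildUpdateTaskExecutionParams(taskExecutionArn, bytesPerSecond) {
  // BytesPerSecond is the only Option UpdateTaskExecution accepts; -1 means
  // "use all available bandwidth".
  if (!Number.isInteger(bytesPerSecond) || (bytesPerSecond < 1 && bytesPerSecond !== -1)) {
    throw new RangeError('bytesPerSecond must be a positive integer or -1');
  }
  return {
    TaskExecutionArn: taskExecutionArn,
    Options: { BytesPerSecond: bytesPerSecond }
  };
}

// With the v2 SDK you would pass these params to the generated client method:
//   const AWS = require('aws-sdk');
//   const datasync = new AWS.DataSync({ apiVersion: '2018-11-09' });
//   datasync
//     .updateTaskExecution(buildUpdateTaskExecutionParams(arn, 1048576))
//     .promise();
```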
14 changes: 13 additions & 1 deletion CHANGELOG.md
@@ -1,7 +1,19 @@
# Changelog for AWS SDK for JavaScript
-<!--LATEST=2.787.0-->
+<!--LATEST=2.788.0-->
<!--ENTRYINSERT-->

## 2.788.0
* feature: DataSync: DataSync now enables customers to adjust the network bandwidth used by a running AWS DataSync task.
* feature: DynamoDB: This release adds support for exporting Amazon DynamoDB table data to Amazon S3 to perform analytics at any scale.
* feature: ECS: This release provides native support for specifying Amazon FSx for Windows File Server file systems as volumes in your Amazon ECS task definitions.
* feature: ES: Adding support for package versioning in Amazon Elasticsearch Service
* feature: FSx: This release adds support for creating DNS aliases for Amazon FSx for Windows File Server, and using AWS Backup to automate scheduled, policy-driven backup plans for Amazon FSx file systems.
* feature: IoTAnalytics: AWS IoT Analytics now supports Late Data Notifications for datasets, dataset content creation using previous version IDs, and includes the LastMessageArrivalTime attribute for channels and datastores.
* feature: Macie2: Sensitive data findings in Amazon Macie now include enhanced location data for Apache Avro object containers and Apache Parquet files.
* feature: S3: S3 Intelligent-Tiering adds support for Archive and Deep Archive Access tiers; S3 Replication adds replication metrics and failure notifications, brings feature parity for delete marker replication
* feature: SSM: Adds a new filter that allows customers to filter automation executions by the resource group used to execute the automation
* feature: StorageGateway: Added bandwidth rate limit schedule for Tape and Volume Gateways
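The DynamoDB export feature above maps to the `ExportTableToPointInTime` operation. A hedged sketch of building its request for the v2 SDK — the operation name and the `TableArn`, `S3Bucket`, `S3Prefix`, and `ExportFormat` parameters follow the DynamoDB API of this period; the ARN and bucket names are placeholders:

```javascript
// Hedged sketch of the new DynamoDB export-to-S3 feature. ExportFormat accepts
// DYNAMODB_JSON or ION; the table ARN and bucket below are placeholders.
function buildExportParams(tableArn, bucket, prefix) {
  return {
    TableArn: tableArn,
    S3Bucket: bucket,
    S3Prefix: prefix,
    ExportFormat: 'DYNAMODB_JSON'
  };
}

// const AWS = require('aws-sdk');
// const ddb = new AWS.DynamoDB();
// ddb.exportTableToPointInTime(
//   buildExportParams('arn:aws:dynamodb:us-east-1:111122223333:table/Music',
//                     'my-export-bucket', 'exports/music/')
// ).promise();
```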

## 2.787.0
* feature: DLM: Amazon Data Lifecycle Manager now supports the creation and retention of EBS-backed Amazon Machine Images
* feature: EC2: Network card support with four new attributes: NetworkCardIndex, NetworkPerformance, DefaultNetworkCardIndex, and MaximumNetworkInterfaces, added to the DescribeInstanceTypes API.
2 changes: 1 addition & 1 deletion README.md
@@ -25,7 +25,7 @@ version.
To use the SDK in the browser, simply add the following script tag to your
HTML pages:

-<script src="https://sdk.amazonaws.com/js/aws-sdk-2.787.0.min.js"></script>
+<script src="https://sdk.amazonaws.com/js/aws-sdk-2.788.0.min.js"></script>
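Once that script tag loads, the SDK is exposed as the global `AWS` object. A minimal usage sketch — the region, bucket, and credential setup below are placeholders, not part of this README:

```javascript
// Minimal sketch of using the SDK after the script tag above has loaded and
// exposed the global `AWS` object. Region and bucket names are placeholders.
function makeS3Config(region) {
  return { region: region, apiVersion: '2006-03-01' };
}

// In the browser (after configuring credentials):
//   var s3 = new AWS.S3(makeS3Config('us-east-1'));
//   s3.listObjectsV2({ Bucket: 'my-bucket' }, function (err, data) {
//     if (!err) console.log(data.Contents.length);
//   });
```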

You can also build a custom browser SDK with your specified set of AWS services.
This can allow you to reduce the SDK's size, specify different API versions of
19 changes: 19 additions & 0 deletions apis/datasync-2018-11-09.min.json
@@ -913,6 +913,25 @@
"type": "structure",
"members": {}
}
},
"UpdateTaskExecution": {
"input": {
"type": "structure",
"required": [
"TaskExecutionArn",
"Options"
],
"members": {
"TaskExecutionArn": {},
"Options": {
"shape": "S1o"
}
}
},
"output": {
"type": "structure",
"members": {}
}
}
},
"shapes": {
62 changes: 52 additions & 10 deletions apis/datasync-2018-11-09.normal.json
@@ -695,6 +695,28 @@
}
],
"documentation": "<p>Updates the metadata associated with a task.</p>"
},
"UpdateTaskExecution": {
"name": "UpdateTaskExecution",
"http": {
"method": "POST",
"requestUri": "/"
},
"input": {
"shape": "UpdateTaskExecutionRequest"
},
"output": {
"shape": "UpdateTaskExecutionResponse"
},
"errors": [
{
"shape": "InvalidRequestException"
},
{
"shape": "InternalException"
}
],
"documentation": "<p>Updates execution of a task.</p> <p>You can modify bandwidth throttling for a task execution that is running or queued. For more information, see <a href=\"https://docs.aws.amazon.com/datasync/latest/working-with-task-executions.html#adjust-bandwidth-throttling\">Adjusting Bandwidth Throttling for a Task Execution</a>.</p> <note> <p>The only <code>Option</code> that can be modified by <code>UpdateTaskExecution</code> is <code> <a href=\"https://docs.aws.amazon.com/datasync/latest/userguide/API_Options.html#DataSync-Type-Options-BytesPerSecond\">BytesPerSecond</a> </code>.</p> </note>"
}
},
"shapes": {
@@ -912,7 +934,7 @@
"members": {
"Subdirectory": {
"shape": "NfsSubdirectory",
-"documentation": "<p>The subdirectory in the NFS file system that is used to read data from the NFS source location or write data to the NFS destination. The NFS path should be a path that's exported by the NFS server, or a subdirectory of that path. The path should be such that it can be mounted by other NFS clients in your network. </p> <p>To see all the paths exported by your NFS server. run \"<code>showmount -e nfs-server-name</code>\" from an NFS client that has access to your server. You can specify any directory that appears in the results, and any subdirectory of that directory. Ensure that the NFS export is accessible without Kerberos authentication. </p> <p>To transfer all the data in the folder you specified, DataSync needs to have permissions to read all the data. To ensure this, either configure the NFS export with <code>no_root_squash,</code> or ensure that the permissions for all of the files that you want DataSync allow read access for all users. Doing either enables the agent to read the files. For the agent to access directories, you must additionally enable all execute access.</p> <p>If you are copying data to or from your AWS Snowcone device, see <a href=\"https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#nfs-on-snowcone\">NFS Server on AWS Snowcone</a> for more information.</p> <p>For information about NFS export configuration, see 18.7. The /etc/exports Configuration File in the Red Hat Enterprise Linux documentation.</p>"
+"documentation": "<p>The subdirectory in the NFS file system that is used to read data from the NFS source location or write data to the NFS destination. The NFS path should be a path that's exported by the NFS server, or a subdirectory of that path. The path should be such that it can be mounted by other NFS clients in your network. </p> <p>To see all the paths exported by your NFS server, run \"<code>showmount -e nfs-server-name</code>\" from an NFS client that has access to your server. You can specify any directory that appears in the results, and any subdirectory of that directory. Ensure that the NFS export is accessible without Kerberos authentication. </p> <p>To transfer all the data in the folder you specified, DataSync needs to have permissions to read all the data. To ensure this, either configure the NFS export with <code>no_root_squash,</code> or ensure that the permissions for all of the files that you want DataSync allow read access for all users. Doing either enables the agent to read the files. For the agent to access directories, you must additionally enable all execute access.</p> <p>If you are copying data to or from your AWS Snowcone device, see <a href=\"https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#nfs-on-snowcone\">NFS Server on AWS Snowcone</a> for more information.</p> <p>For information about NFS export configuration, see 18.7. The /etc/exports Configuration File in the Red Hat Enterprise Linux documentation.</p>"
},
"ServerHostname": {
"shape": "ServerHostname",
@@ -1013,18 +1035,18 @@
},
"S3BucketArn": {
"shape": "S3BucketArn",
-"documentation": "<p>The Amazon Resource Name (ARN) of the Amazon S3 bucket. If the bucket is on an AWS Outpost, this must be an access point ARN.</p>"
+"documentation": "<p>The ARN of the Amazon S3 bucket. If the bucket is on an AWS Outpost, this must be an access point ARN.</p>"
},
"S3StorageClass": {
"shape": "S3StorageClass",
-"documentation": "<p>The Amazon S3 storage class that you want to store your files in when this location is used as a task destination. For buckets in AWS Regions, the storage class defaults to Standard. For buckets on AWS Outposts, the storage class defaults to AWS S3 Outposts.</p> <p>For more information about S3 storage classes, see <a href=\"https://aws.amazon.com/s3/storage-classes/\">Amazon S3 Storage Classes</a> in the <i>Amazon Simple Storage Service Developer Guide</i>. Some storage classes have behaviors that can affect your S3 storage cost. For detailed information, see <a>using-storage-classes</a>.</p>"
+"documentation": "<p>The Amazon S3 storage class that you want to store your files in when this location is used as a task destination. For buckets in AWS Regions, the storage class defaults to Standard. For buckets on AWS Outposts, the storage class defaults to AWS S3 Outposts.</p> <p>For more information about S3 storage classes, see <a href=\"http://aws.amazon.com/s3/storage-classes/\">Amazon S3 Storage Classes</a>. Some storage classes have behaviors that can affect your S3 storage cost. For detailed information, see <a>using-storage-classes</a>.</p>"
},
"S3Config": {
"shape": "S3Config"
},
"AgentArns": {
"shape": "AgentArnList",
-"documentation": "<p>If you are using DataSync on an AWS Outpost, specify the Amazon Resource Names (ARNs) of the DataSync agents deployed on your AWS Outpost. For more information about launching a DataSync agent on an Amazon Outpost, see <a>outposts-agent</a>.</p>"
+"documentation": "<p>If you are using DataSync on an AWS Outpost, specify the Amazon Resource Names (ARNs) of the DataSync agents deployed on your Outpost. For more information about launching a DataSync agent on an AWS Outpost, see <a>outposts-agent</a>.</p>"
},
"Tags": {
"shape": "InputTagList",
@@ -1127,7 +1149,7 @@
},
"Excludes": {
"shape": "FilterList",
-"documentation": "<p>A list of filter rules that determines which files to exclude from a task. The list should contain a single filter string that consists of the patterns to exclude. The patterns are delimited by \"|\" (that is, a pipe), for example, <code>\"/folder1|/folder2\"</code> </p> <p> </p>"
+"documentation": "<p>A list of filter rules that determines which files to exclude from a task. The list should contain a single filter string that consists of the patterns to exclude. The patterns are delimited by \"|\" (that is, a pipe), for example, <code>\"/folder1|/folder2\"</code>. </p> <p> </p>"
},
"Schedule": {
"shape": "TaskSchedule",
@@ -1434,14 +1456,14 @@
},
"S3StorageClass": {
"shape": "S3StorageClass",
-"documentation": "<p>The Amazon S3 storage class that you chose to store your files in when this location is used as a task destination. For more information about S3 storage classes, see <a href=\"https://aws.amazon.com/s3/storage-classes/\">Amazon S3 Storage Classes</a> in the <i>Amazon Simple Storage Service Developer Guide</i>. Some storage classes have behaviors that can affect your S3 storage cost. For detailed information, see <a>using-storage-classes</a>.</p>"
+"documentation": "<p>The Amazon S3 storage class that you chose to store your files in when this location is used as a task destination. For more information about S3 storage classes, see <a href=\"http://aws.amazon.com/s3/storage-classes/\">Amazon S3 Storage Classes</a>. Some storage classes have behaviors that can affect your S3 storage cost. For detailed information, see <a>using-storage-classes</a>.</p>"
},
"S3Config": {
"shape": "S3Config"
},
"AgentArns": {
"shape": "AgentArnList",
-"documentation": "<p>If you are using DataSync on an Amazon Outpost, the Amazon Resource Name (ARNs) of the EC2 agents deployed on your AWS Outpost. For more information about launching a DataSync agent on an Amazon Outpost, see <a>outposts-agent</a>.</p>"
+"documentation": "<p>If you are using DataSync on an AWS Outpost, the Amazon Resource Name (ARNs) of the EC2 agents deployed on your Outpost. For more information about launching a DataSync agent on an AWS Outpost, see <a>outposts-agent</a>.</p>"
},
"CreationTime": {
"shape": "Time",
@@ -1585,7 +1607,7 @@
},
"Status": {
"shape": "TaskStatus",
-"documentation": "<p>The status of the task that was described.</p> <p>For detailed information about task execution statuses, see Understanding Task Statuses in the <i>AWS DataSync User Guide.</i> </p>"
+"documentation": "<p>The status of the task that was described.</p> <p>For detailed information about task execution statuses, see Understanding Task Statuses in the <i>AWS DataSync User Guide</i>.</p>"
},
"Name": {
"shape": "TagValue",
@@ -2511,15 +2533,15 @@
},
"TransferStatus": {
"shape": "PhaseStatus",
-"documentation": "<p>The status of the TRANSFERRING Phase.</p>"
+"documentation": "<p>The status of the TRANSFERRING phase.</p>"
},
"VerifyDuration": {
"shape": "Duration",
"documentation": "<p>The total time in milliseconds that AWS DataSync spent in the VERIFYING phase.</p>"
},
"VerifyStatus": {
"shape": "PhaseStatus",
-"documentation": "<p>The status of the VERIFYING Phase.</p>"
+"documentation": "<p>The status of the VERIFYING phase.</p>"
},
"ErrorCode": {
"shape": "string",
@@ -2696,6 +2718,26 @@
"type": "structure",
"members": {}
},
"UpdateTaskExecutionRequest": {
"type": "structure",
"required": [
"TaskExecutionArn",
"Options"
],
"members": {
"TaskExecutionArn": {
"shape": "TaskExecutionArn",
"documentation": "<p>The Amazon Resource Name (ARN) of the specific task execution that is being updated. </p>"
},
"Options": {
"shape": "Options"
}
}
},
"UpdateTaskExecutionResponse": {
"type": "structure",
"members": {}
},
"UpdateTaskRequest": {
"type": "structure",
"required": [
