From ee812febb3173d4e8553782ff53a7ed6330d6706 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 31 May 2023 18:11:48 +0000 Subject: [PATCH 001/317] Amazon Relational Database Service Update: This release adds support for changing the engine for Oracle using the ModifyDbInstance API --- ...ture-AmazonRelationalDatabaseService-6161e54.json | 6 ++++++ .../main/resources/codegen-resources/service-2.json | 12 ++++++++++-- 2 files changed, 16 insertions(+), 2 deletions(-) create mode 100644 .changes/next-release/feature-AmazonRelationalDatabaseService-6161e54.json diff --git a/.changes/next-release/feature-AmazonRelationalDatabaseService-6161e54.json b/.changes/next-release/feature-AmazonRelationalDatabaseService-6161e54.json new file mode 100644 index 000000000000..e49a2599c1d6 --- /dev/null +++ b/.changes/next-release/feature-AmazonRelationalDatabaseService-6161e54.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Relational Database Service", + "contributor": "", + "description": "This release adds support for changing the engine for Oracle using the ModifyDbInstance API" +} diff --git a/services/rds/src/main/resources/codegen-resources/service-2.json b/services/rds/src/main/resources/codegen-resources/service-2.json index 1f8752d72590..5280d7a4e428 100644 --- a/services/rds/src/main/resources/codegen-resources/service-2.json +++ b/services/rds/src/main/resources/codegen-resources/service-2.json @@ -3812,7 +3812,7 @@ }, "StorageType":{ "shape":"String", - "documentation":"
Specifies the storage type to be associated with the DB cluster.
This setting is required to create a Multi-AZ DB cluster.
When specified for a Multi-AZ DB cluster, a value for the Iops parameter is required.
Valid values: aurora, aurora-iopt1 (Aurora DB clusters); io1 (Multi-AZ DB clusters)
Default: aurora (Aurora DB clusters); io1 (Multi-AZ DB clusters)
Valid for: Aurora DB clusters and Multi-AZ DB clusters
" + "documentation":"Specifies the storage type to be associated with the DB cluster.
This setting is required to create a Multi-AZ DB cluster.
When specified for a Multi-AZ DB cluster, a value for the Iops parameter is required.
Valid values: aurora, aurora-iopt1 (Aurora DB clusters); io1 (Multi-AZ DB clusters)
Default: aurora (Aurora DB clusters); io1 (Multi-AZ DB clusters)
Valid for: Aurora DB clusters and Multi-AZ DB clusters
For more information on storage types for Aurora DB clusters, see Storage configurations for Amazon Aurora DB clusters. For more information on storage types for Multi-AZ DB clusters, see Settings for creating Multi-AZ DB clusters.
" }, "Iops":{ "shape":"IntegerOptional", @@ -10733,6 +10733,10 @@ "MasterUserSecretKmsKeyId":{ "shape":"String", "documentation":"The Amazon Web Services KMS key identifier to encrypt a secret that is automatically generated and managed in Amazon Web Services Secrets Manager.
This setting is valid only if both of the following conditions are met:
The DB instance doesn't manage the master user password in Amazon Web Services Secrets Manager.
If the DB instance already manages the master user password in Amazon Web Services Secrets Manager, you can't change the KMS key used to encrypt the secret.
You are turning on ManageMasterUserPassword to manage the master user password in Amazon Web Services Secrets Manager.
If you are turning on ManageMasterUserPassword and don't specify MasterUserSecretKmsKeyId, then the aws/secretsmanager KMS key is used to encrypt the secret. If the secret is in a different Amazon Web Services account, then you can't use the aws/secretsmanager KMS key to encrypt the secret, and you must use a customer managed KMS key.
The Amazon Web Services KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key. To use a KMS key in a different Amazon Web Services account, specify the key ARN or alias ARN.
There is a default KMS key for your Amazon Web Services account. Your Amazon Web Services account has a different default KMS key for each Amazon Web Services Region.
" + }, + "Engine":{ + "shape":"String", + "documentation":"The target Oracle DB engine when you convert a non-CDB to a CDB. This intermediate step is necessary to upgrade an Oracle Database 19c non-CDB to an Oracle Database 21c CDB.
Note the following requirements:
Make sure that you specify oracle-ee-cdb or oracle-se2-cdb.
Make sure that your DB engine runs Oracle Database 19c with an April 2021 or later RU.
Note the following limitations:
You can't convert a CDB to a non-CDB.
You can't convert a replica database.
You can't convert a non-CDB to a CDB and upgrade the engine version in the same command.
You can't convert the existing custom parameter or option group when it has options or parameters that are permanent or persistent. In this situation, the DB instance reverts to the default option and parameter group. To avoid reverting to the default, specify a new parameter group with --db-parameter-group-name and a new option group with --option-group-name.
The engine version to upgrade the DB snapshot to.
The following are the database engines and engine versions that are available when you upgrade a DB snapshot.
MySQL
5.5.46 (supported for 5.1 DB snapshots)
Oracle
19.0.0.0.ru-2022-01.rur-2022-01.r1 (supported for 12.2.0.1 DB snapshots)
19.0.0.0.ru-2022-07.rur-2022-07.r1 (supported for 12.1.0.2 DB snapshots)
12.1.0.2.v8 (supported for 12.1.0.1 DB snapshots)
11.2.0.4.v12 (supported for 11.2.0.2 DB snapshots)
11.2.0.4.v11 (supported for 11.2.0.3 DB snapshots)
PostgreSQL
For the list of engine versions that are available for upgrading a DB snapshot, see Upgrading the PostgreSQL DB Engine for Amazon RDS.
" + "documentation":"The engine version to upgrade the DB snapshot to.
The following are the database engines and engine versions that are available when you upgrade a DB snapshot.
MySQL
5.5.46 (supported for 5.1 DB snapshots)
Oracle
12.1.0.2.v8 (supported for 12.1.0.1 DB snapshots)
11.2.0.4.v12 (supported for 11.2.0.2 DB snapshots)
11.2.0.4.v11 (supported for 11.2.0.3 DB snapshots)
PostgreSQL
For the list of engine versions that are available for upgrading a DB snapshot, see Upgrading the PostgreSQL DB Engine for Amazon RDS.
" }, "OptionGroupName":{ "shape":"String", @@ -11889,6 +11893,10 @@ "StorageThroughput":{ "shape":"IntegerOptional", "documentation":"The storage throughput of the DB instance.
" + }, + "Engine":{ + "shape":"String", + "documentation":"The database engine of the DB instance.
" } }, "documentation":"This data type is used as a response element in the ModifyDBInstance operation and contains changes that will be applied during the next maintenance window.
Specifies if event orchestration is enabled through Amazon EventBridge.
" + } + }, + "documentation":"The event orchestration status.
" + }, "EventPredictionSummary":{ "type":"structure", "members":{ @@ -2772,6 +2783,10 @@ "arn":{ "shape":"fraudDetectorArn", "documentation":"The entity type ARN.
" + }, + "eventOrchestration":{ + "shape":"EventOrchestration", + "documentation":"The event orchestration status.
" } }, "documentation":"The event type details.
", @@ -3836,7 +3851,7 @@ }, "unlabeledEventsTreatment":{ "shape":"UnlabeledEventsTreatment", - "documentation":"The action to take for unlabeled events.
Use IGNORE if you want the unlabeled events to be ignored. This is recommended when the majority of the events in the dataset are labeled.
Use FRAUD if you want to categorize all unlabeled events as “Fraud”. This is recommended when most of the events in your dataset are fraudulent.
Use LEGIT f you want to categorize all unlabeled events as “Legit”. This is recommended when most of the events in your dataset are legitimate.
Use AUTO if you want Amazon Fraud Detector to decide how to use the unlabeled data. This is recommended when there is significant unlabeled events in the dataset.
By default, Amazon Fraud Detector ignores the unlabeled data.
" + "documentation":"The action to take for unlabeled events.
Use IGNORE if you want the unlabeled events to be ignored. This is recommended when the majority of the events in the dataset are labeled.
Use FRAUD if you want to categorize all unlabeled events as “Fraud”. This is recommended when most of the events in your dataset are fraudulent.
Use LEGIT if you want to categorize all unlabeled events as “Legit”. This is recommended when most of the events in your dataset are legitimate.
Use AUTO if you want Amazon Fraud Detector to decide how to use the unlabeled data. This is recommended when there are a significant number of unlabeled events in the dataset.
By default, Amazon Fraud Detector ignores the unlabeled data.
" } }, "documentation":"The label schema.
" @@ -4519,11 +4534,15 @@ }, "eventIngestion":{ "shape":"EventIngestion", - "documentation":"Specifies if ingenstion is enabled or disabled.
" + "documentation":"Specifies if ingestion is enabled or disabled.
" }, "tags":{ "shape":"tagList", "documentation":"A collection of key and value pairs.
" + }, + "eventOrchestration":{ + "shape":"EventOrchestration", + "documentation":"Enables or disables event orchestration. If enabled, you can send event predictions to select AWS services for downstream processing of the events.
" } } }, @@ -4607,7 +4626,7 @@ }, "tags":{ "shape":"tagList", - "documentation":"" + "documentation":"A collection of key and value pairs.
" } } }, @@ -5012,7 +5031,7 @@ }, "upperBoundValue":{ "shape":"float", - "documentation":"The lower bound value of the area under curve (auc).
" + "documentation":"The upper bound value of the area under curve (auc).
" } }, "documentation":"Range of area under curve (auc) expected from the model. A range greater than 0.1 indicates higher model uncertainty. A range is the difference between the upper and lower bounds of auc.
" @@ -5785,5 +5804,5 @@ "pattern":"^([1-9][0-9]*)$" } }, - "documentation":"This is the Amazon Fraud Detector API Reference. This guide is for developers who need detailed information about Amazon Fraud Detector API actions, data types, and errors. For more information about Amazon Fraud Detector features, see the Amazon Fraud Detector User Guide.
We provide the Query API as well as AWS software development kits (SDK) for Amazon Fraud Detector in Java and Python programming languages.
The Amazon Fraud Detector Query API provides HTTPS requests that use the HTTP verb GET or POST and a Query parameter Action. AWS SDK provides libraries, sample code, tutorials, and other resources for software developers who prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS. These libraries provide basic functions that automatically take care of tasks such as cryptographically signing your requests, retrying requests, and handling error responses, so that it is easier for you to get started. For more information about the AWS SDKs, see Tools to build on AWS.
This is the Amazon Fraud Detector API Reference. This guide is for developers who need detailed information about Amazon Fraud Detector API actions, data types, and errors. For more information about Amazon Fraud Detector features, see the Amazon Fraud Detector User Guide.
We provide the Query API as well as AWS software development kits (SDK) for Amazon Fraud Detector in Java and Python programming languages.
The Amazon Fraud Detector Query API provides HTTPS requests that use the HTTP verb GET or POST and a Query parameter Action. AWS SDK provides libraries, sample code, tutorials, and other resources for software developers who prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS. These libraries provide basic functions that automatically take care of tasks such as cryptographically signing your requests, retrying requests, and handling error responses, so that it is easier for you to get started. For more information about the AWS SDKs, go to the Tools to build on AWS page, scroll down to the SDK section, and choose the plus (+) sign to expand the section.
The name of the application.
" }, + "roleArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the role associated with the application.
" + }, "status":{ "shape":"ApplicationLifecycle", "documentation":"The status of the application.
" @@ -782,7 +786,7 @@ }, "Arn":{ "type":"string", - "pattern":"^arn:(aws|aws-cn|aws-iso|aws-iso-[a-z]{1}|aws-us-gov):[A-Za-z0-9][A-Za-z0-9_/.-]{0,62}:([a-z]{2}-((iso[a-z]{0,1}-)|(gov-)){0,1}[a-z]+-[0-9]):[0-9]{12}:[A-Za-z0-9/][A-Za-z0-9:_/+=,@.-]{0,1023}$" + "pattern":"^arn:(aws|aws-cn|aws-iso|aws-iso-[a-z]{1}|aws-us-gov):[A-Za-z0-9][A-Za-z0-9_/.-]{0,62}:([a-z]{2}-((iso[a-z]{0,1}-)|(gov-)){0,1}[a-z]+-[0-9]|):[0-9]{12}:[A-Za-z0-9/][A-Za-z0-9:_/+=,@.-]{0,1023}$" }, "ArnList":{ "type":"list", @@ -835,7 +839,10 @@ "shape":"Identifier", "documentation":"The unique identifier of the application that hosts this batch job.
" }, - "batchJobIdentifier":{"shape":"BatchJobIdentifier"}, + "batchJobIdentifier":{ + "shape":"BatchJobIdentifier", + "documentation":"The unique identifier of this batch job.
" + }, "endTime":{ "shape":"Timestamp", "documentation":"The timestamp when this batch job execution ended.
" @@ -858,7 +865,7 @@ }, "returnCode":{ "shape":"String", - "documentation":"" + "documentation":"The batch job return code from either the Blu Age or Micro Focus runtime engines. For more information, see Batch return codes in the IBM WebSphere Application Server documentation.
" }, "startTime":{ "shape":"Timestamp", @@ -908,7 +915,7 @@ }, "BatchParamKey":{ "type":"string", - "documentation":"Parameter key: the first character must be alphabetic. Can be of up to 8 alphanumeric characters.
", + "documentation":"See https://www.ibm.com/docs/en/workload-automation/9.3.0?topic=zos-coding-variables-in-jcl for details about the limits for both keys and values: keys (variable names) can be up to 8 characters, and values (variable values) can be up to 44 characters. In addition, keys can contain only alphabetic characters and digits, without any spaces or special characters (dash, underscore, and so on).
Parameter key: the first character must be alphabetic. It can contain up to 8 alphanumeric characters.
", "max":8, "min":1, "pattern":"^[A-Za-z][A-Za-z0-9]{1,7}$" @@ -1006,6 +1013,10 @@ "shape":"EntityName", "documentation":"The unique identifier of the application.
" }, + "roleArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the role associated with the application.
" + }, "tags":{ "shape":"TagMap", "documentation":"A list of tags to apply to the application.
" @@ -1364,6 +1375,14 @@ "shape":"GdgDetailAttributes", "documentation":"The generation data group of the data set.
" }, + "po":{ + "shape":"PoDetailAttributes", + "documentation":"The details of a PO type data set.
" + }, + "ps":{ + "shape":"PsDetailAttributes", + "documentation":"The details of a PS type data set.
" + }, "vsam":{ "shape":"VsamDetailAttributes", "documentation":"The details of a VSAM data set.
" @@ -1379,6 +1398,14 @@ "shape":"GdgAttributes", "documentation":"The generation data group of the data set.
" }, + "po":{ + "shape":"PoAttributes", + "documentation":"The details of a PO type data set.
" + }, + "ps":{ + "shape":"PsAttributes", + "documentation":"The details of a PS type data set.
" + }, "vsam":{ "shape":"VsamAttributes", "documentation":"The details of a VSAM data set.
" @@ -1841,6 +1868,10 @@ "shape":"EntityName", "documentation":"The unique identifier of the application.
" }, + "roleArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the role associated with the application.
" + }, "status":{ "shape":"ApplicationLifecycle", "documentation":"The status of the application.
" @@ -1954,7 +1985,10 @@ "shape":"Identifier", "documentation":"The identifier of the application.
" }, - "batchJobIdentifier":{"shape":"BatchJobIdentifier"}, + "batchJobIdentifier":{ + "shape":"BatchJobIdentifier", + "documentation":"The unique identifier of this batch job.
" + }, "endTime":{ "shape":"Timestamp", "documentation":"The timestamp when the batch job execution ended.
" @@ -1981,7 +2015,7 @@ }, "returnCode":{ "shape":"String", - "documentation":"" + "documentation":"The batch job return code from either the Blu Age or Micro Focus runtime engines. For more information, see Batch return codes in the IBM WebSphere Application Server documentation.
" }, "startTime":{ "shape":"Timestamp", @@ -2800,6 +2834,46 @@ }, "documentation":"The scheduled maintenance for a runtime engine.
" }, + "PoAttributes":{ + "type":"structure", + "required":[ + "format", + "memberFileExtensions" + ], + "members":{ + "encoding":{ + "shape":"String", + "documentation":"The character set encoding of the data set.
" + }, + "format":{ + "shape":"String", + "documentation":"The format of the data set records.
" + }, + "memberFileExtensions":{ + "shape":"String20List", + "documentation":"An array containing one or more filename extensions, allowing you to specify which files to include as PDS members.
" + } + }, + "documentation":"The supported properties for a PO type data set.
" + }, + "PoDetailAttributes":{ + "type":"structure", + "required":[ + "encoding", + "format" + ], + "members":{ + "encoding":{ + "shape":"String", + "documentation":"The character set encoding of the data set.
" + }, + "format":{ + "shape":"String", + "documentation":"The format of the data set records.
" + } + }, + "documentation":"The supported properties for a PO type data set.
" + }, "PortList":{ "type":"list", "member":{"shape":"Integer"}, @@ -2827,6 +2901,39 @@ }, "documentation":"The primary key for a KSDS data set.
" }, + "PsAttributes":{ + "type":"structure", + "required":["format"], + "members":{ + "encoding":{ + "shape":"String", + "documentation":"The character set encoding of the data set.
" + }, + "format":{ + "shape":"String", + "documentation":"The format of the data set records.
" + } + }, + "documentation":"The supported properties for a PS type data set.
" + }, + "PsDetailAttributes":{ + "type":"structure", + "required":[ + "encoding", + "format" + ], + "members":{ + "encoding":{ + "shape":"String", + "documentation":"The character set encoding of the data set.
" + }, + "format":{ + "shape":"String", + "documentation":"The format of the data set records.
" + } + }, + "documentation":"The supported properties for a PS type data set.
" + }, "RecordLength":{ "type":"structure", "required":[ @@ -3024,6 +3131,12 @@ "type":"string", "pattern":"^\\S{1,2000}$" }, + "String20List":{ + "type":"list", + "member":{"shape":"String20"}, + "max":10, + "min":1 + }, "String50":{ "type":"string", "pattern":"^\\S{1,50}$" From ef79699b8ae16a4c7164d573a09f35d311300a0c Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 31 May 2023 18:12:01 +0000 Subject: [PATCH 004/317] AWS Config Update: Resource Types Exclusion feature launch by AWS Config --- .../feature-AWSConfig-00a005f.json | 6 ++ .../codegen-resources/service-2.json | 94 +++++++++++++------ 2 files changed, 71 insertions(+), 29 deletions(-) create mode 100644 .changes/next-release/feature-AWSConfig-00a005f.json diff --git a/.changes/next-release/feature-AWSConfig-00a005f.json b/.changes/next-release/feature-AWSConfig-00a005f.json new file mode 100644 index 000000000000..6de1773aabea --- /dev/null +++ b/.changes/next-release/feature-AWSConfig-00a005f.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS Config", + "contributor": "", + "description": "Resource Types Exclusion feature launch by AWS Config" +} diff --git a/services/config/src/main/resources/codegen-resources/service-2.json b/services/config/src/main/resources/codegen-resources/service-2.json index 9a8b14290b64..0735bf9921fc 100644 --- a/services/config/src/main/resources/codegen-resources/service-2.json +++ b/services/config/src/main/resources/codegen-resources/service-2.json @@ -403,7 +403,7 @@ "errors":[ {"shape":"NoSuchConfigurationRecorderException"} ], - "documentation":"Returns the current status of the specified configuration recorder as well as the status of the last recording event for the recorder. If a configuration recorder is not specified, this action returns the status of all configuration recorders associated with the account.
Currently, you can specify only one configuration recorder per region in your account. For a detailed status of recording events over time, add your Config events to Amazon CloudWatch metrics and use CloudWatch metrics.
Returns the current status of the specified configuration recorder as well as the status of the last recording event for the recorder. If a configuration recorder is not specified, this action returns the status of all configuration recorders associated with the account.
You can specify only one configuration recorder for each Amazon Web Services Region for each account. For a detailed status of recording events over time, add your Config events to Amazon CloudWatch metrics and use CloudWatch metrics.
Returns the details for the specified configuration recorders. If the configuration recorder is not specified, this action returns the details for all configuration recorders associated with the account.
Currently, you can specify only one configuration recorder per region in your account.
Returns the details for the specified configuration recorders. If the configuration recorder is not specified, this action returns the details for all configuration recorders associated with the account.
You can specify only one configuration recorder for each Amazon Web Services Region for each account.
Creates a new configuration recorder to record the selected resource configurations.
You can use this action to change the role roleARN or the recordingGroup of an existing recorder. To change the role, call the action on the existing configuration recorder and specify a role.
Currently, you can specify only one configuration recorder per region in your account.
If ConfigurationRecorder does not have the recordingGroup parameter specified, the default is to record all supported resource types.
Creates a new configuration recorder to record configuration changes for specified resource types.
You can also use this action to change the roleARN or the recordingGroup of an existing recorder. For more information, see Managing the Configuration Recorder in the Config Developer Guide.
You can specify only one configuration recorder for each Amazon Web Services Region for each account.
If the configuration recorder does not have the recordingGroup field specified, the default is to record all supported resource types.
Creates or updates a conformance pack. A conformance pack is a collection of Config rules that can be easily deployed in an account and a region and across an organization. For information on how many conformance packs you can have per account, see Service Limits in the Config Developer Guide.
This API creates a service-linked role AWSServiceRoleForConfigConforms in your account. The service-linked role is created only when the role does not exist in your account.
You must specify only one of the follow parameters: TemplateS3Uri, TemplateBody or TemplateSSMDocumentDetails.
Creates or updates a conformance pack. A conformance pack is a collection of Config rules that can be easily deployed in an account and a region and across an organization. For information on how many conformance packs you can have per account, see Service Limits in the Config Developer Guide.
This API creates a service-linked role AWSServiceRoleForConfigConforms in your account. The service-linked role is created only when the role does not exist in your account.
You must specify only one of the following parameters: TemplateS3Uri, TemplateBody, or TemplateSSMDocumentDetails.
Deploys conformance packs across member accounts in an Amazon Web Services Organization. For information on how many organization conformance packs and how many Config rules you can have per account, see Service Limits in the Config Developer Guide.
Only a management account and a delegated administrator can call this API. When calling this API with a delegated administrator, you must ensure Organizations ListDelegatedAdministrator permissions are added. An organization can have up to 3 delegated administrators.
This API enables organization service access for config-multiaccountsetup.amazonaws.com through the EnableAWSServiceAccess action and creates a service-linked role AWSServiceRoleForConfigMultiAccountSetup in the management or delegated administrator account of your organization. The service-linked role is created only when the role does not exist in the caller account. To use this API with delegated administrator, register a delegated administrator by calling Amazon Web Services Organization register-delegate-admin for config-multiaccountsetup.amazonaws.com.
Prerequisite: Ensure you call EnableAllFeatures API to enable all features in an organization.
You must specify either the TemplateS3Uri or the TemplateBody parameter, but not both. If you provide both Config uses the TemplateS3Uri parameter and ignores the TemplateBody parameter.
Config sets the state of a conformance pack to CREATE_IN_PROGRESS and UPDATE_IN_PROGRESS until the conformance pack is created or updated. You cannot update a conformance pack while it is in this state.
Deploys conformance packs across member accounts in an Amazon Web Services Organization. For information on how many organization conformance packs and how many Config rules you can have per account, see Service Limits in the Config Developer Guide.
Only a management account and a delegated administrator can call this API. When calling this API with a delegated administrator, you must ensure Organizations ListDelegatedAdministrator permissions are added. An organization can have up to 3 delegated administrators.
This API enables organization service access for config-multiaccountsetup.amazonaws.com through the EnableAWSServiceAccess action and creates a service-linked role AWSServiceRoleForConfigMultiAccountSetup in the management or delegated administrator account of your organization. The service-linked role is created only when the role does not exist in the caller account. To use this API with delegated administrator, register a delegated administrator by calling Amazon Web Services Organization register-delegate-admin for config-multiaccountsetup.amazonaws.com.
Prerequisite: Ensure you call EnableAllFeatures API to enable all features in an organization.
You must specify either the TemplateS3Uri or the TemplateBody parameter, but not both. If you provide both, Config uses the TemplateS3Uri parameter and ignores the TemplateBody parameter.
Config sets the state of a conformance pack to CREATE_IN_PROGRESS and UPDATE_IN_PROGRESS until the conformance pack is created or updated. You cannot update a conformance pack while it is in this state.
A remediation exception is when a specified resource is no longer considered for auto-remediation. This API adds a new exception or updates an existing exception for a specified resource with a specified Config rule.
Config generates a remediation exception when a problem occurs running a remediation action for a specified resource. Remediation exceptions blocks auto-remediation until the exception is cleared.
When placing an exception on an Amazon Web Services resource, it is recommended that remediation is set as manual remediation until the given Config rule for the specified resource evaluates the resource as NON_COMPLIANT. Once the resource has been evaluated as NON_COMPLIANT, you can add remediation exceptions and change the remediation type back from Manual to Auto if you want to use auto-remediation. Otherwise, using auto-remediation before a NON_COMPLIANT evaluation result can delete resources before the exception is applied.
Placing an exception can only be performed on resources that are NON_COMPLIANT. If you use this API for COMPLIANT resources or resources that are NOT_APPLICABLE, a remediation exception will not be generated. For more information on the conditions that initiate the possible Config evaluation results, see Concepts | Config Rules in the Config Developer Guide.
A remediation exception is when a specified resource is no longer considered for auto-remediation. This API adds a new exception or updates an existing exception for a specified resource with a specified Config rule.
Config generates a remediation exception when a problem occurs running a remediation action for a specified resource. Remediation exceptions blocks auto-remediation until the exception is cleared.
When placing an exception on an Amazon Web Services resource, it is recommended that remediation is set as manual remediation until the given Config rule for the specified resource evaluates the resource as NON_COMPLIANT. Once the resource has been evaluated as NON_COMPLIANT, you can add remediation exceptions and change the remediation type back from Manual to Auto if you want to use auto-remediation. Otherwise, using auto-remediation before a NON_COMPLIANT evaluation result can delete resources before the exception is applied.
Placing an exception can only be performed on resources that are NON_COMPLIANT. If you use this API for COMPLIANT resources or resources that are NOT_APPLICABLE, a remediation exception will not be generated. For more information on the conditions that initiate the possible Config evaluation results, see Concepts | Config Rules in the Config Developer Guide.
Accepts a structured query language (SQL) SELECT command and an aggregator to query configuration state of Amazon Web Services resources across multiple accounts and regions, performs the corresponding search, and returns resource configurations matching the properties.
For more information about query components, see the Query Components section in the Config Developer Guide.
If you run an aggregation query (i.e., using GROUP BY or using aggregate functions such as COUNT; e.g., SELECT resourceId, COUNT(*) WHERE resourceType = 'AWS::IAM::Role' GROUP BY resourceId) and do not specify the MaxResults or the Limit query parameters, the default page size is set to 500.
If you run a non-aggregation query (i.e., not using GROUP BY or aggregate function; e.g., SELECT * WHERE resourceType = 'AWS::IAM::Role') and do not specify the MaxResults or the Limit query parameters, the default page size is set to 25.
Accepts a structured query language (SQL) SELECT command and an aggregator to query configuration state of Amazon Web Services resources across multiple accounts and regions, performs the corresponding search, and returns resource configurations matching the properties.
For more information about query components, see the Query Components section in the Config Developer Guide.
If you run an aggregation query (i.e., using GROUP BY or using aggregate functions such as COUNT; e.g., SELECT resourceId, COUNT(*) WHERE resourceType = 'AWS::IAM::Role' GROUP BY resourceId) and do not specify the MaxResults or the Limit query parameters, the default page size is set to 500.
If you run a non-aggregation query (i.e., not using GROUP BY or aggregate function; e.g., SELECT * WHERE resourceType = 'AWS::IAM::Role') and do not specify the MaxResults or the Limit query parameters, the default page size is set to 25.
The name of the recorder. By default, Config automatically assigns the name \"default\" when creating the configuration recorder. You cannot change the assigned name.
" + "documentation":"The name of the configuration recorder. Config automatically assigns the name of \"default\" when creating the configuration recorder.
You cannot change the name of the configuration recorder after it has been created. To change the configuration recorder name, you must delete it and create a new configuration recorder with a new name.
" }, "roleARN":{ "shape":"String", - "documentation":"Amazon Resource Name (ARN) of the IAM role used to describe the Amazon Web Services resources associated with the account.
While the API model does not require this field, the server will reject a request without a defined roleARN for the configuration recorder.
Amazon Resource Name (ARN) of the IAM role assumed by Config and used by the configuration recorder.
While the API model does not require this field, the server will reject a request without a defined roleARN for the configuration recorder.
Pre-existing Config role
If you have used an Amazon Web Services service that uses Config, such as Security Hub or Control Tower, and a Config role has already been created, make sure that the IAM role that you use when setting up Config keeps the same minimum permissions as the already created Config role. You must do this so that the other Amazon Web Services service continues to run as expected.
For example, if Control Tower has an IAM role that allows Config to read Amazon Simple Storage Service (Amazon S3) objects, make sure that the same permissions are granted within the IAM role you use when setting up Config. Otherwise, it may interfere with how Control Tower operates. For more information about IAM roles for Config, see Identity and Access Management for Config in the Config Developer Guide.
Specifies the types of Amazon Web Services resources for which Config records configuration changes.
" + "documentation":"Specifies which resource types Config records for configuration changes.
High Number of Config Evaluations
You may notice increased activity in your account during your initial month recording with Config when compared to subsequent months. During the initial bootstrapping process, Config runs evaluations on all the resources in your account that you have selected for Config to record.
If you are running ephemeral workloads, you may see increased activity from Config as it records configuration changes associated with creating and deleting these temporary resources. An ephemeral workload is a temporary use of computing resources that are loaded and run when needed. Examples include Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, Amazon EMR jobs, and Auto Scaling. If you want to avoid the increased activity from running ephemeral workloads, you can run these types of workloads in a separate account with Config turned off to avoid increased configuration recording and rule evaluations.
An object that represents the recording of configuration changes of an Amazon Web Services resource.
" + "documentation":"Records configuration changes to specified resource types. For more information about the configuration recorder, see Managing the Configuration Recorder in the Config Developer Guide.
" }, "ConfigurationRecorderList":{ "type":"list", @@ -3413,7 +3413,7 @@ "documentation":"The mode of an evaluation. The valid values are Detective or Proactive.
" } }, - "documentation":"Returns a filtered list of Detective or Proactive Config rules. By default, if the filter is not defined, this API returns an unfiltered list. For more information on Detective or Proactive Config rules, see Evaluation Mode in the Config Developer Guide.
" + "documentation":"Returns a filtered list of Detective or Proactive Config rules. By default, if the filter is not defined, this API returns an unfiltered list. For more information on Detective or Proactive Config rules, see Evaluation Mode in the Config Developer Guide.
" }, "DescribeConfigRulesRequest":{ "type":"structure", @@ -3428,7 +3428,7 @@ }, "Filters":{ "shape":"DescribeConfigRulesFilters", - "documentation":"Returns a list of Detective or Proactive Config rules. By default, this API returns an unfiltered list. For more information on Detective or Proactive Config rules, see Evaluation Mode in the Config Developer Guide.
" + "documentation":"Returns a list of Detective or Proactive Config rules. By default, this API returns an unfiltered list. For more information on Detective or Proactive Config rules, see Evaluation Mode in the Config Developer Guide.
" } }, "documentation":"" @@ -4155,6 +4155,16 @@ "max":1000, "min":0 }, + "ExclusionByResourceTypes":{ + "type":"structure", + "members":{ + "resourceTypes":{ + "shape":"ResourceTypeList", + "documentation":"A comma-separated list of resource types to exclude from recording by the configuration recorder.
" + } + }, + "documentation":"Specifies whether the configuration recorder excludes resource types from being recorded. Use the resourceTypes field to enter a comma-separated list of resource types to exclude as exemptions.
You have provided a configuration recorder name that is not valid.
", + "documentation":"You have provided a name for the configuration recorder that is not valid.
", "exception":true }, "InvalidDeliveryChannelNameException":{ @@ -5018,7 +5028,7 @@ "type":"structure", "members":{ }, - "documentation":"Config throws an exception if the recording group does not contain a valid list of resource types. Values that are not valid might also be incorrectly formatted.
", + "documentation":"Indicates one of the following errors:
You have provided a combination of parameter values that is not valid. For example:
Setting the allSupported field of RecordingGroup to true, but providing a non-empty list for the resourceTypes field of RecordingGroup.
Setting the allSupported field of RecordingGroup to true, but also setting the useOnly field of RecordingStrategy to EXCLUSION_BY_RESOURCE_TYPES.
Every parameter is either null, false, or empty.
You have reached the limit of the number of resource types you can provide for the recording group.
You have provided resource types or a recording strategy that are not valid.
You have provided a null or empty role ARN.
", + "documentation":"You have provided a null or empty Amazon Resource Name (ARN) for the IAM role assumed by Config and used by the configuration recorder.
", "exception":true }, "InvalidS3KeyPrefixException":{ @@ -5323,14 +5333,14 @@ "type":"structure", "members":{ }, - "documentation":"You have reached the limit of the number of recorders you can create.
", + "documentation":"You have reached the limit of the number of configuration recorders you can create.
", "exception":true }, "MaxNumberOfConformancePacksExceededException":{ "type":"structure", "members":{ }, - "documentation":"You have reached the limit of the number of conformance packs you can create in an account. For more information, see Service Limits in the Config Developer Guide.
", + "documentation":"You have reached the limit of the number of conformance packs you can create in an account. For more information, see Service Limits in the Config Developer Guide.
", "exception":true }, "MaxNumberOfDeliveryChannelsExceededException":{ @@ -5344,14 +5354,14 @@ "type":"structure", "members":{ }, - "documentation":"You have reached the limit of the number of organization Config rules you can create. For more information, see see Service Limits in the Config Developer Guide.
", + "documentation":"You have reached the limit of the number of organization Config rules you can create. For more information, see see Service Limits in the Config Developer Guide.
", "exception":true }, "MaxNumberOfOrganizationConformancePacksExceededException":{ "type":"structure", "members":{ }, - "documentation":"You have reached the limit of the number of organization conformance packs you can create in an account. For more information, see Service Limits in the Config Developer Guide.
", + "documentation":"You have reached the limit of the number of organization conformance packs you can create in an account. For more information, see Service Limits in the Config Developer Guide.
", "exception":true }, "MaxNumberOfRetentionConfigurationsExceededException":{ @@ -5925,7 +5935,7 @@ "documentation":"A list of accounts that you can enable debug logging for your organization Config Custom Policy rule. List is null when debug logging is enabled for all accounts.
" } }, - "documentation":"An object that specifies metadata for your organization Config Custom Policy rule including the runtime system in use, which accounts have debug logging enabled, and other custom rule metadata such as resource type, resource ID of Amazon Web Services resource, and organization trigger types that trigger Config to evaluate Amazon Web Services resources against a rule.
" + "documentation":"metadata for your organization Config Custom Policy rule including the runtime system in use, which accounts have debug logging enabled, and other custom rule metadata such as resource type, resource ID of Amazon Web Services resource, and organization trigger types that trigger Config to evaluate Amazon Web Services resources against a rule.
" }, "OrganizationCustomRuleMetadata":{ "type":"structure", @@ -5971,7 +5981,7 @@ "documentation":"The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
" } }, - "documentation":"An object that specifies organization custom rule metadata such as resource type, resource ID of Amazon Web Services resource, Lambda function ARN, and organization trigger types that trigger Config to evaluate your Amazon Web Services resources against a rule. It also provides the frequency with which you want Config to run evaluations for the rule if the trigger type is periodic.
" + "documentation":"organization custom rule metadata such as resource type, resource ID of Amazon Web Services resource, Lambda function ARN, and organization trigger types that trigger Config to evaluate your Amazon Web Services resources against a rule. It also provides the frequency with which you want Config to run evaluations for the rule if the trigger type is periodic.
" }, "OrganizationManagedRuleMetadata":{ "type":"structure", @@ -6010,7 +6020,7 @@ "documentation":"The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
" } }, - "documentation":"An object that specifies organization managed rule metadata such as resource type and ID of Amazon Web Services resource along with the rule identifier. It also provides the frequency with which you want Config to run evaluations for the rule if the trigger type is periodic.
" + "documentation":"organization managed rule metadata such as resource type and ID of Amazon Web Services resource along with the rule identifier. It also provides the frequency with which you want Config to run evaluations for the rule if the trigger type is periodic.
" }, "OrganizationResourceDetailedStatus":{ "type":"string", @@ -6214,7 +6224,7 @@ "members":{ "ConfigurationRecorder":{ "shape":"ConfigurationRecorder", - "documentation":"The configuration recorder object that records each configuration change made to the resources.
" + "documentation":"An object for the configuration recorder to record configuration changes for specified resource types.
" } }, "documentation":"The input for the PutConfigurationRecorder action.
" @@ -6590,18 +6600,44 @@ "members":{ "allSupported":{ "shape":"AllSupported", - "documentation":"Specifies whether Config records configuration changes for every supported type of regional resource.
If you set this option to true, when Config adds support for a new type of regional resource, it starts recording resources of that type automatically.
If you set this option to true, you cannot enumerate a list of resourceTypes.
Specifies whether Config records configuration changes for all supported regional resource types.
If you set this field to true, when Config adds support for a new type of regional resource, Config starts recording resources of that type automatically.
If you set this field to true, you cannot enumerate specific resource types to record in the resourceTypes field of RecordingGroup, or to exclude in the resourceTypes field of ExclusionByResourceTypes.
Specifies whether Config includes all supported types of global resources (for example, IAM resources) with the resources that it records.
Before you can set this option to true, you must set the allSupported option to true.
If you set this option to true, when Config adds support for a new type of global resource, it starts recording resources of that type automatically.
The configuration details for any global resource are the same in all regions. To prevent duplicate configuration items, you should consider customizing Config in only one region to record global resources.
" + "documentation":"Specifies whether Config records configuration changes for all supported global resources.
Before you set this field to true, set the allSupported field of RecordingGroup to true. Optionally, you can set the useOnly field of RecordingStrategy to ALL_SUPPORTED_RESOURCE_TYPES.
If you set this field to true, when Config adds support for a new type of global resource in the Region where you set up the configuration recorder, Config starts recording resources of that type automatically.
If you set this field to false but list global resource types in the resourceTypes field of RecordingGroup, Config will still record configuration changes for those specified resource types regardless of if you set the includeGlobalResourceTypes field to false.
If you do not want to record configuration changes to global resource types, make sure to not list them in the resourceTypes field in addition to setting the includeGlobalResourceTypes field to false.
A comma-separated list that specifies the types of Amazon Web Services resources for which Config records configuration changes (for example, AWS::EC2::Instance or AWS::CloudTrail::Trail).
To record all configuration changes, you must set the allSupported option to true.
If you set the AllSupported option to false and populate the ResourceTypes option with values, when Config adds support for a new type of resource, it will not record resources of that type unless you manually add that type to your recording group.
For a list of valid resourceTypes values, see the resourceType Value column in Supported Amazon Web Services resource Types.
A comma-separated list that specifies which resource types Config records.
Optionally, you can set the useOnly field of RecordingStrategy to INCLUSION_BY_RESOURCE_TYPES.
To record all configuration changes, set the allSupported field of RecordingGroup to true, and either omit this field or don't specify any resource types in this field. If you set the allSupported field to false and specify values for resourceTypes, when Config adds support for a new type of resource, it will not record resources of that type unless you manually add that type to your recording group.
For a list of valid resourceTypes values, see the Resource Type Value column in Supported Amazon Web Services resource Types in the Config developer guide.
Region Availability
Before specifying a resource type for Config to track, check Resource Coverage by Region Availability to see if the resource type is supported in the Amazon Web Services Region where you set up Config. If a resource type is supported by Config in at least one Region, you can enable the recording of that resource type in all Regions supported by Config, even if the specified resource type is not supported in the Amazon Web Services Region where you set up Config.
An object that specifies how Config excludes resource types from being recorded by the configuration recorder.
To use this option, you must set the useOnly field of RecordingStrategy to EXCLUSION_BY_RESOURCE_TYPES.
An object that specifies the recording strategy for the configuration recorder.
If you set the useOnly field of RecordingStrategy to ALL_SUPPORTED_RESOURCE_TYPES, Config records configuration changes for all supported regional resource types. You also must set the allSupported field of RecordingGroup to true. When Config adds support for a new type of regional resource, Config automatically starts recording resources of that type.
If you set the useOnly field of RecordingStrategy to INCLUSION_BY_RESOURCE_TYPES, Config records configuration changes for only the resource types you specify in the resourceTypes field of RecordingGroup.
If you set the useOnly field of RecordingStrategy to EXCLUSION_BY_RESOURCE_TYPES, Config records configuration changes for all supported resource types except the resource types that you specify as exemptions to exclude from being recorded in the resourceTypes field of ExclusionByResourceTypes.
The recordingStrategy field is optional when you set the allSupported field of RecordingGroup to true.
The recordingStrategy field is optional when you list resource types in the resourceTypes field of RecordingGroup.
The recordingStrategy field is required if you list resource types to exclude from recording in the resourceTypes field of ExclusionByResourceTypes.
If you choose EXCLUSION_BY_RESOURCE_TYPES for the recording strategy, the exclusionByResourceTypes field will override other properties in the request.
For example, even if you set includeGlobalResourceTypes to false, global resource types will still be automatically recorded in this option unless those resource types are specifically listed as exemptions in the resourceTypes field of exclusionByResourceTypes.
By default, if you choose the EXCLUSION_BY_RESOURCE_TYPES recording strategy, when Config adds support for a new resource type in the Region where you set up the configuration recorder, including global resource types, Config starts recording resources of that type automatically.
Specifies which Amazon Web Services resource types Config records for configuration changes. In the recording group, you specify whether you want to record all supported resource types or only specific types of resources.
By default, Config records the configuration changes for all supported types of regional resources that Config discovers in the region in which it is running. Regional resources are tied to a region and can be used only in that region. Examples of regional resources are EC2 instances and EBS volumes.
You can also have Config record supported types of global resources. Global resources are not tied to a specific region and can be used in all regions. The global resource types that Config supports include IAM users, groups, roles, and customer managed policies.
Global resource types onboarded to Config recording after February 2022 will only be recorded in the service's home region for the commercial partition and Amazon Web Services GovCloud (US) West for the GovCloud partition. You can view the Configuration Items for these new global resource types only in their home region and Amazon Web Services GovCloud (US) West.
Supported global resource types onboarded before February 2022 such as AWS::IAM::Group, AWS::IAM::Policy, AWS::IAM::Role, AWS::IAM::User remain unchanged, and they will continue to deliver Configuration Items in all supported regions in Config. The change will only affect new global resource types onboarded after February 2022.
To record global resource types onboarded after February 2022, enable All Supported Resource Types in the home region of the global resource type you want to record.
If you don't want Config to record all resources, you can specify which types of resources it will record with the resourceTypes parameter.
For a list of supported resource types, see Supported Resource Types.
For more information and a table of the Home Regions for Global Resource Types Onboarded after February 2022, see Selecting Which Resources Config Records.
" + "documentation":"Specifies which resource types Config records for configuration changes. In the recording group, you specify whether you want to record all supported resource types or to include or exclude specific types of resources.
By default, Config records configuration changes for all supported types of Regional resources that Config discovers in the Amazon Web Services Region in which it is running. Regional resources are tied to a Region and can be used only in that Region. Examples of Regional resources are Amazon EC2 instances and Amazon EBS volumes.
You can also have Config record supported types of global resources. Global resources are not tied to a specific Region and can be used in all Regions. The global resource types that Config supports include IAM users, groups, roles, and customer managed policies.
Global resource types onboarded to Config recording after February 2022 will be recorded only in the service's home Region for the commercial partition and Amazon Web Services GovCloud (US-West) for the Amazon Web Services GovCloud (US) partition. You can view the Configuration Items for these new global resource types only in their home Region and Amazon Web Services GovCloud (US-West).
If you don't want Config to record all resources, you can specify which types of resources Config records with the resourceTypes parameter.
For a list of supported resource types, see Supported Resource Types in the Config developer guide.
For more information and a table of the Home Regions for Global Resource Types Onboarded after February 2022, see Selecting Which Resources Config Records in the Config developer guide.
" + }, + "RecordingStrategy":{ + "type":"structure", + "members":{ + "useOnly":{ + "shape":"RecordingStrategyType", + "documentation":"The recording strategy for the configuration recorder.
If you set this option to ALL_SUPPORTED_RESOURCE_TYPES, Config records configuration changes for all supported regional resource types. You also must set the allSupported field of RecordingGroup to true.
When Config adds support for a new type of regional resource, Config automatically starts recording resources of that type. For a list of supported resource types, see Supported Resource Types in the Config developer guide.
If you set this option to INCLUSION_BY_RESOURCE_TYPES, Config records configuration changes for only the resource types that you specify in the resourceTypes field of RecordingGroup.
If you set this option to EXCLUSION_BY_RESOURCE_TYPES, Config records configuration changes for all supported resource types, except the resource types that you specify as exemptions to exclude from being recorded in the resourceTypes field of ExclusionByResourceTypes.
The recordingStrategy field is optional when you set the allSupported field of RecordingGroup to true.
The recordingStrategy field is optional when you list resource types in the resourceTypes field of RecordingGroup.
The recordingStrategy field is required if you list resource types to exclude from recording in the resourceTypes field of ExclusionByResourceTypes.
If you choose EXCLUSION_BY_RESOURCE_TYPES for the recording strategy, the exclusionByResourceTypes field will override other properties in the request.
For example, even if you set includeGlobalResourceTypes to false, global resource types will still be automatically recorded under this strategy unless those resource types are specifically listed as exemptions in the resourceTypes field of exclusionByResourceTypes.
By default, if you choose the EXCLUSION_BY_RESOURCE_TYPES recording strategy, when Config adds support for a new resource type in the Region where you set up the configuration recorder, including global resource types, Config starts recording resources of that type automatically.
Specifies the recording strategy of the configuration recorder.
" + }, + "RecordingStrategyType":{ + "type":"string", + "enum":[ + "ALL_SUPPORTED_RESOURCE_TYPES", + "INCLUSION_BY_RESOURCE_TYPES", + "EXCLUSION_BY_RESOURCE_TYPES" + ] }, "ReevaluateConfigRuleNames":{ "type":"list", @@ -8048,7 +8084,7 @@ "type":"structure", "members":{ }, - "documentation":"You have reached the limit of the number of tags you can use. For more information, see Service Limits in the Config Developer Guide.
", + "documentation":"You have reached the limit of the number of tags you can use. For more information, see Service Limits in the Config Developer Guide.
", "exception":true }, "UnprocessedResourceIdentifierList":{ From 65a567632414cd1da46180b760e6ea78b3bc6b0e Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 31 May 2023 18:11:58 +0000 Subject: [PATCH 005/317] Amazon WorkSpaces Web Update: WorkSpaces Web now allows you to control which IP addresses your WorkSpaces Web portal may be accessed from. --- .../feature-AmazonWorkSpacesWeb-91b4638.json | 6 + .../codegen-resources/endpoint-rule-set.json | 392 ++++++++------- .../codegen-resources/endpoint-tests.json | 473 ++++-------------- .../codegen-resources/paginators-1.json | 5 + .../codegen-resources/service-2.json | 440 +++++++++++++++- 5 files changed, 767 insertions(+), 549 deletions(-) create mode 100644 .changes/next-release/feature-AmazonWorkSpacesWeb-91b4638.json diff --git a/.changes/next-release/feature-AmazonWorkSpacesWeb-91b4638.json b/.changes/next-release/feature-AmazonWorkSpacesWeb-91b4638.json new file mode 100644 index 000000000000..48c014a50e04 --- /dev/null +++ b/.changes/next-release/feature-AmazonWorkSpacesWeb-91b4638.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon WorkSpaces Web", + "contributor": "", + "description": "WorkSpaces Web now allows you to control which IP addresses your WorkSpaces Web portal may be accessed from." 
+} diff --git a/services/workspacesweb/src/main/resources/codegen-resources/endpoint-rule-set.json b/services/workspacesweb/src/main/resources/codegen-resources/endpoint-rule-set.json index 3bc54f2343b1..1552c84bcb89 100644 --- a/services/workspacesweb/src/main/resources/codegen-resources/endpoint-rule-set.json +++ b/services/workspacesweb/src/main/resources/codegen-resources/endpoint-rule-set.json @@ -3,7 +3,7 @@ "parameters": { "Region": { "builtIn": "AWS::Region", - "required": true, + "required": false, "documentation": "The AWS region used to dispatch the request.", "type": "String" }, @@ -32,13 +32,12 @@ { "conditions": [ { - "fn": "aws.partition", + "fn": "isSet", "argv": [ { - "ref": "Region" + "ref": "Endpoint" } - ], - "assign": "PartitionResult" + ] } ], "type": "tree", @@ -46,14 +45,20 @@ { "conditions": [ { - "fn": "isSet", + "fn": "booleanEquals", "argv": [ { - "ref": "Endpoint" - } + "ref": "UseFIPS" + }, + true ] } ], + "error": "Invalid Configuration: FIPS and custom endpoint are not supported", + "type": "error" + }, + { + "conditions": [], "type": "tree", "rules": [ { @@ -62,67 +67,42 @@ "fn": "booleanEquals", "argv": [ { - "ref": "UseFIPS" + "ref": "UseDualStack" }, true ] } ], - "error": "Invalid Configuration: FIPS and custom endpoint are not supported", + "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", "type": "error" }, { "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseDualStack" - }, - true - ] - } - ], - "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", - "type": "error" + "endpoint": { + "url": { + "ref": "Endpoint" }, - { - "conditions": [], - "endpoint": { - "url": { - "ref": "Endpoint" - }, - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } - ] + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] - }, + } + ] + }, + { + "conditions": [], + "type": "tree", + 
"rules": [ { "conditions": [ { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseFIPS" - }, - true - ] - }, - { - "fn": "booleanEquals", + "fn": "isSet", "argv": [ { - "ref": "UseDualStack" - }, - true + "ref": "Region" + } ] } ], @@ -131,90 +111,215 @@ { "conditions": [ { - "fn": "booleanEquals", + "fn": "aws.partition", "argv": [ - true, { - "fn": "getAttr", + "ref": "Region" + } + ], + "assign": "PartitionResult" + } + ], + "type": "tree", + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseFIPS" + }, + true + ] + }, + { + "fn": "booleanEquals", "argv": [ { - "ref": "PartitionResult" + "ref": "UseDualStack" + }, + true + ] + } + ], + "type": "tree", + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + true, + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + } + ] }, - "supportsFIPS" + { + "fn": "booleanEquals", + "argv": [ + true, + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsDualStack" + ] + } + ] + } + ], + "type": "tree", + "rules": [ + { + "conditions": [], + "type": "tree", + "rules": [ + { + "conditions": [], + "endpoint": { + "url": "https://workspaces-web-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ] + } ] + }, + { + "conditions": [], + "error": "FIPS and DualStack are enabled, but this partition does not support one or both", + "type": "error" } ] }, { - "fn": "booleanEquals", - "argv": [ - true, + "conditions": [ { - "fn": "getAttr", + "fn": "booleanEquals", "argv": [ { - "ref": "PartitionResult" + "ref": "UseFIPS" }, - "supportsDualStack" + true ] } + ], + "type": "tree", + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + true, + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + } + ] + } + ], + "type": "tree", + "rules": [ + { + "conditions": [], + "type": 
"tree", + "rules": [ + { + "conditions": [], + "endpoint": { + "url": "https://workspaces-web-fips.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ] + } + ] + }, + { + "conditions": [], + "error": "FIPS is enabled but this partition does not support FIPS", + "type": "error" + } ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://workspaces-web-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } - ] - }, - { - "conditions": [], - "error": "FIPS and DualStack are enabled, but this partition does not support one or both", - "type": "error" - } - ] - }, - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseFIPS" }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ { - "fn": "booleanEquals", - "argv": [ - true, + "conditions": [ { - "fn": "getAttr", + "fn": "booleanEquals", "argv": [ { - "ref": "PartitionResult" + "ref": "UseDualStack" }, - "supportsFIPS" + true + ] + } + ], + "type": "tree", + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + true, + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsDualStack" + ] + } + ] + } + ], + "type": "tree", + "rules": [ + { + "conditions": [], + "type": "tree", + "rules": [ + { + "conditions": [], + "endpoint": { + "url": "https://workspaces-web.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ] + } ] + }, + { + "conditions": [], + "error": "DualStack is enabled but this partition does not support DualStack", + "type": "error" } ] - } - ], - "type": "tree", - "rules": [ + }, { "conditions": [], "type": "tree", @@ -222,7 +327,7 @@ { "conditions": [], "endpoint": { - "url": "https://workspaces-web-fips.{Region}.{PartitionResult#dnsSuffix}", + "url": 
"https://workspaces-web.{Region}.{PartitionResult#dnsSuffix}", "properties": {}, "headers": {} }, @@ -231,74 +336,13 @@ ] } ] - }, - { - "conditions": [], - "error": "FIPS is enabled but this partition does not support FIPS", - "type": "error" - } - ] - }, - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://workspaces-web.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } - ] - }, - { - "conditions": [], - "error": "DualStack is enabled but this partition does not support DualStack", - "type": "error" } ] }, { "conditions": [], - "endpoint": { - "url": "https://workspaces-web.{Region}.{PartitionResult#dnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" + "error": "Invalid Configuration: Missing Region", + "type": "error" } ] } diff --git a/services/workspacesweb/src/main/resources/codegen-resources/endpoint-tests.json b/services/workspacesweb/src/main/resources/codegen-resources/endpoint-tests.json index 02b9e9cf043d..c62e398b8733 100644 --- a/services/workspacesweb/src/main/resources/codegen-resources/endpoint-tests.json +++ b/services/workspacesweb/src/main/resources/codegen-resources/endpoint-tests.json @@ -1,198 +1,29 @@ { "testCases": [ { - "documentation": "For region ap-south-1 with FIPS enabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web-fips.ap-south-1.api.aws" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": true, - "Region": "ap-south-1" - } - }, - { - "documentation": "For region ap-south-1 with FIPS enabled and DualStack disabled", - "expect": { - 
"endpoint": { - "url": "https://workspaces-web-fips.ap-south-1.amazonaws.com" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": false, - "Region": "ap-south-1" - } - }, - { - "documentation": "For region ap-south-1 with FIPS disabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web.ap-south-1.api.aws" - } - }, - "params": { - "UseFIPS": false, - "UseDualStack": true, - "Region": "ap-south-1" - } - }, - { - "documentation": "For region ap-south-1 with FIPS disabled and DualStack disabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web.ap-south-1.amazonaws.com" - } - }, - "params": { - "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-south-1" - } - }, - { - "documentation": "For region ca-central-1 with FIPS enabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web-fips.ca-central-1.api.aws" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": true, - "Region": "ca-central-1" - } - }, - { - "documentation": "For region ca-central-1 with FIPS enabled and DualStack disabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web-fips.ca-central-1.amazonaws.com" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": false, - "Region": "ca-central-1" - } - }, - { - "documentation": "For region ca-central-1 with FIPS disabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web.ca-central-1.api.aws" - } - }, - "params": { - "UseFIPS": false, - "UseDualStack": true, - "Region": "ca-central-1" - } - }, - { - "documentation": "For region ca-central-1 with FIPS disabled and DualStack disabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web.ca-central-1.amazonaws.com" - } - }, - "params": { - "UseFIPS": false, - "UseDualStack": false, - "Region": "ca-central-1" - } - }, - { - "documentation": "For region eu-central-1 with FIPS enabled and DualStack enabled", - "expect": { - "endpoint": { - "url": 
"https://workspaces-web-fips.eu-central-1.api.aws" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": true, - "Region": "eu-central-1" - } - }, - { - "documentation": "For region eu-central-1 with FIPS enabled and DualStack disabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web-fips.eu-central-1.amazonaws.com" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": false, - "Region": "eu-central-1" - } - }, - { - "documentation": "For region eu-central-1 with FIPS disabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web.eu-central-1.api.aws" - } - }, - "params": { - "UseFIPS": false, - "UseDualStack": true, - "Region": "eu-central-1" - } - }, - { - "documentation": "For region eu-central-1 with FIPS disabled and DualStack disabled", + "documentation": "For region eu-west-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web.eu-central-1.amazonaws.com" + "url": "https://workspaces-web.eu-west-1.amazonaws.com" } }, "params": { + "Region": "eu-west-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "eu-central-1" - } - }, - { - "documentation": "For region us-west-2 with FIPS enabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web-fips.us-west-2.api.aws" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": true, - "Region": "us-west-2" + "UseDualStack": false } }, { - "documentation": "For region us-west-2 with FIPS enabled and DualStack disabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web-fips.us-west-2.amazonaws.com" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": false, - "Region": "us-west-2" - } - }, - { - "documentation": "For region us-west-2 with FIPS disabled and DualStack enabled", + "documentation": "For region us-east-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web.us-west-2.api.aws" + "url": 
"https://workspaces-web.us-east-1.amazonaws.com" } }, "params": { + "Region": "us-east-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "us-west-2" + "UseDualStack": false } }, { @@ -203,377 +34,266 @@ } }, "params": { + "Region": "us-west-2", "UseFIPS": false, - "UseDualStack": false, - "Region": "us-west-2" - } - }, - { - "documentation": "For region eu-west-2 with FIPS enabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web-fips.eu-west-2.api.aws" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": true, - "Region": "eu-west-2" - } - }, - { - "documentation": "For region eu-west-2 with FIPS enabled and DualStack disabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web-fips.eu-west-2.amazonaws.com" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": false, - "Region": "eu-west-2" - } - }, - { - "documentation": "For region eu-west-2 with FIPS disabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web.eu-west-2.api.aws" - } - }, - "params": { - "UseFIPS": false, - "UseDualStack": true, - "Region": "eu-west-2" + "UseDualStack": false } }, { - "documentation": "For region eu-west-2 with FIPS disabled and DualStack disabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web.eu-west-2.amazonaws.com" - } - }, - "params": { - "UseFIPS": false, - "UseDualStack": false, - "Region": "eu-west-2" - } - }, - { - "documentation": "For region eu-west-1 with FIPS enabled and DualStack enabled", + "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://workspaces-web-fips.eu-west-1.api.aws" + "url": "https://workspaces-web-fips.us-east-1.api.aws" } }, "params": { + "Region": "us-east-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "eu-west-1" + "UseDualStack": true } }, { - "documentation": "For region eu-west-1 with FIPS enabled and DualStack disabled", + "documentation": 
"For region us-east-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web-fips.eu-west-1.amazonaws.com" + "url": "https://workspaces-web-fips.us-east-1.amazonaws.com" } }, "params": { + "Region": "us-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "eu-west-1" - } - }, - { - "documentation": "For region eu-west-1 with FIPS disabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web.eu-west-1.api.aws" - } - }, - "params": { - "UseFIPS": false, - "UseDualStack": true, - "Region": "eu-west-1" + "UseDualStack": false } }, { - "documentation": "For region eu-west-1 with FIPS disabled and DualStack disabled", + "documentation": "For region us-east-1 with FIPS disabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://workspaces-web.eu-west-1.amazonaws.com" + "url": "https://workspaces-web.us-east-1.api.aws" } }, "params": { + "Region": "us-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "eu-west-1" + "UseDualStack": true } }, { - "documentation": "For region ap-northeast-2 with FIPS enabled and DualStack enabled", + "documentation": "For region cn-north-1 with FIPS enabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://workspaces-web-fips.ap-northeast-2.api.aws" + "url": "https://workspaces-web-fips.cn-north-1.api.amazonwebservices.com.cn" } }, "params": { + "Region": "cn-north-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "ap-northeast-2" + "UseDualStack": true } }, { - "documentation": "For region ap-northeast-2 with FIPS enabled and DualStack disabled", + "documentation": "For region cn-north-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web-fips.ap-northeast-2.amazonaws.com" + "url": "https://workspaces-web-fips.cn-north-1.amazonaws.com.cn" } }, "params": { + "Region": "cn-north-1", "UseFIPS": true, - "UseDualStack": false, - "Region": 
"ap-northeast-2" + "UseDualStack": false } }, { - "documentation": "For region ap-northeast-2 with FIPS disabled and DualStack enabled", + "documentation": "For region cn-north-1 with FIPS disabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://workspaces-web.ap-northeast-2.api.aws" + "url": "https://workspaces-web.cn-north-1.api.amazonwebservices.com.cn" } }, "params": { + "Region": "cn-north-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "ap-northeast-2" + "UseDualStack": true } }, { - "documentation": "For region ap-northeast-2 with FIPS disabled and DualStack disabled", + "documentation": "For region cn-north-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web.ap-northeast-2.amazonaws.com" + "url": "https://workspaces-web.cn-north-1.amazonaws.com.cn" } }, "params": { + "Region": "cn-north-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-northeast-2" + "UseDualStack": false } }, { - "documentation": "For region ap-northeast-1 with FIPS enabled and DualStack enabled", + "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://workspaces-web-fips.ap-northeast-1.api.aws" + "url": "https://workspaces-web-fips.us-gov-east-1.api.aws" } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "ap-northeast-1" + "UseDualStack": true } }, { - "documentation": "For region ap-northeast-1 with FIPS enabled and DualStack disabled", + "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web-fips.ap-northeast-1.amazonaws.com" + "url": "https://workspaces-web-fips.us-gov-east-1.amazonaws.com" } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "ap-northeast-1" + "UseDualStack": false } }, { - "documentation": "For region ap-northeast-1 
with FIPS disabled and DualStack enabled", + "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://workspaces-web.ap-northeast-1.api.aws" + "url": "https://workspaces-web.us-gov-east-1.api.aws" } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "ap-northeast-1" + "UseDualStack": true } }, { - "documentation": "For region ap-northeast-1 with FIPS disabled and DualStack disabled", + "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web.ap-northeast-1.amazonaws.com" + "url": "https://workspaces-web.us-gov-east-1.amazonaws.com" } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-northeast-1" + "UseDualStack": false } }, { - "documentation": "For region ap-southeast-1 with FIPS enabled and DualStack enabled", + "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack enabled", "expect": { - "endpoint": { - "url": "https://workspaces-web-fips.ap-southeast-1.api.aws" - } + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" }, "params": { + "Region": "us-iso-east-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "ap-southeast-1" + "UseDualStack": true } }, { - "documentation": "For region ap-southeast-1 with FIPS enabled and DualStack disabled", + "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web-fips.ap-southeast-1.amazonaws.com" + "url": "https://workspaces-web-fips.us-iso-east-1.c2s.ic.gov" } }, "params": { + "Region": "us-iso-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "ap-southeast-1" + "UseDualStack": false } }, { - "documentation": "For region ap-southeast-1 with FIPS disabled and DualStack enabled", + "documentation": "For 
region us-iso-east-1 with FIPS disabled and DualStack enabled", "expect": { - "endpoint": { - "url": "https://workspaces-web.ap-southeast-1.api.aws" - } + "error": "DualStack is enabled but this partition does not support DualStack" }, "params": { + "Region": "us-iso-east-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "ap-southeast-1" + "UseDualStack": true } }, { - "documentation": "For region ap-southeast-1 with FIPS disabled and DualStack disabled", + "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web.ap-southeast-1.amazonaws.com" + "url": "https://workspaces-web.us-iso-east-1.c2s.ic.gov" } }, "params": { + "Region": "us-iso-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-southeast-1" + "UseDualStack": false } }, { - "documentation": "For region ap-southeast-2 with FIPS enabled and DualStack enabled", + "documentation": "For region us-isob-east-1 with FIPS enabled and DualStack enabled", "expect": { - "endpoint": { - "url": "https://workspaces-web-fips.ap-southeast-2.api.aws" - } + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" }, "params": { + "Region": "us-isob-east-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "ap-southeast-2" + "UseDualStack": true } }, { - "documentation": "For region ap-southeast-2 with FIPS enabled and DualStack disabled", + "documentation": "For region us-isob-east-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web-fips.ap-southeast-2.amazonaws.com" + "url": "https://workspaces-web-fips.us-isob-east-1.sc2s.sgov.gov" } }, "params": { + "Region": "us-isob-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "ap-southeast-2" + "UseDualStack": false } }, { - "documentation": "For region ap-southeast-2 with FIPS disabled and DualStack enabled", + "documentation": "For region us-isob-east-1 with FIPS 
disabled and DualStack enabled", "expect": { - "endpoint": { - "url": "https://workspaces-web.ap-southeast-2.api.aws" - } + "error": "DualStack is enabled but this partition does not support DualStack" }, "params": { + "Region": "us-isob-east-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "ap-southeast-2" + "UseDualStack": true } }, { - "documentation": "For region ap-southeast-2 with FIPS disabled and DualStack disabled", + "documentation": "For region us-isob-east-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web.ap-southeast-2.amazonaws.com" + "url": "https://workspaces-web.us-isob-east-1.sc2s.sgov.gov" } }, "params": { + "Region": "us-isob-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-southeast-2" + "UseDualStack": false } }, { - "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled", + "documentation": "For custom endpoint with region set and fips disabled and dualstack disabled", "expect": { "endpoint": { - "url": "https://workspaces-web-fips.us-east-1.api.aws" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": true, - "Region": "us-east-1" - } - }, - { - "documentation": "For region us-east-1 with FIPS enabled and DualStack disabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web-fips.us-east-1.amazonaws.com" - } - }, - "params": { - "UseFIPS": true, - "UseDualStack": false, - "Region": "us-east-1" - } - }, - { - "documentation": "For region us-east-1 with FIPS disabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web.us-east-1.api.aws" - } - }, - "params": { - "UseFIPS": false, - "UseDualStack": true, - "Region": "us-east-1" - } - }, - { - "documentation": "For region us-east-1 with FIPS disabled and DualStack disabled", - "expect": { - "endpoint": { - "url": "https://workspaces-web.us-east-1.amazonaws.com" + "url": "https://example.com" } }, "params": { + "Region": "us-east-1", 
"UseFIPS": false, "UseDualStack": false, - "Region": "us-east-1" + "Endpoint": "https://example.com" } }, { - "documentation": "For custom endpoint with fips disabled and dualstack disabled", + "documentation": "For custom endpoint with region not set and fips disabled and dualstack disabled", "expect": { "endpoint": { "url": "https://example.com" @@ -582,7 +302,6 @@ "params": { "UseFIPS": false, "UseDualStack": false, - "Region": "us-east-1", "Endpoint": "https://example.com" } }, @@ -592,9 +311,9 @@ "error": "Invalid Configuration: FIPS and custom endpoint are not supported" }, "params": { + "Region": "us-east-1", "UseFIPS": true, "UseDualStack": false, - "Region": "us-east-1", "Endpoint": "https://example.com" } }, @@ -604,11 +323,17 @@ "error": "Invalid Configuration: Dualstack and custom endpoint are not supported" }, "params": { + "Region": "us-east-1", "UseFIPS": false, "UseDualStack": true, - "Region": "us-east-1", "Endpoint": "https://example.com" } + }, + { + "documentation": "Missing region", + "expect": { + "error": "Invalid Configuration: Missing Region" + } } ], "version": "1.0" diff --git a/services/workspacesweb/src/main/resources/codegen-resources/paginators-1.json b/services/workspacesweb/src/main/resources/codegen-resources/paginators-1.json index 202a6316819a..98a378650342 100644 --- a/services/workspacesweb/src/main/resources/codegen-resources/paginators-1.json +++ b/services/workspacesweb/src/main/resources/codegen-resources/paginators-1.json @@ -10,6 +10,11 @@ "output_token": "nextToken", "limit_key": "maxResults" }, + "ListIpAccessSettings": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults" + }, "ListNetworkSettings": { "input_token": "nextToken", "output_token": "nextToken", diff --git a/services/workspacesweb/src/main/resources/codegen-resources/service-2.json b/services/workspacesweb/src/main/resources/codegen-resources/service-2.json index aa2849446d89..1efd59c263c9 100644 --- 
a/services/workspacesweb/src/main/resources/codegen-resources/service-2.json +++ b/services/workspacesweb/src/main/resources/codegen-resources/service-2.json @@ -32,6 +32,26 @@ "documentation":"Associates a browser settings resource with a web portal.
", "idempotent":true }, + "AssociateIpAccessSettings":{ + "name":"AssociateIpAccessSettings", + "http":{ + "method":"PUT", + "requestUri":"/portals/{portalArn+}/ipAccessSettings", + "responseCode":200 + }, + "input":{"shape":"AssociateIpAccessSettingsRequest"}, + "output":{"shape":"AssociateIpAccessSettingsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"} + ], + "documentation":"Associates an IP access settings resource with a web portal.
", + "idempotent":true + }, "AssociateNetworkSettings":{ "name":"AssociateNetworkSettings", "http":{ @@ -151,6 +171,25 @@ ], "documentation":"Creates an identity provider resource that is then associated with a web portal.
" }, + "CreateIpAccessSettings":{ + "name":"CreateIpAccessSettings", + "http":{ + "method":"POST", + "requestUri":"/ipAccessSettings", + "responseCode":200 + }, + "input":{"shape":"CreateIpAccessSettingsRequest"}, + "output":{"shape":"CreateIpAccessSettingsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"} + ], + "documentation":"Creates an IP access settings resource that can be associated with a web portal.
" + }, "CreateNetworkSettings":{ "name":"CreateNetworkSettings", "http":{ @@ -285,6 +324,25 @@ "documentation":"Deletes the identity provider.
", "idempotent":true }, + "DeleteIpAccessSettings":{ + "name":"DeleteIpAccessSettings", + "http":{ + "method":"DELETE", + "requestUri":"/ipAccessSettings/{ipAccessSettingsArn+}", + "responseCode":200 + }, + "input":{"shape":"DeleteIpAccessSettingsRequest"}, + "output":{"shape":"DeleteIpAccessSettingsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"} + ], + "documentation":"Deletes IP access settings.
", + "idempotent":true + }, "DeleteNetworkSettings":{ "name":"DeleteNetworkSettings", "http":{ @@ -399,6 +457,25 @@ "documentation":"Disassociates browser settings from a web portal.
", "idempotent":true }, + "DisassociateIpAccessSettings":{ + "name":"DisassociateIpAccessSettings", + "http":{ + "method":"DELETE", + "requestUri":"/portals/{portalArn+}/ipAccessSettings", + "responseCode":200 + }, + "input":{"shape":"DisassociateIpAccessSettingsRequest"}, + "output":{"shape":"DisassociateIpAccessSettingsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"ValidationException"} + ], + "documentation":"Disassociates IP access settings from a web portal.
", + "idempotent":true + }, "DisassociateNetworkSettings":{ "name":"DisassociateNetworkSettings", "http":{ @@ -511,6 +588,24 @@ ], "documentation":"Gets the identity provider.
" }, + "GetIpAccessSettings":{ + "name":"GetIpAccessSettings", + "http":{ + "method":"GET", + "requestUri":"/ipAccessSettings/{ipAccessSettingsArn+}", + "responseCode":200 + }, + "input":{"shape":"GetIpAccessSettingsRequest"}, + "output":{"shape":"GetIpAccessSettingsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"ValidationException"} + ], + "documentation":"Gets the IP access settings.
" + }, "GetNetworkSettings":{ "name":"GetNetworkSettings", "http":{ @@ -671,6 +766,23 @@ ], "documentation":"Retrieves a list of identity providers for a specific web portal.
" }, + "ListIpAccessSettings":{ + "name":"ListIpAccessSettings", + "http":{ + "method":"GET", + "requestUri":"/ipAccessSettings", + "responseCode":200 + }, + "input":{"shape":"ListIpAccessSettingsRequest"}, + "output":{"shape":"ListIpAccessSettingsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"ValidationException"} + ], + "documentation":"Retrieves a list of IP access settings.
" + }, "ListNetworkSettings":{ "name":"ListNetworkSettings", "http":{ @@ -866,6 +978,24 @@ ], "documentation":"Updates the identity provider.
" }, + "UpdateIpAccessSettings":{ + "name":"UpdateIpAccessSettings", + "http":{ + "method":"PATCH", + "requestUri":"/ipAccessSettings/{ipAccessSettingsArn+}", + "responseCode":200 + }, + "input":{"shape":"UpdateIpAccessSettingsRequest"}, + "output":{"shape":"UpdateIpAccessSettingsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"ValidationException"} + ], + "documentation":"Updates IP access settings.
" + }, "UpdateNetworkSettings":{ "name":"UpdateNetworkSettings", "http":{ @@ -898,7 +1028,8 @@ {"shape":"ResourceNotFoundException"}, {"shape":"AccessDeniedException"}, {"shape":"ThrottlingException"}, - {"shape":"ValidationException"} + {"shape":"ValidationException"}, + {"shape":"ConflictException"} ], "documentation":"Updates a web portal.
", "idempotent":true @@ -1020,6 +1151,44 @@ } } }, + "AssociateIpAccessSettingsRequest":{ + "type":"structure", + "required":[ + "ipAccessSettingsArn", + "portalArn" + ], + "members":{ + "ipAccessSettingsArn":{ + "shape":"ARN", + "documentation":"The ARN of the IP access settings.
", + "location":"querystring", + "locationName":"ipAccessSettingsArn" + }, + "portalArn":{ + "shape":"ARN", + "documentation":"The ARN of the web portal.
", + "location":"uri", + "locationName":"portalArn" + } + } + }, + "AssociateIpAccessSettingsResponse":{ + "type":"structure", + "required":[ + "ipAccessSettingsArn", + "portalArn" + ], + "members":{ + "ipAccessSettingsArn":{ + "shape":"ARN", + "documentation":"The ARN of the IP access settings resource.
" + }, + "portalArn":{ + "shape":"ARN", + "documentation":"The ARN of the web portal.
" + } + } + }, "AssociateNetworkSettingsRequest":{ "type":"structure", "required":[ @@ -1408,6 +1577,51 @@ } } }, + "CreateIpAccessSettingsRequest":{ + "type":"structure", + "required":["ipRules"], + "members":{ + "additionalEncryptionContext":{ + "shape":"EncryptionContextMap", + "documentation":"Additional encryption context of the IP access settings.
"
+        },
+        "clientToken":{
+          "shape":"ClientToken",
+          "documentation":"A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Idempotency ensures that an API request completes only once. With an idempotent request, if the original request completes successfully, subsequent retries with the same client token return the result from the original successful request.
If you do not specify a client token, one is automatically generated by the AWS SDK.
",
+          "idempotencyToken":true
+        },
+        "customerManagedKey":{
+          "shape":"keyArn",
+          "documentation":"The customer managed key of the IP access settings.
" + }, + "description":{ + "shape":"Description", + "documentation":"The description of the IP access settings.
" + }, + "displayName":{ + "shape":"DisplayName", + "documentation":"The display name of the IP access settings.
" + }, + "ipRules":{ + "shape":"IpRuleList", + "documentation":"The IP rules of the IP access settings.
"
+        },
+        "tags":{
+          "shape":"TagList",
+          "documentation":"The tags to add to the IP access settings resource. A tag is a key-value pair.
" + } + } + }, + "CreateIpAccessSettingsResponse":{ + "type":"structure", + "required":["ipAccessSettingsArn"], + "members":{ + "ipAccessSettingsArn":{ + "shape":"ARN", + "documentation":"The ARN of the IP access settings resource.
" + } + } + }, "CreateNetworkSettingsRequest":{ "type":"structure", "required":[ @@ -1647,6 +1861,23 @@ "members":{ } }, + "DeleteIpAccessSettingsRequest":{ + "type":"structure", + "required":["ipAccessSettingsArn"], + "members":{ + "ipAccessSettingsArn":{ + "shape":"ARN", + "documentation":"The ARN of the IP access settings.
", + "location":"uri", + "locationName":"ipAccessSettingsArn" + } + } + }, + "DeleteIpAccessSettingsResponse":{ + "type":"structure", + "members":{ + } + }, "DeleteNetworkSettingsRequest":{ "type":"structure", "required":["networkSettingsArn"], @@ -1732,6 +1963,13 @@ "members":{ } }, + "Description":{ + "type":"string", + "max":256, + "min":1, + "pattern":"^.+$", + "sensitive":true + }, "DisassociateBrowserSettingsRequest":{ "type":"structure", "required":["portalArn"], @@ -1749,6 +1987,23 @@ "members":{ } }, + "DisassociateIpAccessSettingsRequest":{ + "type":"structure", + "required":["portalArn"], + "members":{ + "portalArn":{ + "shape":"ARN", + "documentation":"The ARN of the web portal.
", + "location":"uri", + "locationName":"portalArn" + } + } + }, + "DisassociateIpAccessSettingsResponse":{ + "type":"structure", + "members":{ + } + }, "DisassociateNetworkSettingsRequest":{ "type":"structure", "required":["portalArn"], @@ -1886,6 +2141,27 @@ } } }, + "GetIpAccessSettingsRequest":{ + "type":"structure", + "required":["ipAccessSettingsArn"], + "members":{ + "ipAccessSettingsArn":{ + "shape":"ARN", + "documentation":"The ARN of the IP access settings.
", + "location":"uri", + "locationName":"ipAccessSettingsArn" + } + } + }, + "GetIpAccessSettingsResponse":{ + "type":"structure", + "members":{ + "ipAccessSettings":{ + "shape":"IpAccessSettings", + "documentation":"The IP access settings.
" + } + } + }, "GetNetworkSettingsRequest":{ "type":"structure", "required":["networkSettingsArn"], @@ -2142,6 +2418,91 @@ "exception":true, "fault":true }, + "IpAccessSettings":{ + "type":"structure", + "required":["ipAccessSettingsArn"], + "members":{ + "associatedPortalArns":{ + "shape":"ArnList", + "documentation":"A list of web portal ARNs that this IP access settings resource is associated with.
" + }, + "creationDate":{ + "shape":"Timestamp", + "documentation":"The creation date timestamp of the IP access settings.
" + }, + "description":{ + "shape":"Description", + "documentation":"The description of the IP access settings.
" + }, + "displayName":{ + "shape":"DisplayName", + "documentation":"The display name of the IP access settings.
" + }, + "ipAccessSettingsArn":{ + "shape":"ARN", + "documentation":"The ARN of the IP access settings resource.
" + }, + "ipRules":{ + "shape":"IpRuleList", + "documentation":"The IP rules of the IP access settings.
" + } + }, + "documentation":"The IP access settings resource that can be associated with a web portal.
" + }, + "IpAccessSettingsList":{ + "type":"list", + "member":{"shape":"IpAccessSettingsSummary"} + }, + "IpAccessSettingsSummary":{ + "type":"structure", + "members":{ + "creationDate":{ + "shape":"Timestamp", + "documentation":"The creation date timestamp of the IP access settings.
" + }, + "description":{ + "shape":"Description", + "documentation":"The description of the IP access settings.
" + }, + "displayName":{ + "shape":"DisplayName", + "documentation":"The display name of the IP access settings.
"
+        },
+        "ipAccessSettingsArn":{
+          "shape":"ARN",
+          "documentation":"The ARN of the IP access settings.
" + } + }, + "documentation":"The summary of IP access settings.
"
+    },
+    "IpRange":{
+      "type":"string",
+      "documentation":"A single IP address or an IP address range in CIDR notation.
", + "pattern":"^\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}(?:/([0-9]|[12][0-9]|3[0-2])|)$", + "sensitive":true + }, + "IpRule":{ + "type":"structure", + "required":["ipRange"], + "members":{ + "description":{ + "shape":"Description", + "documentation":"The description of the IP rule.
" + }, + "ipRange":{ + "shape":"IpRange", + "documentation":"The IP range of the IP rule.
" + } + }, + "documentation":"The IP rules of the IP access settings.
" + }, + "IpRuleList":{ + "type":"list", + "member":{"shape":"IpRule"}, + "max":100, + "min":1, + "sensitive":true + }, "KinesisStreamArn":{ "type":"string", "documentation":"Kinesis stream ARN to which log events are published.
", @@ -2216,6 +2577,36 @@ } } }, + "ListIpAccessSettingsRequest":{ + "type":"structure", + "members":{ + "maxResults":{ + "shape":"MaxResults", + "documentation":"The maximum number of results to be included in the next page.
", + "location":"querystring", + "locationName":"maxResults" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"The pagination token used to retrieve the next page of results for this operation.
", + "location":"querystring", + "locationName":"nextToken" + } + } + }, + "ListIpAccessSettingsResponse":{ + "type":"structure", + "members":{ + "ipAccessSettings":{ + "shape":"IpAccessSettingsList", + "documentation":"The IP access settings.
" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"The pagination token used to retrieve the next page of results for this operation.
" + } + } + }, "ListNetworkSettingsRequest":{ "type":"structure", "members":{ @@ -2507,6 +2898,10 @@ "shape":"DisplayName", "documentation":"The name of the web portal.
" }, + "ipAccessSettingsArn":{ + "shape":"ARN", + "documentation":"The ARN of the IP access settings.
" + }, "networkSettingsArn":{ "shape":"ARN", "documentation":"The ARN of the network settings that is associated with the web portal.
" @@ -2587,6 +2982,10 @@ "shape":"DisplayName", "documentation":"The name of the web portal.
" }, + "ipAccessSettingsArn":{ + "shape":"ARN", + "documentation":"The ARN of the IP access settings.
" + }, "networkSettingsArn":{ "shape":"ARN", "documentation":"The ARN of the network settings that is associated with the web portal.
" @@ -2963,6 +3362,45 @@ } } }, + "UpdateIpAccessSettingsRequest":{ + "type":"structure", + "required":["ipAccessSettingsArn"], + "members":{ + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Idempotency ensures that an API request completes only once. With an idempotent request, if the original request completes successfully, subsequent retries with the same client token return the result from the original successful request.
If you do not specify a client token, one is automatically generated by the AWS SDK.
", + "idempotencyToken":true + }, + "description":{ + "shape":"Description", + "documentation":"The description of the IP access settings.
" + }, + "displayName":{ + "shape":"DisplayName", + "documentation":"The display name of the IP access settings.
" + }, + "ipAccessSettingsArn":{ + "shape":"ARN", + "documentation":"The ARN of the IP access settings.
", + "location":"uri", + "locationName":"ipAccessSettingsArn" + }, + "ipRules":{ + "shape":"IpRuleList", + "documentation":"The updated IP rules of the IP access settings.
" + } + } + }, + "UpdateIpAccessSettingsResponse":{ + "type":"structure", + "required":["ipAccessSettings"], + "members":{ + "ipAccessSettings":{ + "shape":"IpAccessSettings", + "documentation":"The IP access settings.
" + } + } + }, "UpdateNetworkSettingsRequest":{ "type":"structure", "required":["networkSettingsArn"], From 84ea7ec14524255150191e7a6c978e4243977d97 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 31 May 2023 18:11:58 +0000 Subject: [PATCH 006/317] Amazon HealthLake Update: This release adds a new request parameter to the CreateFHIRDatastore API operation. IdentityProviderConfiguration specifies how you want to authenticate incoming requests to your Healthlake Data Store. --- .../feature-AmazonHealthLake-305c08e.json | 6 + .../codegen-resources/endpoint-rule-set.json | 399 ++++++++++-------- .../codegen-resources/endpoint-tests.json | 261 +++++++++--- .../codegen-resources/service-2.json | 68 ++- 4 files changed, 484 insertions(+), 250 deletions(-) create mode 100644 .changes/next-release/feature-AmazonHealthLake-305c08e.json diff --git a/.changes/next-release/feature-AmazonHealthLake-305c08e.json b/.changes/next-release/feature-AmazonHealthLake-305c08e.json new file mode 100644 index 000000000000..6430f2d3ce50 --- /dev/null +++ b/.changes/next-release/feature-AmazonHealthLake-305c08e.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon HealthLake", + "contributor": "", + "description": "This release adds a new request parameter to the CreateFHIRDatastore API operation. IdentityProviderConfiguration specifies how you want to authenticate incoming requests to your Healthlake Data Store." 
+} diff --git a/services/healthlake/src/main/resources/codegen-resources/endpoint-rule-set.json b/services/healthlake/src/main/resources/codegen-resources/endpoint-rule-set.json index d21bd84ee0dc..8acf26e8a35b 100644 --- a/services/healthlake/src/main/resources/codegen-resources/endpoint-rule-set.json +++ b/services/healthlake/src/main/resources/codegen-resources/endpoint-rule-set.json @@ -32,13 +32,12 @@ { "conditions": [ { - "fn": "aws.partition", + "fn": "isSet", "argv": [ { - "ref": "Region" + "ref": "Endpoint" } - ], - "assign": "PartitionResult" + ] } ], "type": "tree", @@ -46,23 +45,20 @@ { "conditions": [ { - "fn": "isSet", + "fn": "booleanEquals", "argv": [ { - "ref": "Endpoint" - } + "ref": "UseFIPS" + }, + true ] - }, - { - "fn": "parseURL", - "argv": [ - { - "ref": "Endpoint" - } - ], - "assign": "url" } ], + "error": "Invalid Configuration: FIPS and custom endpoint are not supported", + "type": "error" + }, + { + "conditions": [], "type": "tree", "rules": [ { @@ -71,67 +67,42 @@ "fn": "booleanEquals", "argv": [ { - "ref": "UseFIPS" + "ref": "UseDualStack" }, true ] } ], - "error": "Invalid Configuration: FIPS and custom endpoint are not supported", + "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", "type": "error" }, { "conditions": [], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseDualStack" - }, - true - ] - } - ], - "error": "Invalid Configuration: Dualstack and custom endpoint are not supported", - "type": "error" + "endpoint": { + "url": { + "ref": "Endpoint" }, - { - "conditions": [], - "endpoint": { - "url": { - "ref": "Endpoint" - }, - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } - ] + "properties": {}, + "headers": {} + }, + "type": "endpoint" } ] - }, + } + ] + }, + { + "conditions": [], + "type": "tree", + "rules": [ { "conditions": [ { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseFIPS" - }, - true - ] - }, - { - 
"fn": "booleanEquals", + "fn": "isSet", "argv": [ { - "ref": "UseDualStack" - }, - true + "ref": "Region" + } ] } ], @@ -140,90 +111,215 @@ { "conditions": [ { - "fn": "booleanEquals", + "fn": "aws.partition", "argv": [ - true, { - "fn": "getAttr", + "ref": "Region" + } + ], + "assign": "PartitionResult" + } + ], + "type": "tree", + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", "argv": [ { - "ref": "PartitionResult" + "ref": "UseFIPS" }, - "supportsFIPS" + true + ] + }, + { + "fn": "booleanEquals", + "argv": [ + { + "ref": "UseDualStack" + }, + true ] } + ], + "type": "tree", + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + true, + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + } + ] + }, + { + "fn": "booleanEquals", + "argv": [ + true, + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsDualStack" + ] + } + ] + } + ], + "type": "tree", + "rules": [ + { + "conditions": [], + "type": "tree", + "rules": [ + { + "conditions": [], + "endpoint": { + "url": "https://healthlake-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ] + } + ] + }, + { + "conditions": [], + "error": "FIPS and DualStack are enabled, but this partition does not support one or both", + "type": "error" + } ] }, { - "fn": "booleanEquals", - "argv": [ - true, + "conditions": [ { - "fn": "getAttr", + "fn": "booleanEquals", "argv": [ { - "ref": "PartitionResult" + "ref": "UseFIPS" }, - "supportsDualStack" + true + ] + } + ], + "type": "tree", + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + true, + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsFIPS" + ] + } + ] + } + ], + "type": "tree", + "rules": [ + { + "conditions": [], + "type": "tree", + "rules": [ + { + "conditions": [], + "endpoint": { + "url": 
"https://healthlake-fips.{Region}.{PartitionResult#dnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ] + } ] + }, + { + "conditions": [], + "error": "FIPS is enabled but this partition does not support FIPS", + "type": "error" } ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://healthlake-fips.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } - ] - }, - { - "conditions": [], - "error": "FIPS and DualStack are enabled, but this partition does not support one or both", - "type": "error" - } - ] - }, - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseFIPS" }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ { - "fn": "booleanEquals", - "argv": [ - true, + "conditions": [ { - "fn": "getAttr", + "fn": "booleanEquals", "argv": [ { - "ref": "PartitionResult" + "ref": "UseDualStack" }, - "supportsFIPS" + true ] } + ], + "type": "tree", + "rules": [ + { + "conditions": [ + { + "fn": "booleanEquals", + "argv": [ + true, + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "supportsDualStack" + ] + } + ] + } + ], + "type": "tree", + "rules": [ + { + "conditions": [], + "type": "tree", + "rules": [ + { + "conditions": [], + "endpoint": { + "url": "https://healthlake.{Region}.{PartitionResult#dualStackDnsSuffix}", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + } + ] + } + ] + }, + { + "conditions": [], + "error": "DualStack is enabled but this partition does not support DualStack", + "type": "error" + } ] - } - ], - "type": "tree", - "rules": [ + }, { "conditions": [], "type": "tree", @@ -231,7 +327,7 @@ { "conditions": [], "endpoint": { - "url": "https://healthlake-fips.{Region}.{PartitionResult#dnsSuffix}", + "url": "https://healthlake.{Region}.{PartitionResult#dnsSuffix}", "properties": {}, "headers": {} }, @@ -240,74 +336,13 @@ ] } ] - }, - { 
- "conditions": [], - "error": "FIPS is enabled but this partition does not support FIPS", - "type": "error" - } - ] - }, - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - { - "ref": "UseDualStack" - }, - true - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [ - { - "fn": "booleanEquals", - "argv": [ - true, - { - "fn": "getAttr", - "argv": [ - { - "ref": "PartitionResult" - }, - "supportsDualStack" - ] - } - ] - } - ], - "type": "tree", - "rules": [ - { - "conditions": [], - "endpoint": { - "url": "https://healthlake.{Region}.{PartitionResult#dualStackDnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" - } - ] - }, - { - "conditions": [], - "error": "DualStack is enabled but this partition does not support DualStack", - "type": "error" } ] }, { "conditions": [], - "endpoint": { - "url": "https://healthlake.{Region}.{PartitionResult#dnsSuffix}", - "properties": {}, - "headers": {} - }, - "type": "endpoint" + "error": "Invalid Configuration: Missing Region", + "type": "error" } ] } diff --git a/services/healthlake/src/main/resources/codegen-resources/endpoint-tests.json b/services/healthlake/src/main/resources/codegen-resources/endpoint-tests.json index 1aef86cbbe42..a234a21aa71e 100644 --- a/services/healthlake/src/main/resources/codegen-resources/endpoint-tests.json +++ b/services/healthlake/src/main/resources/codegen-resources/endpoint-tests.json @@ -1,42 +1,29 @@ { "testCases": [ { - "documentation": "For region us-west-2 with FIPS enabled and DualStack enabled", - "expect": { - "endpoint": { - "url": "https://healthlake-fips.us-west-2.api.aws" - } - }, - "params": { - "UseDualStack": true, - "Region": "us-west-2", - "UseFIPS": true - } - }, - { - "documentation": "For region us-west-2 with FIPS enabled and DualStack disabled", + "documentation": "For region us-east-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://healthlake-fips.us-west-2.amazonaws.com" + "url": 
"https://healthlake.us-east-1.amazonaws.com" } }, "params": { - "UseDualStack": false, - "Region": "us-west-2", - "UseFIPS": true + "Region": "us-east-1", + "UseFIPS": false, + "UseDualStack": false } }, { - "documentation": "For region us-west-2 with FIPS disabled and DualStack enabled", + "documentation": "For region us-east-2 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://healthlake.us-west-2.api.aws" + "url": "https://healthlake.us-east-2.amazonaws.com" } }, "params": { - "UseDualStack": true, - "Region": "us-west-2", - "UseFIPS": false + "Region": "us-east-2", + "UseFIPS": false, + "UseDualStack": false } }, { @@ -47,9 +34,9 @@ } }, "params": { - "UseDualStack": false, "Region": "us-west-2", - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -60,9 +47,9 @@ } }, "params": { - "UseDualStack": true, "Region": "us-east-1", - "UseFIPS": true + "UseFIPS": true, + "UseDualStack": true } }, { @@ -73,9 +60,9 @@ } }, "params": { - "UseDualStack": false, "Region": "us-east-1", - "UseFIPS": true + "UseFIPS": true, + "UseDualStack": false } }, { @@ -86,87 +73,235 @@ } }, "params": { - "UseDualStack": true, "Region": "us-east-1", - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": true } }, { - "documentation": "For region us-east-1 with FIPS disabled and DualStack disabled", + "documentation": "For region cn-north-1 with FIPS enabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://healthlake.us-east-1.amazonaws.com" + "url": "https://healthlake-fips.cn-north-1.api.amazonwebservices.com.cn" } }, "params": { - "UseDualStack": false, - "Region": "us-east-1", - "UseFIPS": false + "Region": "cn-north-1", + "UseFIPS": true, + "UseDualStack": true } }, { - "documentation": "For region us-east-2 with FIPS enabled and DualStack enabled", + "documentation": "For region cn-north-1 with FIPS enabled and DualStack disabled", "expect": { "endpoint": { - "url": 
"https://healthlake-fips.us-east-2.api.aws" + "url": "https://healthlake-fips.cn-north-1.amazonaws.com.cn" } }, "params": { - "UseDualStack": true, - "Region": "us-east-2", - "UseFIPS": true + "Region": "cn-north-1", + "UseFIPS": true, + "UseDualStack": false } }, { - "documentation": "For region us-east-2 with FIPS enabled and DualStack disabled", + "documentation": "For region cn-north-1 with FIPS disabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://healthlake-fips.us-east-2.amazonaws.com" + "url": "https://healthlake.cn-north-1.api.amazonwebservices.com.cn" } }, "params": { - "UseDualStack": false, - "Region": "us-east-2", - "UseFIPS": true + "Region": "cn-north-1", + "UseFIPS": false, + "UseDualStack": true } }, { - "documentation": "For region us-east-2 with FIPS disabled and DualStack enabled", + "documentation": "For region cn-north-1 with FIPS disabled and DualStack disabled", "expect": { "endpoint": { - "url": "https://healthlake.us-east-2.api.aws" + "url": "https://healthlake.cn-north-1.amazonaws.com.cn" } }, "params": { - "UseDualStack": true, - "Region": "us-east-2", - "UseFIPS": false + "Region": "cn-north-1", + "UseFIPS": false, + "UseDualStack": false } }, { - "documentation": "For region us-east-2 with FIPS disabled and DualStack disabled", + "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://healthlake.us-east-2.amazonaws.com" + "url": "https://healthlake-fips.us-gov-east-1.api.aws" } }, "params": { - "UseDualStack": false, - "Region": "us-east-2", - "UseFIPS": false + "Region": "us-gov-east-1", + "UseFIPS": true, + "UseDualStack": true + } + }, + { + "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://healthlake-fips.us-gov-east-1.amazonaws.com" + } + }, + "params": { + "Region": "us-gov-east-1", + "UseFIPS": true, + "UseDualStack": false + } + }, + { + 
"documentation": "For region us-gov-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "endpoint": { + "url": "https://healthlake.us-gov-east-1.api.aws" + } + }, + "params": { + "Region": "us-gov-east-1", + "UseFIPS": false, + "UseDualStack": true + } + }, + { + "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://healthlake.us-gov-east-1.amazonaws.com" + } + }, + "params": { + "Region": "us-gov-east-1", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": true, + "UseDualStack": true } }, { - "documentation": "For custom endpoint with fips disabled and dualstack disabled", + "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://healthlake-fips.us-iso-east-1.c2s.ic.gov" + } + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": true, + "UseDualStack": false + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "error": "DualStack is enabled but this partition does not support DualStack" + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": false, + "UseDualStack": true + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://healthlake.us-iso-east-1.c2s.ic.gov" + } + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" + }, + 
"params": { + "Region": "us-isob-east-1", + "UseFIPS": true, + "UseDualStack": true + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS enabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://healthlake-fips.us-isob-east-1.sc2s.sgov.gov" + } + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": true, + "UseDualStack": false + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "error": "DualStack is enabled but this partition does not support DualStack" + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": false, + "UseDualStack": true + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://healthlake.us-isob-east-1.sc2s.sgov.gov" + } + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For custom endpoint with region set and fips disabled and dualstack disabled", "expect": { "endpoint": { "url": "https://example.com" } }, "params": { - "UseDualStack": false, "Region": "us-east-1", "UseFIPS": false, + "UseDualStack": false, + "Endpoint": "https://example.com" + } + }, + { + "documentation": "For custom endpoint with region not set and fips disabled and dualstack disabled", + "expect": { + "endpoint": { + "url": "https://example.com" + } + }, + "params": { + "UseFIPS": false, + "UseDualStack": false, "Endpoint": "https://example.com" } }, @@ -176,9 +311,9 @@ "error": "Invalid Configuration: FIPS and custom endpoint are not supported" }, "params": { - "UseDualStack": false, "Region": "us-east-1", "UseFIPS": true, + "UseDualStack": false, "Endpoint": "https://example.com" } }, @@ -188,11 +323,17 @@ "error": "Invalid Configuration: Dualstack and custom endpoint are not supported" }, "params": { - "UseDualStack": true, "Region": "us-east-1", "UseFIPS": false, + "UseDualStack": true, 
"Endpoint": "https://example.com" } + }, + { + "documentation": "Missing region", + "expect": { + "error": "Invalid Configuration: Missing Region" + } } ], "version": "1.0" diff --git a/services/healthlake/src/main/resources/codegen-resources/service-2.json b/services/healthlake/src/main/resources/codegen-resources/service-2.json index 99777b5cfb48..7d769be96806 100644 --- a/services/healthlake/src/main/resources/codegen-resources/service-2.json +++ b/services/healthlake/src/main/resources/codegen-resources/service-2.json @@ -205,7 +205,7 @@ {"shape":"ValidationException"}, {"shape":"ResourceNotFoundException"} ], - "documentation":"Adds a user specifed key and value tag to a Data Store.
" + "documentation":"Adds a user specified key and value tag to a Data Store.
" }, "UntagResource":{ "name":"UntagResource", @@ -237,6 +237,14 @@ "min":1, "pattern":"^arn:aws((-us-gov)|(-iso)|(-iso-b)|(-cn))?:healthlake:[a-z0-9-]+:\\d{12}:datastore\\/fhir\\/.{32}" }, + "AuthorizationStrategy":{ + "type":"string", + "enum":[ + "SMART_ON_FHIR_V1", + "AWS_AUTH" + ] + }, + "Boolean":{"type":"boolean"}, "BoundedLengthString":{ "type":"string", "max":5000, @@ -256,6 +264,7 @@ "AWS_OWNED_KMS_KEY" ] }, + "ConfigurationMetadata":{"type":"string"}, "ConflictException":{ "type":"structure", "members":{ @@ -292,6 +301,10 @@ "Tags":{ "shape":"TagList", "documentation":"Resource tags that are applied to a Data Store when it is created.
" + }, + "IdentityProviderConfiguration":{ + "shape":"IdentityProviderConfiguration", + "documentation":"The configuration of the identity provider that you want to use for your Data Store.
" } } }, @@ -310,7 +323,7 @@ }, "DatastoreArn":{ "shape":"DatastoreArn", - "documentation":"The datastore ARN is generated during the creation of the Data Store and can be found in the output from the initial Data Store creation call.
" + "documentation":"The Data Store ARN is generated during the creation of the Data Store and can be found in the output from the initial Data Store creation call.
" }, "DatastoreStatus":{ "shape":"DatastoreStatus", @@ -318,7 +331,7 @@ }, "DatastoreEndpoint":{ "shape":"BoundedLengthString", - "documentation":"The AWS endpoint for the created Data Store. For preview, only US-east-1 endpoints are supported.
" + "documentation":"The AWS endpoint for the created Data Store.
" } } }, @@ -405,9 +418,13 @@ "PreloadDataConfig":{ "shape":"PreloadDataConfig", "documentation":"The preloaded data configuration for the Data Store. Only data preloaded from Synthea is supported.
" + }, + "IdentityProviderConfiguration":{ + "shape":"IdentityProviderConfiguration", + "documentation":"The identity provider that you selected when you created the Data Store.
" } }, - "documentation":"Displays the properties of the Data Store, including the ID, Arn, name, and the status of the Data Store.
" + "documentation":"Displays the properties of the Data Store, including the ID, ARN, name, and the status of the Data Store.
" }, "DatastorePropertiesList":{ "type":"list", @@ -424,6 +441,7 @@ }, "DeleteFHIRDatastoreRequest":{ "type":"structure", + "required":["DatastoreId"], "members":{ "DatastoreId":{ "shape":"DatastoreId", @@ -460,10 +478,11 @@ }, "DescribeFHIRDatastoreRequest":{ "type":"structure", + "required":["DatastoreId"], "members":{ "DatastoreId":{ "shape":"DatastoreId", - "documentation":"The AWS-generated Data Store id. This is part of the ‘CreateFHIRDatastore’ output.
" + "documentation":"The AWS-generated Data Store ID.
" } } }, @@ -600,6 +619,29 @@ "min":20, "pattern":"arn:aws(-[^:]+)?:iam::[0-9]{12}:role/.+" }, + "IdentityProviderConfiguration":{ + "type":"structure", + "required":["AuthorizationStrategy"], + "members":{ + "AuthorizationStrategy":{ + "shape":"AuthorizationStrategy", + "documentation":"The authorization strategy that you selected when you created the Data Store.
" + }, + "FineGrainedAuthorizationEnabled":{ + "shape":"Boolean", + "documentation":"If you enabled fine-grained authorization when you created the Data Store.
" + }, + "Metadata":{ + "shape":"ConfigurationMetadata", + "documentation":"The JSON metadata elements that you want to use in your identity provider configuration. Required elements are listed based on the launch specification of the SMART application. For more information on all possible elements, see Metadata in SMART's App Launch specification.
authorization_endpoint: The URL to the OAuth2 authorization endpoint.
grant_types_supported: An array of grant types that are supported at the token endpoint. You must provide at least one grant type option. Valid options are authorization_code and client_credentials.
token_endpoint: The URL to the OAuth2 token endpoint.
capabilities: An array of strings of the SMART capabilities that the authorization server supports.
code_challenge_methods_supported: An array of strings of supported PKCE code challenge methods. You must include the S256 method in the array of PKCE code challenge methods.
The Amazon Resource Name (ARN) of the Lambda function that you want to use to decode the access token created by the authorization server.
" + } + }, + "documentation":"The identity provider configuration that you gave when the Data Store was created.
" + }, "ImportJobProperties":{ "type":"structure", "required":[ @@ -620,7 +662,7 @@ }, "JobStatus":{ "shape":"JobStatus", - "documentation":"The job status for an Import job. Possible statuses are SUBMITTED, IN_PROGRESS, COMPLETED, FAILED.
" + "documentation":"The job status for an Import job. Possible statuses are SUBMITTED, IN_PROGRESS, COMPLETED_WITH_ERRORS, COMPLETED, FAILED.
" }, "SubmitTime":{ "shape":"Timestamp", @@ -693,7 +735,11 @@ "IN_PROGRESS", "COMPLETED_WITH_ERRORS", "COMPLETED", - "FAILED" + "FAILED", + "CANCEL_SUBMITTED", + "CANCEL_IN_PROGRESS", + "CANCEL_COMPLETED", + "CANCEL_FAILED" ] }, "KmsEncryptionConfig":{ @@ -711,6 +757,12 @@ }, "documentation":"The customer-managed-key(CMK) used when creating a Data Store. If a customer owned key is not specified, an AWS owned key will be used for encryption.
" }, + "LambdaArn":{ + "type":"string", + "max":256, + "min":49, + "pattern":"arn:aws:lambda:[a-z]{2}-[a-z]+-\\d{1}:\\d{12}:function:[a-zA-Z0-9\\-_\\.]+(:(\\$LATEST|[a-zA-Z0-9\\-_]+))?" + }, "ListFHIRDatastoresRequest":{ "type":"structure", "members":{ @@ -1067,7 +1119,7 @@ }, "Value":{ "shape":"TagValue", - "documentation":"The value portion of tag. Tag values are case sensitive.
" + "documentation":"The value portion of a tag. Tag values are case sensitive.
" } }, "documentation":"A tag is a label consisting of a user-defined key and value. The form for tags is {\"Key\", \"Value\"}
" From 3a7fb75e3cea3f3e72749e3f6ab74b41ff1c8337 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 31 May 2023 18:12:02 +0000 Subject: [PATCH 007/317] AWS Service Catalog Update: Documentation updates for ServiceCatalog. --- .../feature-AWSServiceCatalog-27cb4d7.json | 6 ++++++ .../resources/codegen-resources/service-2.json | 14 +++++++------- 2 files changed, 13 insertions(+), 7 deletions(-) create mode 100644 .changes/next-release/feature-AWSServiceCatalog-27cb4d7.json diff --git a/.changes/next-release/feature-AWSServiceCatalog-27cb4d7.json b/.changes/next-release/feature-AWSServiceCatalog-27cb4d7.json new file mode 100644 index 000000000000..938b3a91970a --- /dev/null +++ b/.changes/next-release/feature-AWSServiceCatalog-27cb4d7.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS Service Catalog", + "contributor": "", + "description": "Documentation updates for ServiceCatalog." +} diff --git a/services/servicecatalog/src/main/resources/codegen-resources/service-2.json b/services/servicecatalog/src/main/resources/codegen-resources/service-2.json index 65f3ed6bc2d0..d727481d3a1c 100644 --- a/services/servicecatalog/src/main/resources/codegen-resources/service-2.json +++ b/services/servicecatalog/src/main/resources/codegen-resources/service-2.json @@ -649,7 +649,7 @@ {"shape":"InvalidParametersException"}, {"shape":"ResourceNotFoundException"} ], - "documentation":"Disassociates a previously associated principal ARN from a specified portfolio.
The PrincipalType and PrincipalARN must match the AssociatePrincipalWithPortfolio call request details. For example, to disassociate an association created with a PrincipalARN of PrincipalType IAM you must use the PrincipalType IAM when calling DisassociatePrincipalFromPortfolio.
For portfolios that have been shared with principal name sharing enabled: after disassociating a principal, share recipient accounts will no longer be able to provision products in this portfolio using a role matching the name of the associated principal.
" + "documentation":"Disassociates a previously associated principal ARN from a specified portfolio.
The PrincipalType and PrincipalARN must match the AssociatePrincipalWithPortfolio call request details. For example, to disassociate an association created with a PrincipalARN of PrincipalType IAM you must use the PrincipalType IAM when calling DisassociatePrincipalFromPortfolio.
For portfolios that have been shared with principal name sharing enabled: after disassociating a principal, share recipient accounts will no longer be able to provision products in this portfolio using a role matching the name of the associated principal.
For more information, review associate-principal-with-portfolio in the Amazon Web Services CLI Command Reference.
If you disassociate a principal from a portfolio, with PrincipalType as IAM, the same principal will still have access to the portfolio if it matches one of the associated principals of type IAM_PATTERN. To fully remove access for a principal, verify all the associated Principals of type IAM_PATTERN, and then ensure you disassociate any IAM_PATTERN principals that match the principal whose access you are removing.
The ARN of the principal (user, role, or group). This field allows an ARN with no accountID if PrincipalType is IAM_PATTERN.
You can associate multiple IAM patterns even if the account has no principal with that name. This is useful in Principal Name Sharing if you want to share a principal without creating it in the account that owns the portfolio.
The ARN of the principal (user, role, or group). The supported value is a fully defined IAM ARN if the PrincipalType is IAM. If the PrincipalType is IAM_PATTERN, the supported value is an IAM ARN without an AccountID in the following format:
arn:partition:iam:::resource-type/resource-id
The resource-id can be either of the following:
Fully formed, for example arn:aws:iam:::role/resource-name or arn:aws:iam:::role/resource-path/resource-name
A wildcard ARN. The wildcard ARN accepts IAM_PATTERN values with a \"*\" or \"?\" in the resource-id segment of the ARN, for example arn:partition:service:::resource-type/resource-path/resource-name. The new symbols are exclusive to the resource-path and resource-name and cannot be used to replace the resource-type or other ARN values.
Examples of an acceptable wildcard ARN:
arn:aws:iam:::role/ResourceName_*
arn:aws:iam:::role/*/ResourceName_?
Examples of an unacceptable wildcard ARN:
arn:aws:iam:::*/ResourceName
You can associate multiple IAM_PATTERNs even if the account has no principal with that name.
The ARN path and principal name allow unlimited wildcard characters.
The \"?\" wildcard character matches zero or one of any character. This is similar to \".?\" in regular regex context.
The \"*\" wildcard character matches any number of any characters. This is similar to \".*\" in regular regex context.
In the IAM Principal ARNs format (arn:partition:iam:::resource-type/resource-path/resource-name), valid resource-type values include user/, group/, or role/. The \"?\" and \"*\" are allowed only after the resource-type, in the resource-id segment. You can use special characters anywhere within the resource-id.
The \"*\" also matches the \"/\" character, allowing paths to be formed within the resource-id. For example, arn:aws:iam:::role/*/ResourceName_? matches both arn:aws:iam:::role/pathA/pathB/ResourceName_1 and arn:aws:iam:::role/pathA/ResourceName_1.
The principal type. The supported value is IAM if you use a fully defined ARN, or IAM_PATTERN if you use an ARN with no accountID.
The principal type. The supported value is IAM if you use a fully defined ARN, or IAM_PATTERN if you use an ARN with no accountID, with or without wildcard characters.
The ARN of the principal (user, role, or group). This field allows an ARN with no accountID if PrincipalType is IAM_PATTERN.
The ARN of the principal (user, role, or group). This field allows an ARN with no accountID with or without wildcard characters if PrincipalType is IAM_PATTERN.
The supported value is IAM if you use a fully defined ARN, or IAM_PATTERN if you use no accountID.
The supported value is IAM if you use a fully defined ARN, or IAM_PATTERN if you specify an IAM ARN with no AccountId, with or without wildcard characters.
The ARN of the principal (user, role, or group). This field allows for an ARN with no accountID if the PrincipalType is an IAM_PATTERN.
The ARN of the principal (user, role, or group). This field allows for an ARN with no accountID, with or without wildcard characters if the PrincipalType is an IAM_PATTERN.
For more information, review associate-principal-with-portfolio in the Amazon Web Services CLI Command Reference.
" }, "PrincipalType":{ "shape":"PrincipalType", - "documentation":"The principal type. The supported value is IAM if you use a fully defined ARN, or IAM_PATTERN if you use an ARN with no accountID.
The principal type. The supported value is IAM if you use a fully defined ARN, or IAM_PATTERN if you use an ARN with no accountID, with or without wildcard characters.
Information about a principal.
" From b8b7593b8a31b572e4558a7695069300305a27f9 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 31 May 2023 18:14:14 +0000 Subject: [PATCH 008/317] Release 2.20.76. Updated CHANGELOG.md, README.md and all pom.xml. --- .changes/2.20.76.json | 54 +++++++++++++++++++ .../bugfix-AWSSDKforJavav2-5d8e168.json | 6 --- .../feature-AWSConfig-00a005f.json | 6 --- ...ure-AWSMainframeModernization-fa33804.json | 6 --- .../feature-AWSServiceCatalog-27cb4d7.json | 6 --- .../feature-AmazonFraudDetector-7b918b9.json | 6 --- .../feature-AmazonHealthLake-305c08e.json | 6 --- ...azonRelationalDatabaseService-6161e54.json | 6 --- .../feature-AmazonWorkSpacesWeb-91b4638.json | 6 --- CHANGELOG.md | 33 ++++++++++++ README.md | 8 +-- archetypes/archetype-app-quickstart/pom.xml | 2 +- archetypes/archetype-lambda/pom.xml | 2 +- archetypes/archetype-tools/pom.xml | 2 +- archetypes/pom.xml | 2 +- aws-sdk-java/pom.xml | 2 +- bom-internal/pom.xml | 2 +- bom/pom.xml | 2 +- bundle/pom.xml | 2 +- codegen-lite-maven-plugin/pom.xml | 2 +- codegen-lite/pom.xml | 2 +- codegen-maven-plugin/pom.xml | 2 +- codegen/pom.xml | 2 +- core/annotations/pom.xml | 2 +- core/arns/pom.xml | 2 +- core/auth-crt/pom.xml | 2 +- core/auth/pom.xml | 2 +- core/aws-core/pom.xml | 2 +- core/crt-core/pom.xml | 2 +- core/endpoints-spi/pom.xml | 2 +- core/imds/pom.xml | 2 +- core/json-utils/pom.xml | 2 +- core/metrics-spi/pom.xml | 2 +- core/pom.xml | 2 +- core/profiles/pom.xml | 2 +- core/protocols/aws-cbor-protocol/pom.xml | 2 +- core/protocols/aws-json-protocol/pom.xml | 2 +- core/protocols/aws-query-protocol/pom.xml | 2 +- core/protocols/aws-xml-protocol/pom.xml | 2 +- core/protocols/pom.xml | 2 +- core/protocols/protocol-core/pom.xml | 2 +- core/regions/pom.xml | 2 +- core/sdk-core/pom.xml | 2 +- http-client-spi/pom.xml | 2 +- http-clients/apache-client/pom.xml | 2 +- http-clients/aws-crt-client/pom.xml | 2 +- http-clients/netty-nio-client/pom.xml | 2 +- http-clients/pom.xml | 2 +- 
http-clients/url-connection-client/pom.xml | 2 +- .../cloudwatch-metric-publisher/pom.xml | 2 +- metric-publishers/pom.xml | 2 +- pom.xml | 2 +- release-scripts/pom.xml | 2 +- services-custom/dynamodb-enhanced/pom.xml | 2 +- services-custom/pom.xml | 2 +- services-custom/s3-transfer-manager/pom.xml | 2 +- services/accessanalyzer/pom.xml | 2 +- services/account/pom.xml | 2 +- services/acm/pom.xml | 2 +- services/acmpca/pom.xml | 2 +- services/alexaforbusiness/pom.xml | 2 +- services/amp/pom.xml | 2 +- services/amplify/pom.xml | 2 +- services/amplifybackend/pom.xml | 2 +- services/amplifyuibuilder/pom.xml | 2 +- services/apigateway/pom.xml | 2 +- services/apigatewaymanagementapi/pom.xml | 2 +- services/apigatewayv2/pom.xml | 2 +- services/appconfig/pom.xml | 2 +- services/appconfigdata/pom.xml | 2 +- services/appflow/pom.xml | 2 +- services/appintegrations/pom.xml | 2 +- services/applicationautoscaling/pom.xml | 2 +- services/applicationcostprofiler/pom.xml | 2 +- services/applicationdiscovery/pom.xml | 2 +- services/applicationinsights/pom.xml | 2 +- services/appmesh/pom.xml | 2 +- services/apprunner/pom.xml | 2 +- services/appstream/pom.xml | 2 +- services/appsync/pom.xml | 2 +- services/arczonalshift/pom.xml | 2 +- services/athena/pom.xml | 2 +- services/auditmanager/pom.xml | 2 +- services/autoscaling/pom.xml | 2 +- services/autoscalingplans/pom.xml | 2 +- services/backup/pom.xml | 2 +- services/backupgateway/pom.xml | 2 +- services/backupstorage/pom.xml | 2 +- services/batch/pom.xml | 2 +- services/billingconductor/pom.xml | 2 +- services/braket/pom.xml | 2 +- services/budgets/pom.xml | 2 +- services/chime/pom.xml | 2 +- services/chimesdkidentity/pom.xml | 2 +- services/chimesdkmediapipelines/pom.xml | 2 +- services/chimesdkmeetings/pom.xml | 2 +- services/chimesdkmessaging/pom.xml | 2 +- services/chimesdkvoice/pom.xml | 2 +- services/cleanrooms/pom.xml | 2 +- services/cloud9/pom.xml | 2 +- services/cloudcontrol/pom.xml | 2 +- services/clouddirectory/pom.xml | 2 
+- services/cloudformation/pom.xml | 2 +- services/cloudfront/pom.xml | 2 +- services/cloudhsm/pom.xml | 2 +- services/cloudhsmv2/pom.xml | 2 +- services/cloudsearch/pom.xml | 2 +- services/cloudsearchdomain/pom.xml | 2 +- services/cloudtrail/pom.xml | 2 +- services/cloudtraildata/pom.xml | 2 +- services/cloudwatch/pom.xml | 2 +- services/cloudwatchevents/pom.xml | 2 +- services/cloudwatchlogs/pom.xml | 2 +- services/codeartifact/pom.xml | 2 +- services/codebuild/pom.xml | 2 +- services/codecatalyst/pom.xml | 2 +- services/codecommit/pom.xml | 2 +- services/codedeploy/pom.xml | 2 +- services/codeguruprofiler/pom.xml | 2 +- services/codegurureviewer/pom.xml | 2 +- services/codepipeline/pom.xml | 2 +- services/codestar/pom.xml | 2 +- services/codestarconnections/pom.xml | 2 +- services/codestarnotifications/pom.xml | 2 +- services/cognitoidentity/pom.xml | 2 +- services/cognitoidentityprovider/pom.xml | 2 +- services/cognitosync/pom.xml | 2 +- services/comprehend/pom.xml | 2 +- services/comprehendmedical/pom.xml | 2 +- services/computeoptimizer/pom.xml | 2 +- services/config/pom.xml | 2 +- services/connect/pom.xml | 2 +- services/connectcampaigns/pom.xml | 2 +- services/connectcases/pom.xml | 2 +- services/connectcontactlens/pom.xml | 2 +- services/connectparticipant/pom.xml | 2 +- services/controltower/pom.xml | 2 +- services/costandusagereport/pom.xml | 2 +- services/costexplorer/pom.xml | 2 +- services/customerprofiles/pom.xml | 2 +- services/databasemigration/pom.xml | 2 +- services/databrew/pom.xml | 2 +- services/dataexchange/pom.xml | 2 +- services/datapipeline/pom.xml | 2 +- services/datasync/pom.xml | 2 +- services/dax/pom.xml | 2 +- services/detective/pom.xml | 2 +- services/devicefarm/pom.xml | 2 +- services/devopsguru/pom.xml | 2 +- services/directconnect/pom.xml | 2 +- services/directory/pom.xml | 2 +- services/dlm/pom.xml | 2 +- services/docdb/pom.xml | 2 +- services/docdbelastic/pom.xml | 2 +- services/drs/pom.xml | 2 +- services/dynamodb/pom.xml | 2 
+- services/ebs/pom.xml | 2 +- services/ec2/pom.xml | 2 +- services/ec2instanceconnect/pom.xml | 2 +- services/ecr/pom.xml | 2 +- services/ecrpublic/pom.xml | 2 +- services/ecs/pom.xml | 2 +- services/efs/pom.xml | 2 +- services/eks/pom.xml | 2 +- services/elasticache/pom.xml | 2 +- services/elasticbeanstalk/pom.xml | 2 +- services/elasticinference/pom.xml | 2 +- services/elasticloadbalancing/pom.xml | 2 +- services/elasticloadbalancingv2/pom.xml | 2 +- services/elasticsearch/pom.xml | 2 +- services/elastictranscoder/pom.xml | 2 +- services/emr/pom.xml | 2 +- services/emrcontainers/pom.xml | 2 +- services/emrserverless/pom.xml | 2 +- services/eventbridge/pom.xml | 2 +- services/evidently/pom.xml | 2 +- services/finspace/pom.xml | 2 +- services/finspacedata/pom.xml | 2 +- services/firehose/pom.xml | 2 +- services/fis/pom.xml | 2 +- services/fms/pom.xml | 2 +- services/forecast/pom.xml | 2 +- services/forecastquery/pom.xml | 2 +- services/frauddetector/pom.xml | 2 +- services/fsx/pom.xml | 2 +- services/gamelift/pom.xml | 2 +- services/gamesparks/pom.xml | 2 +- services/glacier/pom.xml | 2 +- services/globalaccelerator/pom.xml | 2 +- services/glue/pom.xml | 2 +- services/grafana/pom.xml | 2 +- services/greengrass/pom.xml | 2 +- services/greengrassv2/pom.xml | 2 +- services/groundstation/pom.xml | 2 +- services/guardduty/pom.xml | 2 +- services/health/pom.xml | 2 +- services/healthlake/pom.xml | 2 +- services/honeycode/pom.xml | 2 +- services/iam/pom.xml | 2 +- services/identitystore/pom.xml | 2 +- services/imagebuilder/pom.xml | 2 +- services/inspector/pom.xml | 2 +- services/inspector2/pom.xml | 2 +- services/internetmonitor/pom.xml | 2 +- services/iot/pom.xml | 2 +- services/iot1clickdevices/pom.xml | 2 +- services/iot1clickprojects/pom.xml | 2 +- services/iotanalytics/pom.xml | 2 +- services/iotdataplane/pom.xml | 2 +- services/iotdeviceadvisor/pom.xml | 2 +- services/iotevents/pom.xml | 2 +- services/ioteventsdata/pom.xml | 2 +- services/iotfleethub/pom.xml | 2 
+- services/iotfleetwise/pom.xml | 2 +- services/iotjobsdataplane/pom.xml | 2 +- services/iotroborunner/pom.xml | 2 +- services/iotsecuretunneling/pom.xml | 2 +- services/iotsitewise/pom.xml | 2 +- services/iotthingsgraph/pom.xml | 2 +- services/iottwinmaker/pom.xml | 2 +- services/iotwireless/pom.xml | 2 +- services/ivs/pom.xml | 2 +- services/ivschat/pom.xml | 2 +- services/ivsrealtime/pom.xml | 2 +- services/kafka/pom.xml | 2 +- services/kafkaconnect/pom.xml | 2 +- services/kendra/pom.xml | 2 +- services/kendraranking/pom.xml | 2 +- services/keyspaces/pom.xml | 2 +- services/kinesis/pom.xml | 2 +- services/kinesisanalytics/pom.xml | 2 +- services/kinesisanalyticsv2/pom.xml | 2 +- services/kinesisvideo/pom.xml | 2 +- services/kinesisvideoarchivedmedia/pom.xml | 2 +- services/kinesisvideomedia/pom.xml | 2 +- services/kinesisvideosignaling/pom.xml | 2 +- services/kinesisvideowebrtcstorage/pom.xml | 2 +- services/kms/pom.xml | 2 +- services/lakeformation/pom.xml | 2 +- services/lambda/pom.xml | 2 +- services/lexmodelbuilding/pom.xml | 2 +- services/lexmodelsv2/pom.xml | 2 +- services/lexruntime/pom.xml | 2 +- services/lexruntimev2/pom.xml | 2 +- services/licensemanager/pom.xml | 2 +- .../licensemanagerlinuxsubscriptions/pom.xml | 2 +- .../licensemanagerusersubscriptions/pom.xml | 2 +- services/lightsail/pom.xml | 2 +- services/location/pom.xml | 2 +- services/lookoutequipment/pom.xml | 2 +- services/lookoutmetrics/pom.xml | 2 +- services/lookoutvision/pom.xml | 2 +- services/m2/pom.xml | 2 +- services/machinelearning/pom.xml | 2 +- services/macie/pom.xml | 2 +- services/macie2/pom.xml | 2 +- services/managedblockchain/pom.xml | 2 +- services/marketplacecatalog/pom.xml | 2 +- services/marketplacecommerceanalytics/pom.xml | 2 +- services/marketplaceentitlement/pom.xml | 2 +- services/marketplacemetering/pom.xml | 2 +- services/mediaconnect/pom.xml | 2 +- services/mediaconvert/pom.xml | 2 +- services/medialive/pom.xml | 2 +- services/mediapackage/pom.xml | 2 +- 
services/mediapackagev2/pom.xml | 2 +- services/mediapackagevod/pom.xml | 2 +- services/mediastore/pom.xml | 2 +- services/mediastoredata/pom.xml | 2 +- services/mediatailor/pom.xml | 2 +- services/memorydb/pom.xml | 2 +- services/mgn/pom.xml | 2 +- services/migrationhub/pom.xml | 2 +- services/migrationhubconfig/pom.xml | 2 +- services/migrationhuborchestrator/pom.xml | 2 +- services/migrationhubrefactorspaces/pom.xml | 2 +- services/migrationhubstrategy/pom.xml | 2 +- services/mobile/pom.xml | 2 +- services/mq/pom.xml | 2 +- services/mturk/pom.xml | 2 +- services/mwaa/pom.xml | 2 +- services/neptune/pom.xml | 2 +- services/networkfirewall/pom.xml | 2 +- services/networkmanager/pom.xml | 2 +- services/nimble/pom.xml | 2 +- services/oam/pom.xml | 2 +- services/omics/pom.xml | 2 +- services/opensearch/pom.xml | 2 +- services/opensearchserverless/pom.xml | 2 +- services/opsworks/pom.xml | 2 +- services/opsworkscm/pom.xml | 2 +- services/organizations/pom.xml | 2 +- services/osis/pom.xml | 2 +- services/outposts/pom.xml | 2 +- services/panorama/pom.xml | 2 +- services/personalize/pom.xml | 2 +- services/personalizeevents/pom.xml | 2 +- services/personalizeruntime/pom.xml | 2 +- services/pi/pom.xml | 2 +- services/pinpoint/pom.xml | 2 +- services/pinpointemail/pom.xml | 2 +- services/pinpointsmsvoice/pom.xml | 2 +- services/pinpointsmsvoicev2/pom.xml | 2 +- services/pipes/pom.xml | 2 +- services/polly/pom.xml | 2 +- services/pom.xml | 2 +- services/pricing/pom.xml | 2 +- services/privatenetworks/pom.xml | 2 +- services/proton/pom.xml | 2 +- services/qldb/pom.xml | 2 +- services/qldbsession/pom.xml | 2 +- services/quicksight/pom.xml | 2 +- services/ram/pom.xml | 2 +- services/rbin/pom.xml | 2 +- services/rds/pom.xml | 2 +- services/rdsdata/pom.xml | 2 +- services/redshift/pom.xml | 2 +- services/redshiftdata/pom.xml | 2 +- services/redshiftserverless/pom.xml | 2 +- services/rekognition/pom.xml | 2 +- services/resiliencehub/pom.xml | 2 +- 
services/resourceexplorer2/pom.xml | 2 +- services/resourcegroups/pom.xml | 2 +- services/resourcegroupstaggingapi/pom.xml | 2 +- services/robomaker/pom.xml | 2 +- services/rolesanywhere/pom.xml | 2 +- services/route53/pom.xml | 2 +- services/route53domains/pom.xml | 2 +- services/route53recoverycluster/pom.xml | 2 +- services/route53recoverycontrolconfig/pom.xml | 2 +- services/route53recoveryreadiness/pom.xml | 2 +- services/route53resolver/pom.xml | 2 +- services/rum/pom.xml | 2 +- services/s3/pom.xml | 2 +- services/s3control/pom.xml | 2 +- services/s3outposts/pom.xml | 2 +- services/sagemaker/pom.xml | 2 +- services/sagemakera2iruntime/pom.xml | 2 +- services/sagemakeredge/pom.xml | 2 +- services/sagemakerfeaturestoreruntime/pom.xml | 2 +- services/sagemakergeospatial/pom.xml | 2 +- services/sagemakermetrics/pom.xml | 2 +- services/sagemakerruntime/pom.xml | 2 +- services/savingsplans/pom.xml | 2 +- services/scheduler/pom.xml | 2 +- services/schemas/pom.xml | 2 +- services/secretsmanager/pom.xml | 2 +- services/securityhub/pom.xml | 2 +- services/securitylake/pom.xml | 2 +- .../serverlessapplicationrepository/pom.xml | 2 +- services/servicecatalog/pom.xml | 2 +- services/servicecatalogappregistry/pom.xml | 2 +- services/servicediscovery/pom.xml | 2 +- services/servicequotas/pom.xml | 2 +- services/ses/pom.xml | 2 +- services/sesv2/pom.xml | 2 +- services/sfn/pom.xml | 2 +- services/shield/pom.xml | 2 +- services/signer/pom.xml | 2 +- services/simspaceweaver/pom.xml | 2 +- services/sms/pom.xml | 2 +- services/snowball/pom.xml | 2 +- services/snowdevicemanagement/pom.xml | 2 +- services/sns/pom.xml | 2 +- services/sqs/pom.xml | 2 +- services/ssm/pom.xml | 2 +- services/ssmcontacts/pom.xml | 2 +- services/ssmincidents/pom.xml | 2 +- services/ssmsap/pom.xml | 2 +- services/sso/pom.xml | 2 +- services/ssoadmin/pom.xml | 2 +- services/ssooidc/pom.xml | 2 +- services/storagegateway/pom.xml | 2 +- services/sts/pom.xml | 2 +- services/support/pom.xml | 2 +- 
services/supportapp/pom.xml | 2 +- services/swf/pom.xml | 2 +- services/synthetics/pom.xml | 2 +- services/textract/pom.xml | 2 +- services/timestreamquery/pom.xml | 2 +- services/timestreamwrite/pom.xml | 2 +- services/tnb/pom.xml | 2 +- services/transcribe/pom.xml | 2 +- services/transcribestreaming/pom.xml | 2 +- services/transfer/pom.xml | 2 +- services/translate/pom.xml | 2 +- services/voiceid/pom.xml | 2 +- services/vpclattice/pom.xml | 2 +- services/waf/pom.xml | 2 +- services/wafv2/pom.xml | 2 +- services/wellarchitected/pom.xml | 2 +- services/wisdom/pom.xml | 2 +- services/workdocs/pom.xml | 2 +- services/worklink/pom.xml | 2 +- services/workmail/pom.xml | 2 +- services/workmailmessageflow/pom.xml | 2 +- services/workspaces/pom.xml | 2 +- services/workspacesweb/pom.xml | 2 +- services/xray/pom.xml | 2 +- test/auth-tests/pom.xml | 2 +- test/codegen-generated-classes-test/pom.xml | 2 +- test/http-client-tests/pom.xml | 2 +- test/module-path-tests/pom.xml | 2 +- test/protocol-tests-core/pom.xml | 2 +- test/protocol-tests/pom.xml | 2 +- test/region-testing/pom.xml | 2 +- test/ruleset-testing-core/pom.xml | 2 +- test/s3-benchmarks/pom.xml | 2 +- test/sdk-benchmarks/pom.xml | 2 +- test/sdk-native-image-test/pom.xml | 2 +- test/service-test-utils/pom.xml | 2 +- test/stability-tests/pom.xml | 2 +- test/test-utils/pom.xml | 2 +- test/tests-coverage-reporting/pom.xml | 2 +- third-party/pom.xml | 2 +- third-party/third-party-jackson-core/pom.xml | 2 +- .../pom.xml | 2 +- utils/pom.xml | 2 +- 418 files changed, 498 insertions(+), 459 deletions(-) create mode 100644 .changes/2.20.76.json delete mode 100644 .changes/next-release/bugfix-AWSSDKforJavav2-5d8e168.json delete mode 100644 .changes/next-release/feature-AWSConfig-00a005f.json delete mode 100644 .changes/next-release/feature-AWSMainframeModernization-fa33804.json delete mode 100644 .changes/next-release/feature-AWSServiceCatalog-27cb4d7.json delete mode 100644 
.changes/next-release/feature-AmazonFraudDetector-7b918b9.json delete mode 100644 .changes/next-release/feature-AmazonHealthLake-305c08e.json delete mode 100644 .changes/next-release/feature-AmazonRelationalDatabaseService-6161e54.json delete mode 100644 .changes/next-release/feature-AmazonWorkSpacesWeb-91b4638.json diff --git a/.changes/2.20.76.json b/.changes/2.20.76.json new file mode 100644 index 000000000000..1de8fa20aecf --- /dev/null +++ b/.changes/2.20.76.json @@ -0,0 +1,54 @@ +{ + "version": "2.20.76", + "date": "2023-05-31", + "entries": [ + { + "type": "bugfix", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Fix an issue where the optimal number of parts calculated could be higher than 10,000" + }, + { + "type": "feature", + "category": "AWS Config", + "contributor": "", + "description": "Resource Types Exclusion feature launch by AWS Config" + }, + { + "type": "feature", + "category": "AWSMainframeModernization", + "contributor": "", + "description": "Adds an optional create-only 'roleArn' property to Application resources. Enables PS and PO data set org types." + }, + { + "type": "feature", + "category": "AWS Service Catalog", + "contributor": "", + "description": "Documentation updates for ServiceCatalog." + }, + { + "type": "feature", + "category": "Amazon Fraud Detector", + "contributor": "", + "description": "This release enables publishing event predictions from Amazon Fraud Detector (AFD) to Amazon EventBridge. For example, after getting predictions from AFD, Amazon EventBridge rules can be configured to trigger notification through an SNS topic, send a message with SES, or trigger Lambda workflows." + }, + { + "type": "feature", + "category": "Amazon HealthLake", + "contributor": "", + "description": "This release adds a new request parameter to the CreateFHIRDatastore API operation. IdentityProviderConfiguration specifies how you want to authenticate incoming requests to your Healthlake Data Store." 
+ }, + { + "type": "feature", + "category": "Amazon Relational Database Service", + "contributor": "", + "description": "This release adds support for changing the engine for Oracle using the ModifyDbInstance API" + }, + { + "type": "feature", + "category": "Amazon WorkSpaces Web", + "contributor": "", + "description": "WorkSpaces Web now allows you to control which IP addresses your WorkSpaces Web portal may be accessed from." + } + ] +} \ No newline at end of file diff --git a/.changes/next-release/bugfix-AWSSDKforJavav2-5d8e168.json b/.changes/next-release/bugfix-AWSSDKforJavav2-5d8e168.json deleted file mode 100644 index 275dca03c559..000000000000 --- a/.changes/next-release/bugfix-AWSSDKforJavav2-5d8e168.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "bugfix", - "category": "AWS SDK for Java v2", - "contributor": "", - "description": "Fix an issue where the optimal number of parts calculated could be higher than 10,000" -} diff --git a/.changes/next-release/feature-AWSConfig-00a005f.json b/.changes/next-release/feature-AWSConfig-00a005f.json deleted file mode 100644 index 6de1773aabea..000000000000 --- a/.changes/next-release/feature-AWSConfig-00a005f.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Config", - "contributor": "", - "description": "Resource Types Exclusion feature launch by AWS Config" -} diff --git a/.changes/next-release/feature-AWSMainframeModernization-fa33804.json b/.changes/next-release/feature-AWSMainframeModernization-fa33804.json deleted file mode 100644 index 7ffc652528cb..000000000000 --- a/.changes/next-release/feature-AWSMainframeModernization-fa33804.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWSMainframeModernization", - "contributor": "", - "description": "Adds an optional create-only 'roleArn' property to Application resources. Enables PS and PO data set org types." 
-} diff --git a/.changes/next-release/feature-AWSServiceCatalog-27cb4d7.json b/.changes/next-release/feature-AWSServiceCatalog-27cb4d7.json deleted file mode 100644 index 938b3a91970a..000000000000 --- a/.changes/next-release/feature-AWSServiceCatalog-27cb4d7.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Service Catalog", - "contributor": "", - "description": "Documentation updates for ServiceCatalog." -} diff --git a/.changes/next-release/feature-AmazonFraudDetector-7b918b9.json b/.changes/next-release/feature-AmazonFraudDetector-7b918b9.json deleted file mode 100644 index da4375bbcbca..000000000000 --- a/.changes/next-release/feature-AmazonFraudDetector-7b918b9.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Fraud Detector", - "contributor": "", - "description": "This release enables publishing event predictions from Amazon Fraud Detector (AFD) to Amazon EventBridge. For example, after getting predictions from AFD, Amazon EventBridge rules can be configured to trigger notification through an SNS topic, send a message with SES, or trigger Lambda workflows." -} diff --git a/.changes/next-release/feature-AmazonHealthLake-305c08e.json b/.changes/next-release/feature-AmazonHealthLake-305c08e.json deleted file mode 100644 index 6430f2d3ce50..000000000000 --- a/.changes/next-release/feature-AmazonHealthLake-305c08e.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon HealthLake", - "contributor": "", - "description": "This release adds a new request parameter to the CreateFHIRDatastore API operation. IdentityProviderConfiguration specifies how you want to authenticate incoming requests to your Healthlake Data Store." 
-} diff --git a/.changes/next-release/feature-AmazonRelationalDatabaseService-6161e54.json b/.changes/next-release/feature-AmazonRelationalDatabaseService-6161e54.json deleted file mode 100644 index e49a2599c1d6..000000000000 --- a/.changes/next-release/feature-AmazonRelationalDatabaseService-6161e54.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Relational Database Service", - "contributor": "", - "description": "This release adds support for changing the engine for Oracle using the ModifyDbInstance API" -} diff --git a/.changes/next-release/feature-AmazonWorkSpacesWeb-91b4638.json b/.changes/next-release/feature-AmazonWorkSpacesWeb-91b4638.json deleted file mode 100644 index 48c014a50e04..000000000000 --- a/.changes/next-release/feature-AmazonWorkSpacesWeb-91b4638.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon WorkSpaces Web", - "contributor": "", - "description": "WorkSpaces Web now allows you to control which IP addresses your WorkSpaces Web portal may be accessed from." -} diff --git a/CHANGELOG.md b/CHANGELOG.md index 13a5ebc75184..f8036466c5c1 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,36 @@ +# __2.20.76__ __2023-05-31__ +## __AWS Config__ + - ### Features + - Resource Types Exclusion feature launch by AWS Config + +## __AWS SDK for Java v2__ + - ### Bugfixes + - Fix an issue where the optimal number of parts calculated could be higher than 10,000 + +## __AWS Service Catalog__ + - ### Features + - Documentation updates for ServiceCatalog. + +## __AWSMainframeModernization__ + - ### Features + - Adds an optional create-only 'roleArn' property to Application resources. Enables PS and PO data set org types. + +## __Amazon Fraud Detector__ + - ### Features + - This release enables publishing event predictions from Amazon Fraud Detector (AFD) to Amazon EventBridge. 
For example, after getting predictions from AFD, Amazon EventBridge rules can be configured to trigger notification through an SNS topic, send a message with SES, or trigger Lambda workflows. + +## __Amazon HealthLake__ + - ### Features + - This release adds a new request parameter to the CreateFHIRDatastore API operation. IdentityProviderConfiguration specifies how you want to authenticate incoming requests to your Healthlake Data Store. + +## __Amazon Relational Database Service__ + - ### Features + - This release adds support for changing the engine for Oracle using the ModifyDbInstance API + +## __Amazon WorkSpaces Web__ + - ### Features + - WorkSpaces Web now allows you to control which IP addresses your WorkSpaces Web portal may be accessed from. + # __2.20.75__ __2023-05-30__ ## __AWS Glue__ - ### Features diff --git a/README.md b/README.md index 8cc7530470e4..34ebf869c1d3 100644 --- a/README.md +++ b/README.md @@ -52,7 +52,7 @@ To automatically manage module versions (currently all modules have the same verInspect a string containing the list of the request's header names, ordered as they appear in the web request that WAF receives for inspection. WAF generates the string and then uses that as the field to match component in its inspection. WAF separates the header names in the string using commas and no added spaces.
Matches against the header order string are case insensitive.
" + "documentation":"Inspect a string containing the list of the request's header names, ordered as they appear in the web request that WAF receives for inspection. WAF generates the string and then uses that as the field to match component in its inspection. WAF separates the header names in the string using colons and no added spaces, for example Host:User-Agent:Accept:Authorization:Referer.
Matches against the header order string are case insensitive.
" } }, "documentation":"The part of the web request that you want WAF to inspect. Include the single FieldToMatch type that you want to inspect, with additional specifications as needed, according to the type. You specify a single request component in FieldToMatch for each rule statement that requires it. To inspect more than one component of the web request, create a separate rule statement for each component.
Example JSON for a QueryString field to match:
\"FieldToMatch\": { \"QueryString\": {} }
Example JSON for a Method field to match specification:
\"FieldToMatch\": { \"Method\": { \"Name\": \"DELETE\" } }
What WAF should do if the headers of the request are more numerous or larger than WAF can inspect. WAF does not support inspecting the entire contents of request headers when they exceed 8 KB (8192 bytes) or 200 total headers. The underlying host service forwards a maximum of 200 headers and at most 8 KB of header contents to WAF.
The options for oversize handling are the following:
CONTINUE - Inspect the available headers normally, according to the rule inspection criteria.
MATCH - Treat the web request as matching the rule statement. WAF applies the rule action to the request.
NO_MATCH - Treat the web request as not matching the rule statement.
Inspect a string containing the list of the request's header names, ordered as they appear in the web request that WAF receives for inspection. WAF generates the string and then uses that as the field to match component in its inspection. WAF separates the header names in the string using commas and no added spaces.
Matches against the header order string are case insensitive.
" + "documentation":"Inspect a string containing the list of the request's header names, ordered as they appear in the web request that WAF receives for inspection. WAF generates the string and then uses that as the field to match component in its inspection. WAF separates the header names in the string using colons and no added spaces, for example Host:User-Agent:Accept:Authorization:Referer.
Matches against the header order string are case insensitive.
" }, "HeaderValue":{"type":"string"}, "Headers":{ From 7a704dc9d0747bffcafb86a359ffaad7e58530e2 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 1 Jun 2023 18:09:23 +0000 Subject: [PATCH 011/317] Amazon Appflow Update: Added ability to select DataTransferApiType for DescribeConnector and CreateFlow requests when using Async supported connectors. Added supportedDataTransferType to DescribeConnector/DescribeConnectors/ListConnector response. --- .../feature-AmazonAppflow-be53087.json | 6 ++ .../codegen-resources/service-2.json | 58 +++++++++++++++++++ 2 files changed, 64 insertions(+) create mode 100644 .changes/next-release/feature-AmazonAppflow-be53087.json diff --git a/.changes/next-release/feature-AmazonAppflow-be53087.json b/.changes/next-release/feature-AmazonAppflow-be53087.json new file mode 100644 index 000000000000..065626b08b41 --- /dev/null +++ b/.changes/next-release/feature-AmazonAppflow-be53087.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Appflow", + "contributor": "", + "description": "Added ability to select DataTransferApiType for DescribeConnector and CreateFlow requests when using Async supported connectors. Added supportedDataTransferType to DescribeConnector/DescribeConnectors/ListConnector response." +} diff --git a/services/appflow/src/main/resources/codegen-resources/service-2.json b/services/appflow/src/main/resources/codegen-resources/service-2.json index 6cfbd10169ab..e48d27ecd73c 100644 --- a/services/appflow/src/main/resources/codegen-resources/service-2.json +++ b/services/appflow/src/main/resources/codegen-resources/service-2.json @@ -870,6 +870,14 @@ "registeredBy":{ "shape":"RegisteredBy", "documentation":"Information about who registered the connector.
" + }, + "supportedDataTransferTypes":{ + "shape":"SupportedDataTransferTypeList", + "documentation":"The data transfer types that the connector supports.
Structured records.
Files or binary data.
The APIs of the connector application that Amazon AppFlow can use to transfer your data.
" } }, "documentation":"The configuration settings related to a given connector.
" @@ -930,6 +938,10 @@ "connectorModes":{ "shape":"ConnectorModeList", "documentation":"The connection mode that the connector supports.
" + }, + "supportedDataTransferTypes":{ + "shape":"SupportedDataTransferTypeList", + "documentation":"The data transfer types that the connector supports.
Structured records.
Files or binary data.
Information about the registered connector.
" @@ -1839,6 +1851,10 @@ "customProperties":{ "shape":"CustomProperties", "documentation":"Custom properties that are required to use the custom connector as a source.
" + }, + "dataTransferApi":{ + "shape":"DataTransferApi", + "documentation":"The API of the connector application that Amazon AppFlow uses to transfer your data.
" } }, "documentation":"The properties that are applied when the custom connector is being used as a source.
" @@ -1894,6 +1910,33 @@ "Complete" ] }, + "DataTransferApi":{ + "type":"structure", + "members":{ + "Name":{ + "shape":"DataTransferApiTypeName", + "documentation":"The name of the connector application API.
" + }, + "Type":{ + "shape":"DataTransferApiType", + "documentation":"You can specify one of the following types:
The default. Optimizes a flow for datasets that fluctuate in size from small to large. For each flow run, Amazon AppFlow chooses to use the SYNC or ASYNC API type based on the amount of data that the run transfers.
A synchronous API. This type of API optimizes a flow for small to medium-sized datasets.
An asynchronous API. This type of API optimizes a flow for large datasets.
The API of the connector application that Amazon AppFlow uses to transfer your data.
" + }, + "DataTransferApiType":{ + "type":"string", + "enum":[ + "SYNC", + "ASYNC", + "AUTOMATIC" + ] + }, + "DataTransferApiTypeName":{ + "type":"string", + "max":64, + "pattern":"[\\w/-]+" + }, "DatabaseName":{ "type":"string", "max":512, @@ -4995,6 +5038,21 @@ "type":"list", "member":{"shape":"SupportedApiVersion"} }, + "SupportedDataTransferApis":{ + "type":"list", + "member":{"shape":"DataTransferApi"} + }, + "SupportedDataTransferType":{ + "type":"string", + "enum":[ + "RECORD", + "FILE" + ] + }, + "SupportedDataTransferTypeList":{ + "type":"list", + "member":{"shape":"SupportedDataTransferType"} + }, "SupportedFieldTypeDetails":{ "type":"structure", "required":["v1"], From 3e19dc89c9d8190529d92ee39b8d004bcbe0a7f0 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 1 Jun 2023 18:09:26 +0000 Subject: [PATCH 012/317] Amazon SageMaker Service Update: Amazon Sagemaker Autopilot adds support for Parquet file input to NLP text classification jobs. --- .../feature-AmazonSageMakerService-046c91e.json | 6 ++++++ .../src/main/resources/codegen-resources/service-2.json | 8 ++++---- 2 files changed, 10 insertions(+), 4 deletions(-) create mode 100644 .changes/next-release/feature-AmazonSageMakerService-046c91e.json diff --git a/.changes/next-release/feature-AmazonSageMakerService-046c91e.json b/.changes/next-release/feature-AmazonSageMakerService-046c91e.json new file mode 100644 index 000000000000..77975a930be3 --- /dev/null +++ b/.changes/next-release/feature-AmazonSageMakerService-046c91e.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon SageMaker Service", + "contributor": "", + "description": "Amazon Sagemaker Autopilot adds support for Parquet file input to NLP text classification jobs." 
+} diff --git a/services/sagemaker/src/main/resources/codegen-resources/service-2.json b/services/sagemaker/src/main/resources/codegen-resources/service-2.json index bed442e48b4b..bfb0aa2bccf3 100644 --- a/services/sagemaker/src/main/resources/codegen-resources/service-2.json +++ b/services/sagemaker/src/main/resources/codegen-resources/service-2.json @@ -4974,7 +4974,7 @@ }, "ContentType":{ "shape":"ContentType", - "documentation":"The content type of the data from the input source. The following are the allowed content types for different problems:
ImageClassification: image/png, image/jpeg, image/*
TextClassification: text/csv;header=present
The content type of the data from the input source. The following are the allowed content types for different problems:
ImageClassification: image/png, image/jpeg, or image/*. The default value is image/*.
TextClassification: text/csv;header=present or x-application/vnd.amazon+parquet. The default value is text/csv;header=present.
Status of the deployment recommendation. NOT_APPLICABLE means that SageMaker is unable to provide a default recommendation for the model using the information provided.
Status of the deployment recommendation. The status NOT_APPLICABLE means that SageMaker is unable to provide a default recommendation for the model using the information provided. If the deployment status is IN_PROGRESS, retry your API call after a few seconds to get a COMPLETED deployment recommendation.
A list of RealTimeInferenceRecommendation items.
" } }, - "documentation":"A set of recommended deployment configurations for the model.
" + "documentation":"A set of recommended deployment configurations for the model. To get more advanced recommendations, see CreateInferenceRecommendationsJob to create an inference recommendation job.
" }, "DeploymentStage":{ "type":"structure", @@ -28534,7 +28534,7 @@ "documentation":"The level of permissions that the user has within the RStudioServerPro app. This value defaults to `User`. The `Admin` value allows the user access to the RStudio Administrative Dashboard.
A collection of settings that configure user interaction with the RStudioServerPro app. RStudioServerProAppSettings cannot be updated. The RStudioServerPro app must be deleted and a new one created to make any changes.
A collection of settings that configure user interaction with the RStudioServerPro app.
Associates a skill with the organization under the customer's AWS account. If a skill is private, the user implicitly accepts access to this skill during enablement.
" + "documentation":"Associates a skill with the organization under the customer's AWS account. If a skill is private, the user implicitly accepts access to this skill during enablement.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "AssociateContactWithAddressBook":{ "name":"AssociateContactWithAddressBook", @@ -38,7 +40,9 @@ "errors":[ {"shape":"LimitExceededException"} ], - "documentation":"Associates a contact with a given address book.
" + "documentation":"Associates a contact with a given address book.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "AssociateDeviceWithNetworkProfile":{ "name":"AssociateDeviceWithNetworkProfile", @@ -53,7 +57,9 @@ {"shape":"ConcurrentModificationException"}, {"shape":"DeviceNotRegisteredException"} ], - "documentation":"Associates a device with the specified network profile.
" + "documentation":"Associates a device with the specified network profile.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "AssociateDeviceWithRoom":{ "name":"AssociateDeviceWithRoom", @@ -68,7 +74,9 @@ {"shape":"ConcurrentModificationException"}, {"shape":"DeviceNotRegisteredException"} ], - "documentation":"Associates a device with a given room. This applies all the settings from the room profile to the device, and all the skills in any skill groups added to that room. This operation requires the device to be online, or else a manual sync is required.
" + "documentation":"Associates a device with a given room. This applies all the settings from the room profile to the device, and all the skills in any skill groups added to that room. This operation requires the device to be online, or else a manual sync is required.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "AssociateSkillGroupWithRoom":{ "name":"AssociateSkillGroupWithRoom", @@ -81,7 +89,9 @@ "errors":[ {"shape":"ConcurrentModificationException"} ], - "documentation":"Associates a skill group with a given room. This enables all skills in the associated skill group on all devices in the room.
" + "documentation":"Associates a skill group with a given room. This enables all skills in the associated skill group on all devices in the room.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "AssociateSkillWithSkillGroup":{ "name":"AssociateSkillWithSkillGroup", @@ -96,7 +106,9 @@ {"shape":"NotFoundException"}, {"shape":"SkillNotLinkedException"} ], - "documentation":"Associates a skill with a skill group.
" + "documentation":"Associates a skill with a skill group.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "AssociateSkillWithUsers":{ "name":"AssociateSkillWithUsers", @@ -110,7 +122,9 @@ {"shape":"ConcurrentModificationException"}, {"shape":"NotFoundException"} ], - "documentation":"Makes a private skill available for enrolled users to enable on their devices.
" + "documentation":"Makes a private skill available for enrolled users to enable on their devices.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "CreateAddressBook":{ "name":"CreateAddressBook", @@ -124,7 +138,9 @@ {"shape":"AlreadyExistsException"}, {"shape":"LimitExceededException"} ], - "documentation":"Creates an address book with the specified details.
" + "documentation":"Creates an address book with the specified details.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "CreateBusinessReportSchedule":{ "name":"CreateBusinessReportSchedule", @@ -137,7 +153,9 @@ "errors":[ {"shape":"AlreadyExistsException"} ], - "documentation":"Creates a recurring schedule for usage reports to deliver to the specified S3 location with a specified daily or weekly interval.
" + "documentation":"Creates a recurring schedule for usage reports to deliver to the specified S3 location with a specified daily or weekly interval.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "CreateConferenceProvider":{ "name":"CreateConferenceProvider", @@ -150,7 +168,9 @@ "errors":[ {"shape":"AlreadyExistsException"} ], - "documentation":"Adds a new conference provider under the user's AWS account.
" + "documentation":"Adds a new conference provider under the user's AWS account.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "CreateContact":{ "name":"CreateContact", @@ -164,7 +184,9 @@ {"shape":"AlreadyExistsException"}, {"shape":"LimitExceededException"} ], - "documentation":"Creates a contact with the specified details.
" + "documentation":"Creates a contact with the specified details.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "CreateGatewayGroup":{ "name":"CreateGatewayGroup", @@ -178,7 +200,9 @@ {"shape":"AlreadyExistsException"}, {"shape":"LimitExceededException"} ], - "documentation":"Creates a gateway group with the specified details.
" + "documentation":"Creates a gateway group with the specified details.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "CreateNetworkProfile":{ "name":"CreateNetworkProfile", @@ -195,7 +219,9 @@ {"shape":"InvalidCertificateAuthorityException"}, {"shape":"InvalidServiceLinkedRoleStateException"} ], - "documentation":"Creates a network profile with the specified details.
" + "documentation":"Creates a network profile with the specified details.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "CreateProfile":{ "name":"CreateProfile", @@ -210,7 +236,9 @@ {"shape":"AlreadyExistsException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Creates a new room profile with the specified details.
" + "documentation":"Creates a new room profile with the specified details.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "CreateRoom":{ "name":"CreateRoom", @@ -224,7 +252,9 @@ {"shape":"AlreadyExistsException"}, {"shape":"LimitExceededException"} ], - "documentation":"Creates a room with the specified details.
" + "documentation":"Creates a room with the specified details.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "CreateSkillGroup":{ "name":"CreateSkillGroup", @@ -239,7 +269,9 @@ {"shape":"LimitExceededException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Creates a skill group with a specified name and description.
" + "documentation":"Creates a skill group with a specified name and description.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "CreateUser":{ "name":"CreateUser", @@ -254,7 +286,9 @@ {"shape":"LimitExceededException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Creates a user.
" + "documentation":"Creates a user.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteAddressBook":{ "name":"DeleteAddressBook", @@ -268,7 +302,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Deletes an address book by the address book ARN.
" + "documentation":"Deletes an address book by the address book ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteBusinessReportSchedule":{ "name":"DeleteBusinessReportSchedule", @@ -282,7 +318,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Deletes the recurring report delivery schedule with the specified schedule ARN.
" + "documentation":"Deletes the recurring report delivery schedule with the specified schedule ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteConferenceProvider":{ "name":"DeleteConferenceProvider", @@ -295,7 +333,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Deletes a conference provider.
" + "documentation":"Deletes a conference provider.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteContact":{ "name":"DeleteContact", @@ -309,7 +349,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Deletes a contact by the contact ARN.
" + "documentation":"Deletes a contact by the contact ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteDevice":{ "name":"DeleteDevice", @@ -324,7 +366,9 @@ {"shape":"ConcurrentModificationException"}, {"shape":"InvalidCertificateAuthorityException"} ], - "documentation":"Removes a device from Alexa For Business.
" + "documentation":"Removes a device from Alexa For Business.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteDeviceUsageData":{ "name":"DeleteDeviceUsageData", @@ -339,7 +383,9 @@ {"shape":"DeviceNotRegisteredException"}, {"shape":"LimitExceededException"} ], - "documentation":"When this action is called for a specified shared device, it allows authorized users to delete the device's entire previous history of voice input data and associated response data. This action can be called once every 24 hours for a specific shared device.
" + "documentation":"When this action is called for a specified shared device, it allows authorized users to delete the device's entire previous history of voice input data and associated response data. This action can be called once every 24 hours for a specific shared device.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteGatewayGroup":{ "name":"DeleteGatewayGroup", @@ -352,7 +398,9 @@ "errors":[ {"shape":"ResourceAssociatedException"} ], - "documentation":"Deletes a gateway group.
" + "documentation":"Deletes a gateway group.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteNetworkProfile":{ "name":"DeleteNetworkProfile", @@ -367,7 +415,9 @@ {"shape":"ConcurrentModificationException"}, {"shape":"NotFoundException"} ], - "documentation":"Deletes a network profile by the network profile ARN.
" + "documentation":"Deletes a network profile by the network profile ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteProfile":{ "name":"DeleteProfile", @@ -381,7 +431,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Deletes a room profile by the profile ARN.
" + "documentation":"Deletes a room profile by the profile ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteRoom":{ "name":"DeleteRoom", @@ -395,7 +447,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Deletes a room by the room ARN.
" + "documentation":"Deletes a room by the room ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteRoomSkillParameter":{ "name":"DeleteRoomSkillParameter", @@ -408,7 +462,9 @@ "errors":[ {"shape":"ConcurrentModificationException"} ], - "documentation":"Deletes room skill parameter details by room, skill, and parameter key ID.
" + "documentation":"Deletes room skill parameter details by room, skill, and parameter key ID.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteSkillAuthorization":{ "name":"DeleteSkillAuthorization", @@ -422,7 +478,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Unlinks a third-party account from a skill.
" + "documentation":"Unlinks a third-party account from a skill.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteSkillGroup":{ "name":"DeleteSkillGroup", @@ -436,7 +494,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Deletes a skill group by skill group ARN.
" + "documentation":"Deletes a skill group by skill group ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DeleteUser":{ "name":"DeleteUser", @@ -450,7 +510,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Deletes a specified user by user ARN and enrollment ARN.
" + "documentation":"Deletes a specified user by user ARN and enrollment ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DisassociateContactFromAddressBook":{ "name":"DisassociateContactFromAddressBook", @@ -460,7 +522,9 @@ }, "input":{"shape":"DisassociateContactFromAddressBookRequest"}, "output":{"shape":"DisassociateContactFromAddressBookResponse"}, - "documentation":"Disassociates a contact from a given address book.
" + "documentation":"Disassociates a contact from a given address book.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DisassociateDeviceFromRoom":{ "name":"DisassociateDeviceFromRoom", @@ -474,7 +538,9 @@ {"shape":"ConcurrentModificationException"}, {"shape":"DeviceNotRegisteredException"} ], - "documentation":"Disassociates a device from its current room. The device continues to be connected to the Wi-Fi network and is still registered to the account. The device settings and skills are removed from the room.
" + "documentation":"Disassociates a device from its current room. The device continues to be connected to the Wi-Fi network and is still registered to the account. The device settings and skills are removed from the room.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DisassociateSkillFromSkillGroup":{ "name":"DisassociateSkillFromSkillGroup", @@ -488,7 +554,9 @@ {"shape":"ConcurrentModificationException"}, {"shape":"NotFoundException"} ], - "documentation":"Disassociates a skill from a skill group.
" + "documentation":"Disassociates a skill from a skill group.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DisassociateSkillFromUsers":{ "name":"DisassociateSkillFromUsers", @@ -502,7 +570,9 @@ {"shape":"ConcurrentModificationException"}, {"shape":"NotFoundException"} ], - "documentation":"Makes a private skill unavailable for enrolled users and prevents them from enabling it on their devices.
" + "documentation":"Makes a private skill unavailable for enrolled users and prevents them from enabling it on their devices.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "DisassociateSkillGroupFromRoom":{ "name":"DisassociateSkillGroupFromRoom", @@ -515,7 +585,9 @@ "errors":[ {"shape":"ConcurrentModificationException"} ], - "documentation":"Disassociates a skill group from a specified room. This disables all skills in the skill group on all devices in the room.
" + "documentation":"Disassociates a skill group from a specified room. This disables all skills in the skill group on all devices in the room.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ForgetSmartHomeAppliances":{ "name":"ForgetSmartHomeAppliances", @@ -528,7 +600,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Forgets smart home appliances associated to a room.
" + "documentation":"Forgets smart home appliances associated to a room.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetAddressBook":{ "name":"GetAddressBook", @@ -541,7 +615,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Gets address the book details by the address book ARN.
" + "documentation":"Gets address the book details by the address book ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetConferencePreference":{ "name":"GetConferencePreference", @@ -554,7 +630,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Retrieves the existing conference preferences.
" + "documentation":"Retrieves the existing conference preferences.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetConferenceProvider":{ "name":"GetConferenceProvider", @@ -567,7 +645,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Gets details about a specific conference provider.
" + "documentation":"Gets details about a specific conference provider.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetContact":{ "name":"GetContact", @@ -580,7 +660,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Gets the contact details by the contact ARN.
" + "documentation":"Gets the contact details by the contact ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetDevice":{ "name":"GetDevice", @@ -593,7 +675,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Gets the details of a device by device ARN.
" + "documentation":"Gets the details of a device by device ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetGateway":{ "name":"GetGateway", @@ -606,7 +690,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Retrieves the details of a gateway.
" + "documentation":"Retrieves the details of a gateway.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetGatewayGroup":{ "name":"GetGatewayGroup", @@ -619,7 +705,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Retrieves the details of a gateway group.
" + "documentation":"Retrieves the details of a gateway group.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetInvitationConfiguration":{ "name":"GetInvitationConfiguration", @@ -632,7 +720,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Retrieves the configured values for the user enrollment invitation email template.
" + "documentation":"Retrieves the configured values for the user enrollment invitation email template.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetNetworkProfile":{ "name":"GetNetworkProfile", @@ -646,7 +736,9 @@ {"shape":"NotFoundException"}, {"shape":"InvalidSecretsManagerResourceException"} ], - "documentation":"Gets the network profile details by the network profile ARN.
" + "documentation":"Gets the network profile details by the network profile ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetProfile":{ "name":"GetProfile", @@ -659,7 +751,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Gets the details of a room profile by profile ARN.
" + "documentation":"Gets the details of a room profile by profile ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetRoom":{ "name":"GetRoom", @@ -672,7 +766,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Gets room details by room ARN.
" + "documentation":"Gets room details by room ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetRoomSkillParameter":{ "name":"GetRoomSkillParameter", @@ -685,7 +781,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Gets room skill parameter details by room, skill, and parameter key ARN.
" + "documentation":"Gets room skill parameter details by room, skill, and parameter key ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "GetSkillGroup":{ "name":"GetSkillGroup", @@ -698,7 +796,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Gets skill group details by skill group ARN.
" + "documentation":"Gets skill group details by skill group ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ListBusinessReportSchedules":{ "name":"ListBusinessReportSchedules", @@ -708,7 +808,9 @@ }, "input":{"shape":"ListBusinessReportSchedulesRequest"}, "output":{"shape":"ListBusinessReportSchedulesResponse"}, - "documentation":"Lists the details of the schedules that a user configured. A download URL of the report associated with each schedule is returned every time this action is called. A new download URL is returned each time, and is valid for 24 hours.
" + "documentation":"Lists the details of the schedules that a user configured. A download URL of the report associated with each schedule is returned every time this action is called. A new download URL is returned each time, and is valid for 24 hours.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ListConferenceProviders":{ "name":"ListConferenceProviders", @@ -718,7 +820,9 @@ }, "input":{"shape":"ListConferenceProvidersRequest"}, "output":{"shape":"ListConferenceProvidersResponse"}, - "documentation":"Lists conference providers under a specific AWS account.
" + "documentation":"Lists conference providers under a specific AWS account.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ListDeviceEvents":{ "name":"ListDeviceEvents", @@ -731,7 +835,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Lists the device event history, including device connection status, for up to 30 days.
" + "documentation":"Lists the device event history, including device connection status, for up to 30 days.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ListGatewayGroups":{ "name":"ListGatewayGroups", @@ -741,7 +847,9 @@ }, "input":{"shape":"ListGatewayGroupsRequest"}, "output":{"shape":"ListGatewayGroupsResponse"}, - "documentation":"Retrieves a list of gateway group summaries. Use GetGatewayGroup to retrieve details of a specific gateway group.
" + "documentation":"Retrieves a list of gateway group summaries. Use GetGatewayGroup to retrieve details of a specific gateway group.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ListGateways":{ "name":"ListGateways", @@ -751,7 +859,9 @@ }, "input":{"shape":"ListGatewaysRequest"}, "output":{"shape":"ListGatewaysResponse"}, - "documentation":"Retrieves a list of gateway summaries. Use GetGateway to retrieve details of a specific gateway. An optional gateway group ARN can be provided to only retrieve gateway summaries of gateways that are associated with that gateway group ARN.
" + "documentation":"Retrieves a list of gateway summaries. Use GetGateway to retrieve details of a specific gateway. An optional gateway group ARN can be provided to only retrieve gateway summaries of gateways that are associated with that gateway group ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ListSkills":{ "name":"ListSkills", @@ -761,7 +871,9 @@ }, "input":{"shape":"ListSkillsRequest"}, "output":{"shape":"ListSkillsResponse"}, - "documentation":"Lists all enabled skills in a specific skill group.
" + "documentation":"Lists all enabled skills in a specific skill group.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ListSkillsStoreCategories":{ "name":"ListSkillsStoreCategories", @@ -771,7 +883,9 @@ }, "input":{"shape":"ListSkillsStoreCategoriesRequest"}, "output":{"shape":"ListSkillsStoreCategoriesResponse"}, - "documentation":"Lists all categories in the Alexa skill store.
" + "documentation":"Lists all categories in the Alexa skill store.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ListSkillsStoreSkillsByCategory":{ "name":"ListSkillsStoreSkillsByCategory", @@ -781,7 +895,9 @@ }, "input":{"shape":"ListSkillsStoreSkillsByCategoryRequest"}, "output":{"shape":"ListSkillsStoreSkillsByCategoryResponse"}, - "documentation":"Lists all skills in the Alexa skill store by category.
" + "documentation":"Lists all skills in the Alexa skill store by category.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ListSmartHomeAppliances":{ "name":"ListSmartHomeAppliances", @@ -794,7 +910,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Lists all of the smart home appliances associated with a room.
" + "documentation":"Lists all of the smart home appliances associated with a room.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ListTags":{ "name":"ListTags", @@ -807,7 +925,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Lists all tags for the specified resource.
" + "documentation":"Lists all tags for the specified resource.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "PutConferencePreference":{ "name":"PutConferencePreference", @@ -820,7 +940,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Sets the conference preferences on a specific conference provider at the account level.
" + "documentation":"Sets the conference preferences on a specific conference provider at the account level.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "PutInvitationConfiguration":{ "name":"PutInvitationConfiguration", @@ -834,7 +956,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Configures the email template for the user enrollment invitation with the specified attributes.
" + "documentation":"Configures the email template for the user enrollment invitation with the specified attributes.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "PutRoomSkillParameter":{ "name":"PutRoomSkillParameter", @@ -847,7 +971,9 @@ "errors":[ {"shape":"ConcurrentModificationException"} ], - "documentation":"Updates room skill parameter details by room, skill, and parameter key ID. Not all skills have a room skill parameter.
" + "documentation":"Updates room skill parameter details by room, skill, and parameter key ID. Not all skills have a room skill parameter.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "PutSkillAuthorization":{ "name":"PutSkillAuthorization", @@ -861,7 +987,9 @@ {"shape":"UnauthorizedException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Links a user's account to a third-party skill provider. If this API operation is called by an assumed IAM role, the skill being linked must be a private skill. Also, the skill must be owned by the AWS account that assumed the IAM role.
" + "documentation":"Links a user's account to a third-party skill provider. If this API operation is called by an assumed IAM role, the skill being linked must be a private skill. Also, the skill must be owned by the AWS account that assumed the IAM role.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "RegisterAVSDevice":{ "name":"RegisterAVSDevice", @@ -877,7 +1005,9 @@ {"shape":"NotFoundException"}, {"shape":"InvalidDeviceException"} ], - "documentation":"Registers an Alexa-enabled device built by an Original Equipment Manufacturer (OEM) using Alexa Voice Service (AVS).
" + "documentation":"Registers an Alexa-enabled device built by an Original Equipment Manufacturer (OEM) using Alexa Voice Service (AVS).
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "RejectSkill":{ "name":"RejectSkill", @@ -891,7 +1021,9 @@ {"shape":"ConcurrentModificationException"}, {"shape":"NotFoundException"} ], - "documentation":"Disassociates a skill from the organization under a user's AWS account. If the skill is a private skill, it moves to an AcceptStatus of PENDING. Any private or public skill that is rejected can be added later by calling the ApproveSkill API.
" + "documentation":"Disassociates a skill from the organization under a user's AWS account. If the skill is a private skill, it moves to an AcceptStatus of PENDING. Any private or public skill that is rejected can be added later by calling the ApproveSkill API.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "ResolveRoom":{ "name":"ResolveRoom", @@ -904,7 +1036,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Determines the details for the room from which a skill request was invoked. This operation is used by skill developers.
To query ResolveRoom from an Alexa skill, the skill ID needs to be authorized. When the skill is using an AWS Lambda function, the skill is automatically authorized when you publish your skill as a private skill to your AWS account. Skills that are hosted using a custom web service must be manually authorized. To get your skill authorized, contact AWS Support with your AWS account ID that queries the ResolveRoom API and skill ID.
" + "documentation":"Determines the details for the room from which a skill request was invoked. This operation is used by skill developers.
To query ResolveRoom from an Alexa skill, the skill ID needs to be authorized. When the skill is using an AWS Lambda function, the skill is automatically authorized when you publish your skill as a private skill to your AWS account. Skills that are hosted using a custom web service must be manually authorized. To get your skill authorized, contact AWS Support with your AWS account ID that queries the ResolveRoom API and skill ID.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "RevokeInvitation":{ "name":"RevokeInvitation", @@ -918,7 +1052,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Revokes an invitation and invalidates the enrollment URL.
" + "documentation":"Revokes an invitation and invalidates the enrollment URL.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "SearchAddressBooks":{ "name":"SearchAddressBooks", @@ -928,7 +1064,9 @@ }, "input":{"shape":"SearchAddressBooksRequest"}, "output":{"shape":"SearchAddressBooksResponse"}, - "documentation":"Searches address books and lists the ones that meet a set of filter and sort criteria.
" + "documentation":"Searches address books and lists the ones that meet a set of filter and sort criteria.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "SearchContacts":{ "name":"SearchContacts", @@ -938,7 +1076,9 @@ }, "input":{"shape":"SearchContactsRequest"}, "output":{"shape":"SearchContactsResponse"}, - "documentation":"Searches contacts and lists the ones that meet a set of filter and sort criteria.
" + "documentation":"Searches contacts and lists the ones that meet a set of filter and sort criteria.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "SearchDevices":{ "name":"SearchDevices", @@ -948,7 +1088,9 @@ }, "input":{"shape":"SearchDevicesRequest"}, "output":{"shape":"SearchDevicesResponse"}, - "documentation":"Searches devices and lists the ones that meet a set of filter criteria.
" + "documentation":"Searches devices and lists the ones that meet a set of filter criteria.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "SearchNetworkProfiles":{ "name":"SearchNetworkProfiles", @@ -958,7 +1100,9 @@ }, "input":{"shape":"SearchNetworkProfilesRequest"}, "output":{"shape":"SearchNetworkProfilesResponse"}, - "documentation":"Searches network profiles and lists the ones that meet a set of filter and sort criteria.
" + "documentation":"Searches network profiles and lists the ones that meet a set of filter and sort criteria.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "SearchProfiles":{ "name":"SearchProfiles", @@ -968,7 +1112,9 @@ }, "input":{"shape":"SearchProfilesRequest"}, "output":{"shape":"SearchProfilesResponse"}, - "documentation":"Searches room profiles and lists the ones that meet a set of filter criteria.
" + "documentation":"Searches room profiles and lists the ones that meet a set of filter criteria.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "SearchRooms":{ "name":"SearchRooms", @@ -978,7 +1124,9 @@ }, "input":{"shape":"SearchRoomsRequest"}, "output":{"shape":"SearchRoomsResponse"}, - "documentation":"Searches rooms and lists the ones that meet a set of filter and sort criteria.
" + "documentation":"Searches rooms and lists the ones that meet a set of filter and sort criteria.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "SearchSkillGroups":{ "name":"SearchSkillGroups", @@ -988,7 +1136,9 @@ }, "input":{"shape":"SearchSkillGroupsRequest"}, "output":{"shape":"SearchSkillGroupsResponse"}, - "documentation":"Searches skill groups and lists the ones that meet a set of filter and sort criteria.
" + "documentation":"Searches skill groups and lists the ones that meet a set of filter and sort criteria.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "SearchUsers":{ "name":"SearchUsers", @@ -998,7 +1148,9 @@ }, "input":{"shape":"SearchUsersRequest"}, "output":{"shape":"SearchUsersResponse"}, - "documentation":"Searches users and lists the ones that meet a set of filter and sort criteria.
" + "documentation":"Searches users and lists the ones that meet a set of filter and sort criteria.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "SendAnnouncement":{ "name":"SendAnnouncement", @@ -1027,7 +1179,9 @@ {"shape":"InvalidUserStatusException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Sends an enrollment invitation email with a URL to a user. The URL is valid for 30 days or until you call this operation again, whichever comes first.
" + "documentation":"Sends an enrollment invitation email with a URL to a user. The URL is valid for 30 days or until you call this operation again, whichever comes first.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "StartDeviceSync":{ "name":"StartDeviceSync", @@ -1040,7 +1194,9 @@ "errors":[ {"shape":"DeviceNotRegisteredException"} ], - "documentation":"Resets a device and its account to the known default settings. This clears all information and settings set by previous users in the following ways:
Bluetooth - This unpairs all Bluetooth devices paired with your Echo device.
Volume - This resets the Echo device's volume to the default value.
Notifications - This clears all notifications from your Echo device.
Lists - This clears all to-do items from your Echo device.
Settings - This internally syncs the room's profile (if the device is assigned to a room), contacts, address books, delegation access for account linking, and communications (if enabled on the room profile).
Resets a device and its account to the known default settings. This clears all information and settings set by previous users in the following ways:
Bluetooth - This unpairs all Bluetooth devices paired with your Echo device.
Volume - This resets the Echo device's volume to the default value.
Notifications - This clears all notifications from your Echo device.
Lists - This clears all to-do items from your Echo device.
Settings - This internally syncs the room's profile (if the device is assigned to a room), contacts, address books, delegation access for account linking, and communications (if enabled on the room profile).
Initiates the discovery of any smart home appliances associated with the room.
" + "documentation":"Initiates the discovery of any smart home appliances associated with the room.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "TagResource":{ "name":"TagResource", @@ -1066,7 +1224,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Adds metadata tags to a specified resource.
" + "documentation":"Adds metadata tags to a specified resource.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UntagResource":{ "name":"UntagResource", @@ -1079,7 +1239,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Removes metadata tags from a specified resource.
" + "documentation":"Removes metadata tags from a specified resource.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UpdateAddressBook":{ "name":"UpdateAddressBook", @@ -1094,7 +1256,9 @@ {"shape":"NameInUseException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Updates address book details by the address book ARN.
" + "documentation":"Updates address book details by the address book ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UpdateBusinessReportSchedule":{ "name":"UpdateBusinessReportSchedule", @@ -1108,7 +1272,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Updates the configuration of the report delivery schedule with the specified schedule ARN.
" + "documentation":"Updates the configuration of the report delivery schedule with the specified schedule ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UpdateConferenceProvider":{ "name":"UpdateConferenceProvider", @@ -1121,7 +1287,9 @@ "errors":[ {"shape":"NotFoundException"} ], - "documentation":"Updates an existing conference provider's settings.
" + "documentation":"Updates an existing conference provider's settings.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UpdateContact":{ "name":"UpdateContact", @@ -1135,7 +1303,9 @@ {"shape":"NotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Updates the contact details by the contact ARN.
" + "documentation":"Updates the contact details by the contact ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UpdateDevice":{ "name":"UpdateDevice", @@ -1150,7 +1320,9 @@ {"shape":"ConcurrentModificationException"}, {"shape":"DeviceNotRegisteredException"} ], - "documentation":"Updates the device name by device ARN.
" + "documentation":"Updates the device name by device ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UpdateGateway":{ "name":"UpdateGateway", @@ -1164,7 +1336,9 @@ {"shape":"NotFoundException"}, {"shape":"NameInUseException"} ], - "documentation":"Updates the details of a gateway. If any optional field is not provided, the existing corresponding value is left unmodified.
" + "documentation":"Updates the details of a gateway. If any optional field is not provided, the existing corresponding value is left unmodified.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UpdateGatewayGroup":{ "name":"UpdateGatewayGroup", @@ -1178,7 +1352,9 @@ {"shape":"NotFoundException"}, {"shape":"NameInUseException"} ], - "documentation":"Updates the details of a gateway group. If any optional field is not provided, the existing corresponding value is left unmodified.
" + "documentation":"Updates the details of a gateway group. If any optional field is not provided, the existing corresponding value is left unmodified.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UpdateNetworkProfile":{ "name":"UpdateNetworkProfile", @@ -1195,7 +1371,9 @@ {"shape":"InvalidCertificateAuthorityException"}, {"shape":"InvalidSecretsManagerResourceException"} ], - "documentation":"Updates a network profile by the network profile ARN.
" + "documentation":"Updates a network profile by the network profile ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UpdateProfile":{ "name":"UpdateProfile", @@ -1210,7 +1388,9 @@ {"shape":"NameInUseException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Updates an existing room profile by room profile ARN.
" + "documentation":"Updates an existing room profile by room profile ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UpdateRoom":{ "name":"UpdateRoom", @@ -1224,7 +1404,9 @@ {"shape":"NotFoundException"}, {"shape":"NameInUseException"} ], - "documentation":"Updates room details by room ARN.
" + "documentation":"Updates room details by room ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" }, "UpdateSkillGroup":{ "name":"UpdateSkillGroup", @@ -1239,7 +1421,9 @@ {"shape":"NameInUseException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"Updates skill group details by skill group ARN.
" + "documentation":"Updates skill group details by skill group ARN.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" } }, "shapes":{ @@ -2126,7 +2310,8 @@ "RequireCheckIn":{ "shape":"CreateRequireCheckIn", "documentation":"Settings for requiring a check in when a room is reserved. Alexa can cancel a room reservation if it's not checked into to make the room available for others. Users can check in by joining the meeting with Alexa or an AVS device, or by saying “Alexa, check in.”
" - } + }, + "ProactiveJoin":{"shape":"CreateProactiveJoin"} }, "documentation":"Creates meeting room settings of a room profile.
" }, @@ -2194,6 +2379,13 @@ } } }, + "CreateProactiveJoin":{ + "type":"structure", + "required":["EnabledByMotion"], + "members":{ + "EnabledByMotion":{"shape":"Boolean"} + } + }, "CreateProfileRequest":{ "type":"structure", "required":[ @@ -3040,7 +3232,7 @@ "type":"string", "max":128, "min":1, - "pattern":"([0-9a-zA-Z]([+-.\\w]*[0-9a-zA-Z])*@([0-9a-zA-Z]([-\\w]*[0-9a-zA-Z]+)*\\.)+[a-zA-Z]{2,9})" + "pattern":"\\w[+-.\\w]*@\\w[\\w\\.\\-]+\\.[0-9a-zA-Z]{2,24}" }, "EnablementType":{ "type":"string", @@ -3974,7 +4166,8 @@ "RequireCheckIn":{ "shape":"RequireCheckIn", "documentation":"Settings for requiring a check in when a room is reserved. Alexa can cancel a room reservation if it's not checked into. This makes the room available for others. Users can check in by joining the meeting with Alexa or an AVS device, or by saying “Alexa, check in.”
" - } + }, + "ProactiveJoin":{"shape":"ProactiveJoin"} }, "documentation":"Meeting room settings of a room profile.
" }, @@ -4220,6 +4413,12 @@ "sensitive":true }, "PrivacyPolicy":{"type":"string"}, + "ProactiveJoin":{ + "type":"structure", + "members":{ + "EnabledByMotion":{"shape":"Boolean"} + } + }, "ProductDescription":{"type":"string"}, "ProductId":{ "type":"string", @@ -5852,7 +6051,8 @@ "RequireCheckIn":{ "shape":"UpdateRequireCheckIn", "documentation":"Settings for requiring a check in when a room is reserved. Alexa can cancel a room reservation if it's not checked into to make the room available for others. Users can check in by joining the meeting with Alexa or an AVS device, or by saying “Alexa, check in.”
" - } + }, + "ProactiveJoin":{"shape":"UpdateProactiveJoin"} }, "documentation":"Updates meeting room settings of a room profile.
" }, @@ -5895,6 +6095,13 @@ "members":{ } }, + "UpdateProactiveJoin":{ + "type":"structure", + "required":["EnabledByMotion"], + "members":{ + "EnabledByMotion":{"shape":"Boolean"} + } + }, "UpdateProfileRequest":{ "type":"structure", "members":{ @@ -6105,5 +6312,7 @@ "pattern":"[a-zA-Z0-9@_+.-]*" } }, - "documentation":"Alexa for Business helps you use Alexa in your organization. Alexa for Business provides you with the tools to manage Alexa devices, enroll your users, and assign skills, at scale. You can build your own context-aware voice skills using the Alexa Skills Kit and the Alexa for Business API operations. You can also make these available as private skills for your organization. Alexa for Business makes it efficient to voice-enable your products and services, thus providing context-aware voice experiences for your customers. Device makers building with the Alexa Voice Service (AVS) can create fully integrated solutions, register their products with Alexa for Business, and manage them as shared devices in their organization.
" + "documentation":"Alexa for Business has been retired and is no longer supported.
", + "deprecated":true, + "deprecatedMessage":"Alexa For Business is no longer supported" } From 585cff6c4f5da82a80ed9f47b3056cb726a5a06e Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 1 Jun 2023 18:09:55 +0000 Subject: [PATCH 014/317] Amazon Interactive Video Service Update: API Update for IVS Advanced Channel type --- ...AmazonInteractiveVideoService-643eb10.json | 6 + .../codegen-resources/endpoint-tests.json | 110 +++++++++--------- .../codegen-resources/service-2.json | 39 ++++++- 3 files changed, 95 insertions(+), 60 deletions(-) create mode 100644 .changes/next-release/feature-AmazonInteractiveVideoService-643eb10.json diff --git a/.changes/next-release/feature-AmazonInteractiveVideoService-643eb10.json b/.changes/next-release/feature-AmazonInteractiveVideoService-643eb10.json new file mode 100644 index 000000000000..c315a003d2b6 --- /dev/null +++ b/.changes/next-release/feature-AmazonInteractiveVideoService-643eb10.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Interactive Video Service", + "contributor": "", + "description": "API Update for IVS Advanced Channel type" +} diff --git a/services/ivs/src/main/resources/codegen-resources/endpoint-tests.json b/services/ivs/src/main/resources/codegen-resources/endpoint-tests.json index 893c2009eed2..de653a93c40a 100644 --- a/services/ivs/src/main/resources/codegen-resources/endpoint-tests.json +++ b/services/ivs/src/main/resources/codegen-resources/endpoint-tests.json @@ -8,9 +8,9 @@ } }, "params": { + "Region": "ap-northeast-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-northeast-1" + "UseDualStack": false } }, { @@ -21,9 +21,9 @@ } }, "params": { + "Region": "ap-northeast-2", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-northeast-2" + "UseDualStack": false } }, { @@ -34,9 +34,9 @@ } }, "params": { + "Region": "ap-south-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-south-1" + "UseDualStack": false } }, { @@ -47,9 +47,9 @@ } }, "params": { + 
"Region": "eu-central-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "eu-central-1" + "UseDualStack": false } }, { @@ -60,9 +60,9 @@ } }, "params": { + "Region": "eu-west-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "eu-west-1" + "UseDualStack": false } }, { @@ -73,9 +73,9 @@ } }, "params": { + "Region": "us-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "us-east-1" + "UseDualStack": false } }, { @@ -86,9 +86,9 @@ } }, "params": { + "Region": "us-west-2", "UseFIPS": false, - "UseDualStack": false, - "Region": "us-west-2" + "UseDualStack": false } }, { @@ -99,9 +99,9 @@ } }, "params": { + "Region": "us-east-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "us-east-1" + "UseDualStack": true } }, { @@ -112,9 +112,9 @@ } }, "params": { + "Region": "us-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "us-east-1" + "UseDualStack": false } }, { @@ -125,9 +125,9 @@ } }, "params": { + "Region": "us-east-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "us-east-1" + "UseDualStack": true } }, { @@ -138,9 +138,9 @@ } }, "params": { + "Region": "cn-north-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "cn-north-1" + "UseDualStack": true } }, { @@ -151,9 +151,9 @@ } }, "params": { + "Region": "cn-north-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "cn-north-1" + "UseDualStack": false } }, { @@ -164,9 +164,9 @@ } }, "params": { + "Region": "cn-north-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "cn-north-1" + "UseDualStack": true } }, { @@ -177,9 +177,9 @@ } }, "params": { + "Region": "cn-north-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "cn-north-1" + "UseDualStack": false } }, { @@ -190,9 +190,9 @@ } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "us-gov-east-1" + "UseDualStack": true } }, { @@ -203,9 +203,9 @@ } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": 
"us-gov-east-1" + "UseDualStack": false } }, { @@ -216,9 +216,9 @@ } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "us-gov-east-1" + "UseDualStack": true } }, { @@ -229,9 +229,9 @@ } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "us-gov-east-1" + "UseDualStack": false } }, { @@ -240,9 +240,9 @@ "error": "FIPS and DualStack are enabled, but this partition does not support one or both" }, "params": { + "Region": "us-iso-east-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "us-iso-east-1" + "UseDualStack": true } }, { @@ -253,9 +253,9 @@ } }, "params": { + "Region": "us-iso-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "us-iso-east-1" + "UseDualStack": false } }, { @@ -264,9 +264,9 @@ "error": "DualStack is enabled but this partition does not support DualStack" }, "params": { + "Region": "us-iso-east-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "us-iso-east-1" + "UseDualStack": true } }, { @@ -277,9 +277,9 @@ } }, "params": { + "Region": "us-iso-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "us-iso-east-1" + "UseDualStack": false } }, { @@ -288,9 +288,9 @@ "error": "FIPS and DualStack are enabled, but this partition does not support one or both" }, "params": { + "Region": "us-isob-east-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "us-isob-east-1" + "UseDualStack": true } }, { @@ -301,9 +301,9 @@ } }, "params": { + "Region": "us-isob-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "us-isob-east-1" + "UseDualStack": false } }, { @@ -312,9 +312,9 @@ "error": "DualStack is enabled but this partition does not support DualStack" }, "params": { + "Region": "us-isob-east-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "us-isob-east-1" + "UseDualStack": true } }, { @@ -325,9 +325,9 @@ } }, "params": { + "Region": "us-isob-east-1", "UseFIPS": false, - "UseDualStack": false, - 
"Region": "us-isob-east-1" + "UseDualStack": false } }, { @@ -338,9 +338,9 @@ } }, "params": { + "Region": "us-east-1", "UseFIPS": false, "UseDualStack": false, - "Region": "us-east-1", "Endpoint": "https://example.com" } }, @@ -363,9 +363,9 @@ "error": "Invalid Configuration: FIPS and custom endpoint are not supported" }, "params": { + "Region": "us-east-1", "UseFIPS": true, "UseDualStack": false, - "Region": "us-east-1", "Endpoint": "https://example.com" } }, @@ -375,9 +375,9 @@ "error": "Invalid Configuration: Dualstack and custom endpoint are not supported" }, "params": { + "Region": "us-east-1", "UseFIPS": false, "UseDualStack": true, - "Region": "us-east-1", "Endpoint": "https://example.com" } }, diff --git a/services/ivs/src/main/resources/codegen-resources/service-2.json b/services/ivs/src/main/resources/codegen-resources/service-2.json index 0f305c5fb55d..1a8e9808d6bc 100644 --- a/services/ivs/src/main/resources/codegen-resources/service-2.json +++ b/services/ivs/src/main/resources/codegen-resources/service-2.json @@ -467,7 +467,7 @@ {"shape":"PendingVerification"}, {"shape":"ConflictException"} ], - "documentation":"Updates a channel's configuration. This does not affect an ongoing stream of this channel. You must stop and restart the stream for the changes to take effect.
" + "documentation":"Updates a channel's configuration. Live channels cannot be updated. You must stop the ongoing stream, update the channel, and restart the stream for the changes to take effect.
" } }, "shapes":{ @@ -608,6 +608,10 @@ "shape":"PlaybackURL", "documentation":"Channel playback URL.
" }, + "preset":{ + "shape":"TranscodePreset", + "documentation":"Optional transcode preset for the channel. This is selectable only for ADVANCED_HD and ADVANCED_SD channel types. For those channel types, the default preset is HIGHER_BANDWIDTH_DELIVERY. For other channel types (BASIC and STANDARD), preset is the empty string (\"\").
Recording-configuration ARN. A value other than an empty string indicates that recording is enabled. Default: \"\" (empty string, recording is disabled).
" @@ -618,7 +622,7 @@ }, "type":{ "shape":"ChannelType", - "documentation":"Channel type, which determines the allowable resolution and bitrate. If you exceed the allowable resolution or bitrate, the stream probably will disconnect immediately. Default: STANDARD. Valid values:
STANDARD: Video is transcoded: multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Transcoding allows higher playback quality across a range of download speeds. Resolution can be up to 1080p and bitrate can be up to 8.5 Mbps. Audio is transcoded only for renditions 360p and below; above that, audio is passed through. This is the default.
BASIC: Video is transmuxed: Amazon IVS delivers the original input to viewers. The viewer’s video-quality choice is limited to the original input. Resolution can be up to 1080p and bitrate can be up to 1.5 Mbps for 480p and up to 3.5 Mbps for resolutions between 480p and 1080p.
Channel type, which determines the allowable resolution and bitrate. If you exceed the allowable input resolution or bitrate, the stream probably will disconnect immediately. Some types generate multiple qualities (renditions) from the original input; this automatically gives viewers the best experience for their devices and network conditions. Some types provide transcoded video; transcoding allows higher playback quality across a range of download speeds. Default: STANDARD. Valid values:
BASIC: Video is transmuxed: Amazon IVS delivers the original input quality to viewers. The viewer’s video-quality choice is limited to the original input. Input resolution can be up to 1080p and bitrate can be up to 1.5 Mbps for 480p and up to 3.5 Mbps for resolutions between 480p and 1080p. Original audio is passed through.
STANDARD: Video is transcoded: multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Transcoding allows higher playback quality across a range of download speeds. Resolution can be up to 1080p and bitrate can be up to 8.5 Mbps. Audio is transcoded only for renditions 360p and below; above that, audio is passed through. This is the default when you create a channel.
ADVANCED_SD: Video is transcoded; multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Input resolution can be up to 1080p and bitrate can be up to 8.5 Mbps; output is capped at SD quality (480p). You can select an optional transcode preset (see below). Audio for all renditions is transcoded, and an audio-only rendition is available.
ADVANCED_HD: Video is transcoded; multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Input resolution can be up to 1080p and bitrate can be up to 8.5 Mbps; output is capped at HD quality (720p). You can select an optional transcode preset (see below). Audio for all renditions is transcoded, and an audio-only rendition is available.
Optional transcode presets (available for the ADVANCED types) allow you to trade off available download bandwidth and video quality, to optimize the viewing experience. There are two presets:
Constrained bandwidth delivery uses a lower bitrate for each quality level. Use it if you have low download bandwidth and/or simple video content (e.g., talking heads).
Higher bandwidth delivery uses a higher bitrate for each quality level. Use it if you have high download bandwidth and/or complex video content (e.g., flashes and quick scene changes).
Object specifying a channel.
" @@ -696,6 +700,10 @@ "shape":"ChannelName", "documentation":"Channel name.
" }, + "preset":{ + "shape":"TranscodePreset", + "documentation":"Optional transcode preset for the channel. This is selectable only for ADVANCED_HD and ADVANCED_SD channel types. For those channel types, the default preset is HIGHER_BANDWIDTH_DELIVERY. For other channel types (BASIC and STANDARD), preset is the empty string (\"\").
Recording-configuration ARN. A value other than an empty string indicates that recording is enabled. Default: \"\" (empty string, recording is disabled).
" @@ -703,6 +711,10 @@ "tags":{ "shape":"Tags", "documentation":"Tags attached to the resource. Array of 1-50 maps, each of the form string:string (key:value). See Tagging Amazon Web Services Resources for more information, including restrictions that apply to tags and \"Tag naming limits and requirements\"; Amazon IVS has no service-specific constraints beyond what is documented there.
Channel type, which determines the allowable resolution and bitrate. If you exceed the allowable input resolution or bitrate, the stream probably will disconnect immediately. Some types generate multiple qualities (renditions) from the original input; this automatically gives viewers the best experience for their devices and network conditions. Some types provide transcoded video; transcoding allows higher playback quality across a range of download speeds. Default: STANDARD. Valid values:
BASIC: Video is transmuxed: Amazon IVS delivers the original input quality to viewers. The viewer’s video-quality choice is limited to the original input. Input resolution can be up to 1080p and bitrate can be up to 1.5 Mbps for 480p and up to 3.5 Mbps for resolutions between 480p and 1080p. Original audio is passed through.
STANDARD: Video is transcoded: multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Transcoding allows higher playback quality across a range of download speeds. Resolution can be up to 1080p and bitrate can be up to 8.5 Mbps. Audio is transcoded only for renditions 360p and below; above that, audio is passed through. This is the default when you create a channel.
ADVANCED_SD: Video is transcoded; multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Input resolution can be up to 1080p and bitrate can be up to 8.5 Mbps; output is capped at SD quality (480p). You can select an optional transcode preset (see below). Audio for all renditions is transcoded, and an audio-only rendition is available.
ADVANCED_HD: Video is transcoded; multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Input resolution can be up to 1080p and bitrate can be up to 8.5 Mbps; output is capped at HD quality (720p). You can select an optional transcode preset (see below). Audio for all renditions is transcoded, and an audio-only rendition is available.
Optional transcode presets (available for the ADVANCED types) allow you to trade off available download bandwidth and video quality, to optimize the viewing experience. There are two presets:
Constrained bandwidth delivery uses a lower bitrate for each quality level. Use it if you have low download bandwidth and/or simple video content (e.g., talking heads).
Higher bandwidth delivery uses a higher bitrate for each quality level. Use it if you have high download bandwidth and/or complex video content (e.g., flashes and quick scene changes).
Summary information about a channel.
" @@ -711,7 +723,9 @@ "type":"string", "enum":[ "BASIC", - "STANDARD" + "STANDARD", + "ADVANCED_SD", + "ADVANCED_HD" ] }, "Channels":{ @@ -752,6 +766,10 @@ "shape":"ChannelName", "documentation":"Channel name.
" }, + "preset":{ + "shape":"TranscodePreset", + "documentation":"Optional transcode preset for the channel. This is selectable only for ADVANCED_HD and ADVANCED_SD channel types. For those channel types, the default preset is HIGHER_BANDWIDTH_DELIVERY. For other channel types (BASIC and STANDARD), preset is the empty string (\"\").
Recording-configuration ARN. Default: \"\" (empty string, recording is disabled).
" @@ -762,7 +780,7 @@ }, "type":{ "shape":"ChannelType", - "documentation":"Channel type, which determines the allowable resolution and bitrate. If you exceed the allowable resolution or bitrate, the stream probably will disconnect immediately. Default: STANDARD. Valid values:
STANDARD: Video is transcoded: multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Transcoding allows higher playback quality across a range of download speeds. Resolution can be up to 1080p and bitrate can be up to 8.5 Mbps. Audio is transcoded only for renditions 360p and below; above that, audio is passed through. This is the default.
BASIC: Video is transmuxed: Amazon IVS delivers the original input to viewers. The viewer’s video-quality choice is limited to the original input. Resolution can be up to 1080p and bitrate can be up to 1.5 Mbps for 480p and up to 3.5 Mbps for resolutions between 480p and 1080p.
Channel type, which determines the allowable resolution and bitrate. If you exceed the allowable input resolution or bitrate, the stream probably will disconnect immediately. Some types generate multiple qualities (renditions) from the original input; this automatically gives viewers the best experience for their devices and network conditions. Some types provide transcoded video; transcoding allows higher playback quality across a range of download speeds. Default: STANDARD. Valid values:
BASIC: Video is transmuxed: Amazon IVS delivers the original input quality to viewers. The viewer’s video-quality choice is limited to the original input. Input resolution can be up to 1080p and bitrate can be up to 1.5 Mbps for 480p and up to 3.5 Mbps for resolutions between 480p and 1080p. Original audio is passed through.
STANDARD: Video is transcoded: multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Transcoding allows higher playback quality across a range of download speeds. Resolution can be up to 1080p and bitrate can be up to 8.5 Mbps. Audio is transcoded only for renditions 360p and below; above that, audio is passed through. This is the default when you create a channel.
ADVANCED_SD: Video is transcoded; multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Input resolution can be up to 1080p and bitrate can be up to 8.5 Mbps; output is capped at SD quality (480p). You can select an optional transcode preset (see below). Audio for all renditions is transcoded, and an audio-only rendition is available.
ADVANCED_HD: Video is transcoded; multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Input resolution can be up to 1080p and bitrate can be up to 8.5 Mbps; output is capped at HD quality (720p). You can select an optional transcode preset (see below). Audio for all renditions is transcoded, and an audio-only rendition is available.
Optional transcode presets (available for the ADVANCED types) allow you to trade off available download bandwidth and video quality, to optimize the viewing experience. There are two presets:
Constrained bandwidth delivery uses a lower bitrate for each quality level. Use it if you have low download bandwidth and/or simple video content (e.g., talking heads).
Higher bandwidth delivery uses a higher bitrate for each quality level. Use it if you have high download bandwidth and/or complex video content (e.g., flashes and quick scene changes).
Channel name.
" }, + "preset":{ + "shape":"TranscodePreset", + "documentation":"Optional transcode preset for the channel. This is selectable only for ADVANCED_HD and ADVANCED_SD channel types. For those channel types, the default preset is HIGHER_BANDWIDTH_DELIVERY. For other channel types (BASIC and STANDARD), preset is the empty string (\"\").
Recording-configuration ARN. If this is set to an empty string, recording is disabled. A value other than an empty string indicates that recording is enabled.
" }, "type":{ "shape":"ChannelType", - "documentation":"Channel type, which determines the allowable resolution and bitrate. If you exceed the allowable resolution or bitrate, the stream probably will disconnect immediately. Valid values:
STANDARD: Video is transcoded: multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Transcoding allows higher playback quality across a range of download speeds. Resolution can be up to 1080p and bitrate can be up to 8.5 Mbps. Audio is transcoded only for renditions 360p and below; above that, audio is passed through. This is the default.
BASIC: Video is transmuxed: Amazon IVS delivers the original input to viewers. The viewer’s video-quality choice is limited to the original input. Resolution can be up to 1080p and bitrate can be up to 1.5 Mbps for 480p and up to 3.5 Mbps for resolutions between 480p and 1080p.
Channel type, which determines the allowable resolution and bitrate. If you exceed the allowable input resolution or bitrate, the stream probably will disconnect immediately. Some types generate multiple qualities (renditions) from the original input; this automatically gives viewers the best experience for their devices and network conditions. Some types provide transcoded video; transcoding allows higher playback quality across a range of download speeds. Default: STANDARD. Valid values:
BASIC: Video is transmuxed: Amazon IVS delivers the original input quality to viewers. The viewer’s video-quality choice is limited to the original input. Input resolution can be up to 1080p and bitrate can be up to 1.5 Mbps for 480p and up to 3.5 Mbps for resolutions between 480p and 1080p. Original audio is passed through.
STANDARD: Video is transcoded: multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Transcoding allows higher playback quality across a range of download speeds. Resolution can be up to 1080p and bitrate can be up to 8.5 Mbps. Audio is transcoded only for renditions 360p and below; above that, audio is passed through. This is the default when you create a channel.
ADVANCED_SD: Video is transcoded; multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Input resolution can be up to 1080p and bitrate can be up to 8.5 Mbps; output is capped at SD quality (480p). You can select an optional transcode preset (see below). Audio for all renditions is transcoded, and an audio-only rendition is available.
ADVANCED_HD: Video is transcoded; multiple qualities are generated from the original input, to automatically give viewers the best experience for their devices and network conditions. Input resolution can be up to 1080p and bitrate can be up to 8.5 Mbps; output is capped at HD quality (720p). You can select an optional transcode preset (see below). Audio for all renditions is transcoded, and an audio-only rendition is available.
Optional transcode presets (available for the ADVANCED types) allow you to trade off available download bandwidth and video quality, to optimize the viewing experience. There are two presets:
Constrained bandwidth delivery uses a lower bitrate for each quality level. Use it if you have low download bandwidth and/or simple video content (e.g., talking heads).
Higher bandwidth delivery uses a higher bitrate for each quality level. Use it if you have high download bandwidth and/or complex video content (e.g., flashes and quick scene changes).
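The input caps described for each channel type above can be summarized in a small sketch. This is purely illustrative Python, not part of the IVS API or this SDK; the table and helper names are hypothetical, and only the numeric limits come from the documentation above.

```python
# Illustrative sketch of the IVS input limits described above.
# Only the numbers come from the documentation; the structure is hypothetical.
CHANNEL_LIMITS = {
    # channel type: (max input height, max input bitrate in Mbps)
    "BASIC": (1080, 3.5),        # 1.5 Mbps up to 480p, 3.5 Mbps up to 1080p
    "STANDARD": (1080, 8.5),
    "ADVANCED_SD": (1080, 8.5),  # output capped at SD (480p)
    "ADVANCED_HD": (1080, 8.5),  # output capped at HD (720p)
}

def input_allowed(channel_type: str, height: int, bitrate_mbps: float) -> bool:
    """Return True if the stream input is within the channel type's limits.

    Per the documentation above, exceeding the allowable input resolution
    or bitrate will probably disconnect the stream immediately.
    """
    max_height, max_bitrate = CHANNEL_LIMITS[channel_type]
    if channel_type == "BASIC" and height <= 480:
        max_bitrate = 1.5  # BASIC uses the lower cap for 480p and below
    return height <= max_height and bitrate_mbps <= max_bitrate
```

For example, a 1080p/9 Mbps input exceeds every type's cap, while 1080p/3.5 Mbps is acceptable even on BASIC.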
Associates a new key value with a specific profile, such as a Contact Record ContactId.
A profile object can have a single unique key and any number of additional keys that can be used to identify the profile that it belongs to.
" }, + "CreateCalculatedAttributeDefinition":{ + "name":"CreateCalculatedAttributeDefinition", + "http":{ + "method":"POST", + "requestUri":"/domains/{DomainName}/calculated-attributes/{CalculatedAttributeName}" + }, + "input":{"shape":"CreateCalculatedAttributeDefinitionRequest"}, + "output":{"shape":"CreateCalculatedAttributeDefinitionResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Creates a new calculated attribute definition. After creation, new object data ingested into Customer Profiles will be included in the calculated attribute, which can be retrieved for a profile using the GetCalculatedAttributeForProfile API. Defining a calculated attribute makes it available for all profiles within a domain. Each calculated attribute can only reference one ObjectType and at most, two fields from that ObjectType.
Creates a standard profile.
A standard profile represents the following attributes for a customer profile in a domain.
" }, + "DeleteCalculatedAttributeDefinition":{ + "name":"DeleteCalculatedAttributeDefinition", + "http":{ + "method":"DELETE", + "requestUri":"/domains/{DomainName}/calculated-attributes/{CalculatedAttributeName}" + }, + "input":{"shape":"DeleteCalculatedAttributeDefinitionRequest"}, + "output":{"shape":"DeleteCalculatedAttributeDefinitionResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Deletes an existing calculated attribute definition. Note that deleting a default calculated attribute is possible; however, once deleted, you will be unable to undo that action and will need to recreate it on your own using the CreateCalculatedAttributeDefinition API if you want it back.
" + }, "DeleteDomain":{ "name":"DeleteDomain", "http":{ @@ -217,6 +251,40 @@ ], "documentation":"Tests the auto-merging settings of your Identity Resolution Job without merging your data. It randomly selects a sample of matching groups from the existing matching results, and applies the automerging settings that you provided. You can then view the number of profiles in the sample, the number of matches, and the number of profiles identified to be merged. This enables you to evaluate the accuracy of the attributes in your matching list.
You can't view which profiles are matched and would be merged.
We strongly recommend you use this API to do a dry run of the auto-merging process before running the Identity Resolution Job. Include at least two matching attributes. If your matching list includes too few attributes (such as only FirstName or only LastName), there may be a large number of matches. This increases the chances of erroneous merges.
Provides more information on a calculated attribute definition for Customer Profiles.
" + }, + "GetCalculatedAttributeForProfile":{ + "name":"GetCalculatedAttributeForProfile", + "http":{ + "method":"GET", + "requestUri":"/domains/{DomainName}/profile/{ProfileId}/calculated-attributes/{CalculatedAttributeName}" + }, + "input":{"shape":"GetCalculatedAttributeForProfileRequest"}, + "output":{"shape":"GetCalculatedAttributeForProfileResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Retrieve a calculated attribute for a customer profile.
" + }, "GetDomain":{ "name":"GetDomain", "http":{ @@ -370,6 +438,40 @@ ], "documentation":"Lists all of the integrations associated to a specific URI in the AWS account.
" }, + "ListCalculatedAttributeDefinitions":{ + "name":"ListCalculatedAttributeDefinitions", + "http":{ + "method":"GET", + "requestUri":"/domains/{DomainName}/calculated-attributes" + }, + "input":{"shape":"ListCalculatedAttributeDefinitionsRequest"}, + "output":{"shape":"ListCalculatedAttributeDefinitionsResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Lists calculated attribute definitions for Customer Profiles.
" + }, + "ListCalculatedAttributesForProfile":{ + "name":"ListCalculatedAttributesForProfile", + "http":{ + "method":"GET", + "requestUri":"/domains/{DomainName}/profile/{ProfileId}/calculated-attributes" + }, + "input":{"shape":"ListCalculatedAttributesForProfileRequest"}, + "output":{"shape":"ListCalculatedAttributesForProfileResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Retrieve a list of calculated attributes for a customer profile.
" + }, "ListDomains":{ "name":"ListDomains", "http":{ @@ -618,6 +720,23 @@ ], "documentation":"Removes one or more tags from the specified Amazon Connect Customer Profiles resource. In Connect Customer Profiles, domains, profile object types, and integrations can be tagged.
" }, + "UpdateCalculatedAttributeDefinition":{ + "name":"UpdateCalculatedAttributeDefinition", + "http":{ + "method":"PUT", + "requestUri":"/domains/{DomainName}/calculated-attributes/{CalculatedAttributeName}" + }, + "input":{"shape":"UpdateCalculatedAttributeDefinitionRequest"}, + "output":{"shape":"UpdateCalculatedAttributeDefinitionResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Updates an existing calculated attribute definition. When updating the Conditions, note that increasing the date range of a calculated attribute will not trigger inclusion of historical data greater than the current date range.
" + }, "UpdateDomain":{ "name":"UpdateDomain", "http":{ @@ -880,6 +999,41 @@ }, "documentation":"Workflow step details for APPFLOW_INTEGRATION workflow.
A list of attribute items specified in the mathematical expression.
" + }, + "Expression":{ + "shape":"string1To255", + "documentation":"Mathematical expression that is performed on attribute items provided in the attribute list. Each element in the expression should follow the structure of \\\"{ObjectTypeName.AttributeName}\\\".
" + } + }, + "documentation":"Mathematical expression and a list of attribute items specified in that expression.
" + }, + "AttributeItem":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"attributeName", + "documentation":"The name of an attribute defined in a profile object type.
" + } + }, + "documentation":"The details of a single attribute item specified in the mathematical expression.
" + }, + "AttributeList":{ + "type":"list", + "member":{"shape":"AttributeItem"}, + "max":2, + "min":1 + }, "AttributeSourceIdMap":{ "type":"map", "key":{"shape":"string1To255"}, @@ -955,6 +1109,32 @@ "max":512, "pattern":".*" }, + "CalculatedAttributeDefinitionsList":{ + "type":"list", + "member":{"shape":"ListCalculatedAttributeDefinitionItem"} + }, + "CalculatedAttributesForProfileList":{ + "type":"list", + "member":{"shape":"ListCalculatedAttributeForProfileItem"} + }, + "Conditions":{ + "type":"structure", + "members":{ + "Range":{ + "shape":"Range", + "documentation":"The relative time period over which data is included in the aggregation.
" + }, + "ObjectCount":{ + "shape":"ObjectCount", + "documentation":"The number of profile objects used for the calculated attribute.
" + }, + "Threshold":{ + "shape":"Threshold", + "documentation":"The threshold for the calculated attribute.
" + } + }, + "documentation":"The conditions including range, object count, and threshold for the calculated attribute.
" + }, "ConflictResolution":{ "type":"structure", "required":["ConflictResolvingModel"], @@ -1019,6 +1199,94 @@ }, "documentation":"The matching criteria to be used during the auto-merging process.
" }, + "CreateCalculatedAttributeDefinitionRequest":{ + "type":"structure", + "required":[ + "DomainName", + "CalculatedAttributeName", + "AttributeDetails", + "Statistic" + ], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
", + "location":"uri", + "locationName":"DomainName" + }, + "CalculatedAttributeName":{ + "shape":"typeName", + "documentation":"The unique name of the calculated attribute.
", + "location":"uri", + "locationName":"CalculatedAttributeName" + }, + "DisplayName":{ + "shape":"displayName", + "documentation":"The display name of the calculated attribute.
" + }, + "Description":{ + "shape":"text", + "documentation":"The description of the calculated attribute.
" + }, + "AttributeDetails":{ + "shape":"AttributeDetails", + "documentation":"Mathematical expression and a list of attribute items specified in that expression.
" + }, + "Conditions":{ + "shape":"Conditions", + "documentation":"The conditions including range, object count, and threshold for the calculated attribute.
" + }, + "Statistic":{ + "shape":"Statistic", + "documentation":"The aggregation operation to perform for the calculated attribute.
" + }, + "Tags":{ + "shape":"TagMap", + "documentation":"The tags used to organize, track, or control access for this resource.
" + } + } + }, + "CreateCalculatedAttributeDefinitionResponse":{ + "type":"structure", + "members":{ + "CalculatedAttributeName":{ + "shape":"typeName", + "documentation":"The unique name of the calculated attribute.
" + }, + "DisplayName":{ + "shape":"displayName", + "documentation":"The display name of the calculated attribute.
" + }, + "Description":{ + "shape":"text", + "documentation":"The description of the calculated attribute.
" + }, + "AttributeDetails":{ + "shape":"AttributeDetails", + "documentation":"Mathematical expression and a list of attribute items specified in that expression.
" + }, + "Conditions":{ + "shape":"Conditions", + "documentation":"The conditions including range, object count, and threshold for the calculated attribute.
" + }, + "Statistic":{ + "shape":"Statistic", + "documentation":"The aggregation operation to perform for the calculated attribute.
" + }, + "CreatedAt":{ + "shape":"timestamp", + "documentation":"The timestamp of when the calculated attribute definition was created.
" + }, + "LastUpdatedAt":{ + "shape":"timestamp", + "documentation":"The timestamp of when the calculated attribute definition was most recently edited.
" + }, + "Tags":{ + "shape":"TagMap", + "documentation":"The tags used to organize, track, or control access for this resource.
" + } + } + }, "CreateDomainRequest":{ "type":"structure", "required":[ @@ -1279,6 +1547,32 @@ "max":256, "pattern":".*" }, + "DeleteCalculatedAttributeDefinitionRequest":{ + "type":"structure", + "required":[ + "DomainName", + "CalculatedAttributeName" + ], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
", + "location":"uri", + "locationName":"DomainName" + }, + "CalculatedAttributeName":{ + "shape":"typeName", + "documentation":"The unique name of the calculated attribute.
", + "location":"uri", + "locationName":"CalculatedAttributeName" + } + } + }, + "DeleteCalculatedAttributeDefinitionResponse":{ + "type":"structure", + "members":{ + } + }, "DeleteDomainRequest":{ "type":"structure", "required":["DomainName"], @@ -1777,6 +2071,117 @@ } } }, + "GetCalculatedAttributeDefinitionRequest":{ + "type":"structure", + "required":[ + "DomainName", + "CalculatedAttributeName" + ], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
", + "location":"uri", + "locationName":"DomainName" + }, + "CalculatedAttributeName":{ + "shape":"typeName", + "documentation":"The unique name of the calculated attribute.
", + "location":"uri", + "locationName":"CalculatedAttributeName" + } + } + }, + "GetCalculatedAttributeDefinitionResponse":{ + "type":"structure", + "members":{ + "CalculatedAttributeName":{ + "shape":"typeName", + "documentation":"The unique name of the calculated attribute.
" + }, + "DisplayName":{ + "shape":"displayName", + "documentation":"The display name of the calculated attribute.
" + }, + "Description":{ + "shape":"text", + "documentation":"The description of the calculated attribute.
" + }, + "CreatedAt":{ + "shape":"timestamp", + "documentation":"The timestamp of when the calculated attribute definition was created.
" + }, + "LastUpdatedAt":{ + "shape":"timestamp", + "documentation":"The timestamp of when the calculated attribute definition was most recently edited.
" + }, + "Statistic":{ + "shape":"Statistic", + "documentation":"The aggregation operation to perform for the calculated attribute.
" + }, + "Conditions":{ + "shape":"Conditions", + "documentation":"The conditions including range, object count, and threshold for the calculated attribute.
" + }, + "AttributeDetails":{ + "shape":"AttributeDetails", + "documentation":"Mathematical expression and a list of attribute items specified in that expression.
" + }, + "Tags":{ + "shape":"TagMap", + "documentation":"The tags used to organize, track, or control access for this resource.
" + } + } + }, + "GetCalculatedAttributeForProfileRequest":{ + "type":"structure", + "required":[ + "DomainName", + "ProfileId", + "CalculatedAttributeName" + ], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
", + "location":"uri", + "locationName":"DomainName" + }, + "ProfileId":{ + "shape":"uuid", + "documentation":"The unique identifier of a customer profile.
", + "location":"uri", + "locationName":"ProfileId" + }, + "CalculatedAttributeName":{ + "shape":"typeName", + "documentation":"The unique name of the calculated attribute.
", + "location":"uri", + "locationName":"CalculatedAttributeName" + } + } + }, + "GetCalculatedAttributeForProfileResponse":{ + "type":"structure", + "members":{ + "CalculatedAttributeName":{ + "shape":"typeName", + "documentation":"The unique name of the calculated attribute.
" + }, + "DisplayName":{ + "shape":"displayName", + "documentation":"The display name of the calculated attribute.
" + }, + "IsDataPartial":{ + "shape":"string1To255", + "documentation":"Indicates whether the calculated attribute’s value is based on partial data. If data is partial, it is set to true.
" + }, + "Value":{ + "shape":"string1To255", + "documentation":"The value of the calculated attribute.
" + } + } + }, "GetDomainRequest":{ "type":"structure", "required":["DomainName"], @@ -2445,6 +2850,141 @@ } } }, + "ListCalculatedAttributeDefinitionItem":{ + "type":"structure", + "members":{ + "CalculatedAttributeName":{ + "shape":"typeName", + "documentation":"The unique name of the calculated attribute.
" + }, + "DisplayName":{ + "shape":"displayName", + "documentation":"The display name of the calculated attribute.
" }, + "Description":{ + "shape":"text", + "documentation":"The description of the calculated attribute.
" }, + "CreatedAt":{ + "shape":"timestamp", + "documentation":"The timestamp of when the calculated attribute definition was created.
" + }, + "LastUpdatedAt":{ + "shape":"timestamp", + "documentation":"The timestamp of when the calculated attribute definition was most recently edited.
" + }, + "Tags":{ + "shape":"TagMap", + "documentation":"The tags used to organize, track, or control access for this resource.
" + } + }, + "documentation":"The details of a single calculated attribute definition.
" + }, + "ListCalculatedAttributeDefinitionsRequest":{ + "type":"structure", + "required":["DomainName"], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
", + "location":"uri", + "locationName":"DomainName" + }, + "NextToken":{ + "shape":"token", + "documentation":"The pagination token from the previous call to ListCalculatedAttributeDefinitions.
", + "location":"querystring", + "locationName":"next-token" + }, + "MaxResults":{ + "shape":"maxSize100", + "documentation":"The maximum number of calculated attribute definitions returned per page.
", + "location":"querystring", + "locationName":"max-results" + } + } + }, + "ListCalculatedAttributeDefinitionsResponse":{ + "type":"structure", + "members":{ + "Items":{ + "shape":"CalculatedAttributeDefinitionsList", + "documentation":"The list of calculated attribute definitions.
" + }, + "NextToken":{ + "shape":"token", + "documentation":"The pagination token from the previous call to ListCalculatedAttributeDefinitions.
" + } + } + }, + "ListCalculatedAttributeForProfileItem":{ + "type":"structure", + "members":{ + "CalculatedAttributeName":{ + "shape":"typeName", + "documentation":"The unique name of the calculated attribute.
" + }, + "DisplayName":{ + "shape":"displayName", + "documentation":"The display name of the calculated attribute.
" + }, + "IsDataPartial":{ + "shape":"string1To255", + "documentation":"Indicates whether the calculated attribute’s value is based on partial data. If data is partial, it is set to true.
" + }, + "Value":{ + "shape":"string1To255", + "documentation":"The value of the calculated attribute.
" + } + }, + "documentation":"The details of a single calculated attribute for a profile.
" + }, + "ListCalculatedAttributesForProfileRequest":{ + "type":"structure", + "required":[ + "DomainName", + "ProfileId" + ], + "members":{ + "NextToken":{ + "shape":"token", + "documentation":"The pagination token from the previous call to ListCalculatedAttributesForProfile.
", + "location":"querystring", + "locationName":"next-token" + }, + "MaxResults":{ + "shape":"maxSize100", + "documentation":"The maximum number of calculated attributes returned per page.
", + "location":"querystring", + "locationName":"max-results" + }, + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
", + "location":"uri", + "locationName":"DomainName" + }, + "ProfileId":{ + "shape":"uuid", + "documentation":"The unique identifier of a customer profile.
", + "location":"uri", + "locationName":"ProfileId" + } + } + }, + "ListCalculatedAttributesForProfileResponse":{ + "type":"structure", + "members":{ + "Items":{ + "shape":"CalculatedAttributesForProfileList", + "documentation":"The list of calculated attributes.
" + }, + "NextToken":{ + "shape":"token", + "documentation":"The pagination token from the previous call to ListCalculatedAttributesForProfile.
" + } + } + }, "ListDomainItem":{ "type":"structure", "required":[ @@ -2799,7 +3339,7 @@ }, "ObjectFilter":{ "shape":"ObjectFilter", - "documentation":"Applies a filter to the response to include profile objects with the specified index values. This filter is only supported for ObjectTypeName _asset, _case and _order.
" + "documentation":"Applies a filter to the response to include profile objects with the specified index values.
" } } }, @@ -3081,6 +3621,11 @@ "max":512, "pattern":"\\S+" }, + "ObjectCount":{ + "type":"integer", + "max":100, + "min":1 + }, "ObjectFilter":{ "type":"structure", "required":[ @@ -3090,14 +3635,14 @@ "members":{ "KeyName":{ "shape":"name", - "documentation":"A searchable identifier of a standard profile object. The predefined keys you can use to search for _asset include: _assetId, _assetName, _serialNumber. The predefined keys you can use to search for _case include: _caseId. The predefined keys you can use to search for _order include: _orderId.
" + "documentation":"A searchable identifier of a profile object. The predefined keys you can use to search for _asset include: _assetId, _assetName, and _serialNumber. The predefined keys you can use to search for _case include: _caseId. The predefined keys you can use to search for _order include: _orderId.
A list of key values.
" } }, - "documentation":"The filter applied to ListProfileObjects response to include profile objects with the specified index values. This filter is only supported for ObjectTypeName _asset, _case and _order.
" + "documentation":"The filter applied to ListProfileObjects response to include profile objects with the specified index values.
The amount of time of the specified unit.
" + }, + "Unit":{ + "shape":"Unit", + "documentation":"The unit of time.
" + } + }, + "documentation":"The relative time period over which data is included in the aggregation.
" + }, "ResourceNotFoundException":{ "type":"structure", "members":{ @@ -3883,6 +4455,19 @@ "type":"list", "member":{"shape":"StandardIdentifier"} }, + "Statistic":{ + "type":"string", + "enum":[ + "FIRST_OCCURRENCE", + "LAST_OCCURRENCE", + "COUNT", + "SUM", + "MINIMUM", + "MAXIMUM", + "AVERAGE", + "MAX_OCCURRENCE" + ] + }, "Status":{ "type":"string", "enum":[ @@ -3998,6 +4583,24 @@ "type":"list", "member":{"shape":"Task"} }, + "Threshold":{ + "type":"structure", + "required":[ + "Value", + "Operator" + ], + "members":{ + "Value":{ + "shape":"string1To255", + "documentation":"The value of the threshold.
" + }, + "Operator":{ + "shape":"Operator", + "documentation":"The operator of the threshold.
" + } + }, + "documentation":"The threshold for the calculated attribute.
" + }, "ThrottlingException":{ "type":"structure", "members":{ @@ -4045,6 +4648,10 @@ "OnDemand" ] }, + "Unit":{ + "type":"string", + "enum":["DAYS"] + }, "UntagResourceRequest":{ "type":"structure", "required":[ @@ -4122,6 +4729,80 @@ "key":{"shape":"string1To255"}, "value":{"shape":"string0To255"} }, + "UpdateCalculatedAttributeDefinitionRequest":{ + "type":"structure", + "required":[ + "DomainName", + "CalculatedAttributeName" + ], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
", + "location":"uri", + "locationName":"DomainName" + }, + "CalculatedAttributeName":{ + "shape":"typeName", + "documentation":"The unique name of the calculated attribute.
", + "location":"uri", + "locationName":"CalculatedAttributeName" + }, + "DisplayName":{ + "shape":"displayName", + "documentation":"The display name of the calculated attribute.
" + }, + "Description":{ + "shape":"text", + "documentation":"The description of the calculated attribute.
" + }, + "Conditions":{ + "shape":"Conditions", + "documentation":"The conditions including range, object count, and threshold for the calculated attribute.
" + } + } + }, + "UpdateCalculatedAttributeDefinitionResponse":{ + "type":"structure", + "members":{ + "CalculatedAttributeName":{ + "shape":"typeName", + "documentation":"The unique name of the calculated attribute.
" + }, + "DisplayName":{ + "shape":"displayName", + "documentation":"The display name of the calculated attribute.
" + }, + "Description":{ + "shape":"text", + "documentation":"The description of the calculated attribute.
" + }, + "CreatedAt":{ + "shape":"timestamp", + "documentation":"The timestamp of when the calculated attribute definition was created.
" + }, + "LastUpdatedAt":{ + "shape":"timestamp", + "documentation":"The timestamp of when the calculated attribute definition was most recently edited.
" + }, + "Statistic":{ + "shape":"Statistic", + "documentation":"The aggregation operation to perform for the calculated attribute.
" + }, + "Conditions":{ + "shape":"Conditions", + "documentation":"The conditions including range, object count, and threshold for the calculated attribute.
" + }, + "AttributeDetails":{ + "shape":"AttributeDetails", + "documentation":"The mathematical expression and a list of attribute items specified in that expression.
" + }, + "Tags":{ + "shape":"TagMap", + "documentation":"The tags used to organize, track, or control access for this resource.
" + } + } + }, "UpdateDomainRequest":{ "type":"structure", "required":["DomainName"], @@ -4317,6 +4998,11 @@ } } }, + "Value":{ + "type":"integer", + "max":366, + "min":1 + }, "WorkflowAttributes":{ "type":"structure", "members":{ @@ -4395,7 +5081,19 @@ "max":4, "min":1 }, + "attributeName":{ + "type":"string", + "max":64, + "min":1, + "pattern":"^[a-zA-Z0-9_.-]+$" + }, "boolean":{"type":"boolean"}, + "displayName":{ + "type":"string", + "max":255, + "min":1, + "pattern":"^[a-zA-Z_][a-zA-Z_0-9-\\s]*$" + }, "encryptionKey":{ "type":"string", "max":255, From e61ab44ca06a7f7c8bcb0af51c21c510f089e768 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 1 Jun 2023 18:11:00 +0000 Subject: [PATCH 016/317] Updated endpoints.json and partitions.json. --- .../feature-AWSSDKforJavav2-0443982.json | 6 +++ .../regions/internal/region/endpoints.json | 37 ++++++++++++++++++- 2 files changed, 42 insertions(+), 1 deletion(-) create mode 100644 .changes/next-release/feature-AWSSDKforJavav2-0443982.json diff --git a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json new file mode 100644 index 000000000000..e5b5ee3ca5e3 --- /dev/null +++ b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated endpoint and partition metadata." 
+} diff --git a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json index a302f64d9758..b7e9f016fdf7 100644 --- a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json +++ b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json @@ -2213,6 +2213,7 @@ "ap-southeast-1" : { }, "ap-southeast-2" : { }, "ap-southeast-3" : { }, + "ap-southeast-4" : { }, "ca-central-1" : { }, "eu-central-1" : { }, "eu-central-2" : { }, @@ -13997,6 +13998,8 @@ "securitylake" : { "endpoints" : { "ap-northeast-1" : { }, + "ap-northeast-2" : { }, + "ap-south-1" : { }, "ap-southeast-1" : { }, "ap-southeast-2" : { }, "eu-central-1" : { }, @@ -14005,6 +14008,7 @@ "sa-east-1" : { }, "us-east-1" : { }, "us-east-2" : { }, + "us-west-1" : { }, "us-west-2" : { } } }, @@ -21529,6 +21533,36 @@ "us-gov-west-1" : { } } }, + "mgn" : { + "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "deprecated" : true, + "hostname" : "mgn-fips.us-gov-east-1.amazonaws.com" + }, + "fips-us-gov-west-1" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "deprecated" : true, + "hostname" : "mgn-fips.us-gov-west-1.amazonaws.com" + }, + "us-gov-east-1" : { + "variants" : [ { + "hostname" : "mgn-fips.us-gov-east-1.amazonaws.com", + "tags" : [ "fips" ] + } ] + }, + "us-gov-west-1" : { + "variants" : [ { + "hostname" : "mgn-fips.us-gov-west-1.amazonaws.com", + "tags" : [ "fips" ] + } ] + } + } + }, "models.lex" : { "defaults" : { "credentialScope" : { @@ -23770,7 +23804,8 @@ }, "tagging" : { "endpoints" : { - "us-iso-east-1" : { } + "us-iso-east-1" : { }, + "us-iso-west-1" : { } } }, "transcribe" : { From 2440a9ceca7e13ef48caed4ab70610f7b942204b Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 1 Jun 2023 18:12:02 +0000 Subject: [PATCH 017/317] Release 
2.20.77. Updated CHANGELOG.md, README.md and all pom.xml. --- .changes/2.20.77.json | 48 +++++++++++++++++++ .../feature-AWSSDKforJavav2-0443982.json | 6 --- .../feature-AWSWAFV2-d5198fc.json | 6 --- .../feature-AlexaForBusiness-5f643cc.json | 6 --- .../feature-AmazonAppflow-be53087.json | 6 --- ...AmazonConnectCustomerProfiles-c9ad6e3.json | 6 --- ...AmazonInteractiveVideoService-643eb10.json | 6 --- ...eature-AmazonSageMakerService-046c91e.json | 6 --- CHANGELOG.md | 29 +++++++++++ README.md | 8 ++-- archetypes/archetype-app-quickstart/pom.xml | 2 +- archetypes/archetype-lambda/pom.xml | 2 +- archetypes/archetype-tools/pom.xml | 2 +- archetypes/pom.xml | 2 +- aws-sdk-java/pom.xml | 2 +- bom-internal/pom.xml | 2 +- bom/pom.xml | 2 +- bundle/pom.xml | 2 +- codegen-lite-maven-plugin/pom.xml | 2 +- codegen-lite/pom.xml | 2 +- codegen-maven-plugin/pom.xml | 2 +- codegen/pom.xml | 2 +- core/annotations/pom.xml | 2 +- core/arns/pom.xml | 2 +- core/auth-crt/pom.xml | 2 +- core/auth/pom.xml | 2 +- core/aws-core/pom.xml | 2 +- core/crt-core/pom.xml | 2 +- core/endpoints-spi/pom.xml | 2 +- core/imds/pom.xml | 2 +- core/json-utils/pom.xml | 2 +- core/metrics-spi/pom.xml | 2 +- core/pom.xml | 2 +- core/profiles/pom.xml | 2 +- core/protocols/aws-cbor-protocol/pom.xml | 2 +- core/protocols/aws-json-protocol/pom.xml | 2 +- core/protocols/aws-query-protocol/pom.xml | 2 +- core/protocols/aws-xml-protocol/pom.xml | 2 +- core/protocols/pom.xml | 2 +- core/protocols/protocol-core/pom.xml | 2 +- core/regions/pom.xml | 2 +- core/sdk-core/pom.xml | 2 +- http-client-spi/pom.xml | 2 +- http-clients/apache-client/pom.xml | 2 +- http-clients/aws-crt-client/pom.xml | 2 +- http-clients/netty-nio-client/pom.xml | 2 +- http-clients/pom.xml | 2 +- http-clients/url-connection-client/pom.xml | 2 +- .../cloudwatch-metric-publisher/pom.xml | 2 +- metric-publishers/pom.xml | 2 +- pom.xml | 2 +- release-scripts/pom.xml | 2 +- services-custom/dynamodb-enhanced/pom.xml | 2 +- services-custom/pom.xml | 2 
+- services-custom/s3-transfer-manager/pom.xml | 2 +- services/accessanalyzer/pom.xml | 2 +- services/account/pom.xml | 2 +- services/acm/pom.xml | 2 +- services/acmpca/pom.xml | 2 +- services/alexaforbusiness/pom.xml | 2 +- services/amp/pom.xml | 2 +- services/amplify/pom.xml | 2 +- services/amplifybackend/pom.xml | 2 +- services/amplifyuibuilder/pom.xml | 2 +- services/apigateway/pom.xml | 2 +- services/apigatewaymanagementapi/pom.xml | 2 +- services/apigatewayv2/pom.xml | 2 +- services/appconfig/pom.xml | 2 +- services/appconfigdata/pom.xml | 2 +- services/appflow/pom.xml | 2 +- services/appintegrations/pom.xml | 2 +- services/applicationautoscaling/pom.xml | 2 +- services/applicationcostprofiler/pom.xml | 2 +- services/applicationdiscovery/pom.xml | 2 +- services/applicationinsights/pom.xml | 2 +- services/appmesh/pom.xml | 2 +- services/apprunner/pom.xml | 2 +- services/appstream/pom.xml | 2 +- services/appsync/pom.xml | 2 +- services/arczonalshift/pom.xml | 2 +- services/athena/pom.xml | 2 +- services/auditmanager/pom.xml | 2 +- services/autoscaling/pom.xml | 2 +- services/autoscalingplans/pom.xml | 2 +- services/backup/pom.xml | 2 +- services/backupgateway/pom.xml | 2 +- services/backupstorage/pom.xml | 2 +- services/batch/pom.xml | 2 +- services/billingconductor/pom.xml | 2 +- services/braket/pom.xml | 2 +- services/budgets/pom.xml | 2 +- services/chime/pom.xml | 2 +- services/chimesdkidentity/pom.xml | 2 +- services/chimesdkmediapipelines/pom.xml | 2 +- services/chimesdkmeetings/pom.xml | 2 +- services/chimesdkmessaging/pom.xml | 2 +- services/chimesdkvoice/pom.xml | 2 +- services/cleanrooms/pom.xml | 2 +- services/cloud9/pom.xml | 2 +- services/cloudcontrol/pom.xml | 2 +- services/clouddirectory/pom.xml | 2 +- services/cloudformation/pom.xml | 2 +- services/cloudfront/pom.xml | 2 +- services/cloudhsm/pom.xml | 2 +- services/cloudhsmv2/pom.xml | 2 +- services/cloudsearch/pom.xml | 2 +- services/cloudsearchdomain/pom.xml | 2 +- services/cloudtrail/pom.xml | 
2 +- services/cloudtraildata/pom.xml | 2 +- services/cloudwatch/pom.xml | 2 +- services/cloudwatchevents/pom.xml | 2 +- services/cloudwatchlogs/pom.xml | 2 +- services/codeartifact/pom.xml | 2 +- services/codebuild/pom.xml | 2 +- services/codecatalyst/pom.xml | 2 +- services/codecommit/pom.xml | 2 +- services/codedeploy/pom.xml | 2 +- services/codeguruprofiler/pom.xml | 2 +- services/codegurureviewer/pom.xml | 2 +- services/codepipeline/pom.xml | 2 +- services/codestar/pom.xml | 2 +- services/codestarconnections/pom.xml | 2 +- services/codestarnotifications/pom.xml | 2 +- services/cognitoidentity/pom.xml | 2 +- services/cognitoidentityprovider/pom.xml | 2 +- services/cognitosync/pom.xml | 2 +- services/comprehend/pom.xml | 2 +- services/comprehendmedical/pom.xml | 2 +- services/computeoptimizer/pom.xml | 2 +- services/config/pom.xml | 2 +- services/connect/pom.xml | 2 +- services/connectcampaigns/pom.xml | 2 +- services/connectcases/pom.xml | 2 +- services/connectcontactlens/pom.xml | 2 +- services/connectparticipant/pom.xml | 2 +- services/controltower/pom.xml | 2 +- services/costandusagereport/pom.xml | 2 +- services/costexplorer/pom.xml | 2 +- services/customerprofiles/pom.xml | 2 +- services/databasemigration/pom.xml | 2 +- services/databrew/pom.xml | 2 +- services/dataexchange/pom.xml | 2 +- services/datapipeline/pom.xml | 2 +- services/datasync/pom.xml | 2 +- services/dax/pom.xml | 2 +- services/detective/pom.xml | 2 +- services/devicefarm/pom.xml | 2 +- services/devopsguru/pom.xml | 2 +- services/directconnect/pom.xml | 2 +- services/directory/pom.xml | 2 +- services/dlm/pom.xml | 2 +- services/docdb/pom.xml | 2 +- services/docdbelastic/pom.xml | 2 +- services/drs/pom.xml | 2 +- services/dynamodb/pom.xml | 2 +- services/ebs/pom.xml | 2 +- services/ec2/pom.xml | 2 +- services/ec2instanceconnect/pom.xml | 2 +- services/ecr/pom.xml | 2 +- services/ecrpublic/pom.xml | 2 +- services/ecs/pom.xml | 2 +- services/efs/pom.xml | 2 +- services/eks/pom.xml | 2 +- 
services/elasticache/pom.xml | 2 +- services/elasticbeanstalk/pom.xml | 2 +- services/elasticinference/pom.xml | 2 +- services/elasticloadbalancing/pom.xml | 2 +- services/elasticloadbalancingv2/pom.xml | 2 +- services/elasticsearch/pom.xml | 2 +- services/elastictranscoder/pom.xml | 2 +- services/emr/pom.xml | 2 +- services/emrcontainers/pom.xml | 2 +- services/emrserverless/pom.xml | 2 +- services/eventbridge/pom.xml | 2 +- services/evidently/pom.xml | 2 +- services/finspace/pom.xml | 2 +- services/finspacedata/pom.xml | 2 +- services/firehose/pom.xml | 2 +- services/fis/pom.xml | 2 +- services/fms/pom.xml | 2 +- services/forecast/pom.xml | 2 +- services/forecastquery/pom.xml | 2 +- services/frauddetector/pom.xml | 2 +- services/fsx/pom.xml | 2 +- services/gamelift/pom.xml | 2 +- services/gamesparks/pom.xml | 2 +- services/glacier/pom.xml | 2 +- services/globalaccelerator/pom.xml | 2 +- services/glue/pom.xml | 2 +- services/grafana/pom.xml | 2 +- services/greengrass/pom.xml | 2 +- services/greengrassv2/pom.xml | 2 +- services/groundstation/pom.xml | 2 +- services/guardduty/pom.xml | 2 +- services/health/pom.xml | 2 +- services/healthlake/pom.xml | 2 +- services/honeycode/pom.xml | 2 +- services/iam/pom.xml | 2 +- services/identitystore/pom.xml | 2 +- services/imagebuilder/pom.xml | 2 +- services/inspector/pom.xml | 2 +- services/inspector2/pom.xml | 2 +- services/internetmonitor/pom.xml | 2 +- services/iot/pom.xml | 2 +- services/iot1clickdevices/pom.xml | 2 +- services/iot1clickprojects/pom.xml | 2 +- services/iotanalytics/pom.xml | 2 +- services/iotdataplane/pom.xml | 2 +- services/iotdeviceadvisor/pom.xml | 2 +- services/iotevents/pom.xml | 2 +- services/ioteventsdata/pom.xml | 2 +- services/iotfleethub/pom.xml | 2 +- services/iotfleetwise/pom.xml | 2 +- services/iotjobsdataplane/pom.xml | 2 +- services/iotroborunner/pom.xml | 2 +- services/iotsecuretunneling/pom.xml | 2 +- services/iotsitewise/pom.xml | 2 +- services/iotthingsgraph/pom.xml | 2 +- 
services/iottwinmaker/pom.xml | 2 +- services/iotwireless/pom.xml | 2 +- services/ivs/pom.xml | 2 +- services/ivschat/pom.xml | 2 +- services/ivsrealtime/pom.xml | 2 +- services/kafka/pom.xml | 2 +- services/kafkaconnect/pom.xml | 2 +- services/kendra/pom.xml | 2 +- services/kendraranking/pom.xml | 2 +- services/keyspaces/pom.xml | 2 +- services/kinesis/pom.xml | 2 +- services/kinesisanalytics/pom.xml | 2 +- services/kinesisanalyticsv2/pom.xml | 2 +- services/kinesisvideo/pom.xml | 2 +- services/kinesisvideoarchivedmedia/pom.xml | 2 +- services/kinesisvideomedia/pom.xml | 2 +- services/kinesisvideosignaling/pom.xml | 2 +- services/kinesisvideowebrtcstorage/pom.xml | 2 +- services/kms/pom.xml | 2 +- services/lakeformation/pom.xml | 2 +- services/lambda/pom.xml | 2 +- services/lexmodelbuilding/pom.xml | 2 +- services/lexmodelsv2/pom.xml | 2 +- services/lexruntime/pom.xml | 2 +- services/lexruntimev2/pom.xml | 2 +- services/licensemanager/pom.xml | 2 +- .../licensemanagerlinuxsubscriptions/pom.xml | 2 +- .../licensemanagerusersubscriptions/pom.xml | 2 +- services/lightsail/pom.xml | 2 +- services/location/pom.xml | 2 +- services/lookoutequipment/pom.xml | 2 +- services/lookoutmetrics/pom.xml | 2 +- services/lookoutvision/pom.xml | 2 +- services/m2/pom.xml | 2 +- services/machinelearning/pom.xml | 2 +- services/macie/pom.xml | 2 +- services/macie2/pom.xml | 2 +- services/managedblockchain/pom.xml | 2 +- services/marketplacecatalog/pom.xml | 2 +- services/marketplacecommerceanalytics/pom.xml | 2 +- services/marketplaceentitlement/pom.xml | 2 +- services/marketplacemetering/pom.xml | 2 +- services/mediaconnect/pom.xml | 2 +- services/mediaconvert/pom.xml | 2 +- services/medialive/pom.xml | 2 +- services/mediapackage/pom.xml | 2 +- services/mediapackagev2/pom.xml | 2 +- services/mediapackagevod/pom.xml | 2 +- services/mediastore/pom.xml | 2 +- services/mediastoredata/pom.xml | 2 +- services/mediatailor/pom.xml | 2 +- services/memorydb/pom.xml | 2 +- services/mgn/pom.xml | 
2 +- services/migrationhub/pom.xml | 2 +- services/migrationhubconfig/pom.xml | 2 +- services/migrationhuborchestrator/pom.xml | 2 +- services/migrationhubrefactorspaces/pom.xml | 2 +- services/migrationhubstrategy/pom.xml | 2 +- services/mobile/pom.xml | 2 +- services/mq/pom.xml | 2 +- services/mturk/pom.xml | 2 +- services/mwaa/pom.xml | 2 +- services/neptune/pom.xml | 2 +- services/networkfirewall/pom.xml | 2 +- services/networkmanager/pom.xml | 2 +- services/nimble/pom.xml | 2 +- services/oam/pom.xml | 2 +- services/omics/pom.xml | 2 +- services/opensearch/pom.xml | 2 +- services/opensearchserverless/pom.xml | 2 +- services/opsworks/pom.xml | 2 +- services/opsworkscm/pom.xml | 2 +- services/organizations/pom.xml | 2 +- services/osis/pom.xml | 2 +- services/outposts/pom.xml | 2 +- services/panorama/pom.xml | 2 +- services/personalize/pom.xml | 2 +- services/personalizeevents/pom.xml | 2 +- services/personalizeruntime/pom.xml | 2 +- services/pi/pom.xml | 2 +- services/pinpoint/pom.xml | 2 +- services/pinpointemail/pom.xml | 2 +- services/pinpointsmsvoice/pom.xml | 2 +- services/pinpointsmsvoicev2/pom.xml | 2 +- services/pipes/pom.xml | 2 +- services/polly/pom.xml | 2 +- services/pom.xml | 2 +- services/pricing/pom.xml | 2 +- services/privatenetworks/pom.xml | 2 +- services/proton/pom.xml | 2 +- services/qldb/pom.xml | 2 +- services/qldbsession/pom.xml | 2 +- services/quicksight/pom.xml | 2 +- services/ram/pom.xml | 2 +- services/rbin/pom.xml | 2 +- services/rds/pom.xml | 2 +- services/rdsdata/pom.xml | 2 +- services/redshift/pom.xml | 2 +- services/redshiftdata/pom.xml | 2 +- services/redshiftserverless/pom.xml | 2 +- services/rekognition/pom.xml | 2 +- services/resiliencehub/pom.xml | 2 +- services/resourceexplorer2/pom.xml | 2 +- services/resourcegroups/pom.xml | 2 +- services/resourcegroupstaggingapi/pom.xml | 2 +- services/robomaker/pom.xml | 2 +- services/rolesanywhere/pom.xml | 2 +- services/route53/pom.xml | 2 +- services/route53domains/pom.xml | 2 +- 
services/route53recoverycluster/pom.xml | 2 +- services/route53recoverycontrolconfig/pom.xml | 2 +- services/route53recoveryreadiness/pom.xml | 2 +- services/route53resolver/pom.xml | 2 +- services/rum/pom.xml | 2 +- services/s3/pom.xml | 2 +- services/s3control/pom.xml | 2 +- services/s3outposts/pom.xml | 2 +- services/sagemaker/pom.xml | 2 +- services/sagemakera2iruntime/pom.xml | 2 +- services/sagemakeredge/pom.xml | 2 +- services/sagemakerfeaturestoreruntime/pom.xml | 2 +- services/sagemakergeospatial/pom.xml | 2 +- services/sagemakermetrics/pom.xml | 2 +- services/sagemakerruntime/pom.xml | 2 +- services/savingsplans/pom.xml | 2 +- services/scheduler/pom.xml | 2 +- services/schemas/pom.xml | 2 +- services/secretsmanager/pom.xml | 2 +- services/securityhub/pom.xml | 2 +- services/securitylake/pom.xml | 2 +- .../serverlessapplicationrepository/pom.xml | 2 +- services/servicecatalog/pom.xml | 2 +- services/servicecatalogappregistry/pom.xml | 2 +- services/servicediscovery/pom.xml | 2 +- services/servicequotas/pom.xml | 2 +- services/ses/pom.xml | 2 +- services/sesv2/pom.xml | 2 +- services/sfn/pom.xml | 2 +- services/shield/pom.xml | 2 +- services/signer/pom.xml | 2 +- services/simspaceweaver/pom.xml | 2 +- services/sms/pom.xml | 2 +- services/snowball/pom.xml | 2 +- services/snowdevicemanagement/pom.xml | 2 +- services/sns/pom.xml | 2 +- services/sqs/pom.xml | 2 +- services/ssm/pom.xml | 2 +- services/ssmcontacts/pom.xml | 2 +- services/ssmincidents/pom.xml | 2 +- services/ssmsap/pom.xml | 2 +- services/sso/pom.xml | 2 +- services/ssoadmin/pom.xml | 2 +- services/ssooidc/pom.xml | 2 +- services/storagegateway/pom.xml | 2 +- services/sts/pom.xml | 2 +- services/support/pom.xml | 2 +- services/supportapp/pom.xml | 2 +- services/swf/pom.xml | 2 +- services/synthetics/pom.xml | 2 +- services/textract/pom.xml | 2 +- services/timestreamquery/pom.xml | 2 +- services/timestreamwrite/pom.xml | 2 +- services/tnb/pom.xml | 2 +- services/transcribe/pom.xml | 2 +- 
services/transcribestreaming/pom.xml | 2 +- services/transfer/pom.xml | 2 +- services/translate/pom.xml | 2 +- services/voiceid/pom.xml | 2 +- services/vpclattice/pom.xml | 2 +- services/waf/pom.xml | 2 +- services/wafv2/pom.xml | 2 +- services/wellarchitected/pom.xml | 2 +- services/wisdom/pom.xml | 2 +- services/workdocs/pom.xml | 2 +- services/worklink/pom.xml | 2 +- services/workmail/pom.xml | 2 +- services/workmailmessageflow/pom.xml | 2 +- services/workspaces/pom.xml | 2 +- services/workspacesweb/pom.xml | 2 +- services/xray/pom.xml | 2 +- test/auth-tests/pom.xml | 2 +- test/codegen-generated-classes-test/pom.xml | 2 +- test/http-client-tests/pom.xml | 2 +- test/module-path-tests/pom.xml | 2 +- test/protocol-tests-core/pom.xml | 2 +- test/protocol-tests/pom.xml | 2 +- test/region-testing/pom.xml | 2 +- test/ruleset-testing-core/pom.xml | 2 +- test/s3-benchmarks/pom.xml | 2 +- test/sdk-benchmarks/pom.xml | 2 +- test/sdk-native-image-test/pom.xml | 2 +- test/service-test-utils/pom.xml | 2 +- test/stability-tests/pom.xml | 2 +- test/test-utils/pom.xml | 2 +- test/tests-coverage-reporting/pom.xml | 2 +- third-party/pom.xml | 2 +- third-party/third-party-jackson-core/pom.xml | 2 +- .../pom.xml | 2 +- utils/pom.xml | 2 +- 417 files changed, 488 insertions(+), 453 deletions(-) create mode 100644 .changes/2.20.77.json delete mode 100644 .changes/next-release/feature-AWSSDKforJavav2-0443982.json delete mode 100644 .changes/next-release/feature-AWSWAFV2-d5198fc.json delete mode 100644 .changes/next-release/feature-AlexaForBusiness-5f643cc.json delete mode 100644 .changes/next-release/feature-AmazonAppflow-be53087.json delete mode 100644 .changes/next-release/feature-AmazonConnectCustomerProfiles-c9ad6e3.json delete mode 100644 .changes/next-release/feature-AmazonInteractiveVideoService-643eb10.json delete mode 100644 .changes/next-release/feature-AmazonSageMakerService-046c91e.json diff --git a/.changes/2.20.77.json b/.changes/2.20.77.json new file mode 100644 index 
000000000000..9d11b565f6e7 --- /dev/null +++ b/.changes/2.20.77.json @@ -0,0 +1,48 @@ +{ + "version": "2.20.77", + "date": "2023-06-01", + "entries": [ + { + "type": "feature", + "category": "AWS WAFV2", + "contributor": "", + "description": "Corrected the information for the header order FieldToMatch setting" + }, + { + "type": "feature", + "category": "Alexa For Business", + "contributor": "", + "description": "Alexa for Business has been deprecated and is no longer supported." + }, + { + "type": "feature", + "category": "Amazon Appflow", + "contributor": "", + "description": "Added ability to select DataTransferApiType for DescribeConnector and CreateFlow requests when using Async supported connectors. Added supportedDataTransferType to DescribeConnector/DescribeConnectors/ListConnector response." + }, + { + "type": "feature", + "category": "Amazon Connect Customer Profiles", + "contributor": "", + "description": "This release introduces calculated attribute related APIs." + }, + { + "type": "feature", + "category": "Amazon Interactive Video Service", + "contributor": "", + "description": "API Update for IVS Advanced Channel type" + }, + { + "type": "feature", + "category": "Amazon SageMaker Service", + "contributor": "", + "description": "Amazon Sagemaker Autopilot adds support for Parquet file input to NLP text classification jobs." + }, + { + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated endpoint and partition metadata." + } + ] +} \ No newline at end of file diff --git a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json deleted file mode 100644 index e5b5ee3ca5e3..000000000000 --- a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS SDK for Java v2", - "contributor": "", - "description": "Updated endpoint and partition metadata." 
-} diff --git a/.changes/next-release/feature-AWSWAFV2-d5198fc.json b/.changes/next-release/feature-AWSWAFV2-d5198fc.json deleted file mode 100644 index 5ab2fe8d4d26..000000000000 --- a/.changes/next-release/feature-AWSWAFV2-d5198fc.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS WAFV2", - "contributor": "", - "description": "Corrected the information for the header order FieldToMatch setting" -} diff --git a/.changes/next-release/feature-AlexaForBusiness-5f643cc.json b/.changes/next-release/feature-AlexaForBusiness-5f643cc.json deleted file mode 100644 index 9f7c8e9ecf6f..000000000000 --- a/.changes/next-release/feature-AlexaForBusiness-5f643cc.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Alexa For Business", - "contributor": "", - "description": "Alexa for Business has been deprecated and is no longer supported." -} diff --git a/.changes/next-release/feature-AmazonAppflow-be53087.json b/.changes/next-release/feature-AmazonAppflow-be53087.json deleted file mode 100644 index 065626b08b41..000000000000 --- a/.changes/next-release/feature-AmazonAppflow-be53087.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Appflow", - "contributor": "", - "description": "Added ability to select DataTransferApiType for DescribeConnector and CreateFlow requests when using Async supported connectors. Added supportedDataTransferType to DescribeConnector/DescribeConnectors/ListConnector response." -} diff --git a/.changes/next-release/feature-AmazonConnectCustomerProfiles-c9ad6e3.json b/.changes/next-release/feature-AmazonConnectCustomerProfiles-c9ad6e3.json deleted file mode 100644 index 375c41bbf350..000000000000 --- a/.changes/next-release/feature-AmazonConnectCustomerProfiles-c9ad6e3.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Connect Customer Profiles", - "contributor": "", - "description": "This release introduces calculated attribute related APIs." 
-} diff --git a/.changes/next-release/feature-AmazonInteractiveVideoService-643eb10.json b/.changes/next-release/feature-AmazonInteractiveVideoService-643eb10.json deleted file mode 100644 index c315a003d2b6..000000000000 --- a/.changes/next-release/feature-AmazonInteractiveVideoService-643eb10.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Interactive Video Service", - "contributor": "", - "description": "API Update for IVS Advanced Channel type" -} diff --git a/.changes/next-release/feature-AmazonSageMakerService-046c91e.json b/.changes/next-release/feature-AmazonSageMakerService-046c91e.json deleted file mode 100644 index 77975a930be3..000000000000 --- a/.changes/next-release/feature-AmazonSageMakerService-046c91e.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon SageMaker Service", - "contributor": "", - "description": "Amazon Sagemaker Autopilot adds support for Parquet file input to NLP text classification jobs." -} diff --git a/CHANGELOG.md b/CHANGELOG.md index f8036466c5c1..ffaa5391a9fc 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,32 @@ +# __2.20.77__ __2023-06-01__ +## __AWS SDK for Java v2__ + - ### Features + - Updated endpoint and partition metadata. + +## __AWS WAFV2__ + - ### Features + - Corrected the information for the header order FieldToMatch setting + +## __Alexa For Business__ + - ### Features + - Alexa for Business has been deprecated and is no longer supported. + +## __Amazon Appflow__ + - ### Features + - Added ability to select DataTransferApiType for DescribeConnector and CreateFlow requests when using Async supported connectors. Added supportedDataTransferType to DescribeConnector/DescribeConnectors/ListConnector response. + +## __Amazon Connect Customer Profiles__ + - ### Features + - This release introduces calculated attribute related APIs. 
+ +## __Amazon Interactive Video Service__ + - ### Features + - API Update for IVS Advanced Channel type + +## __Amazon SageMaker Service__ + - ### Features + - Amazon Sagemaker Autopilot adds support for Parquet file input to NLP text classification jobs. + # __2.20.76__ __2023-05-31__ ## __AWS Config__ - ### Features diff --git a/README.md b/README.md index 34ebf869c1d3..0f41d0812cf3 100644 --- a/README.md +++ b/README.md @@ -52,7 +52,7 @@ To automatically manage module versions (currently all modules have the same verDeletes the specified WebACL.
You can only use this if ManagedByFirewallManager is false in the specified WebACL.
Before deleting any web ACL, first disassociate it from all resources.
To retrieve a list of the resources that are associated with a web ACL, use the following calls:
For regional resources, call ListResourcesForWebACL.
For Amazon CloudFront distributions, use the CloudFront call ListDistributionsByWebACLId. For information, see ListDistributionsByWebACLId in the Amazon CloudFront API Reference.
To disassociate a resource from a web ACL, use the following calls:
For regional resources, call DisassociateWebACL.
For Amazon CloudFront distributions, provide an empty web ACL ID in the CloudFront call UpdateDistribution. For information, see UpdateDistribution in the Amazon CloudFront API Reference.
Provides high-level information for the Amazon Web Services Managed Rules rule groups and Amazon Web Services Marketplace managed rule groups.
" + }, + "DescribeManagedProductsByVendor":{ + "name":"DescribeManagedProductsByVendor", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeManagedProductsByVendorRequest"}, + "output":{"shape":"DescribeManagedProductsByVendorResponse"}, + "errors":[ + {"shape":"WAFInvalidOperationException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidParameterException"} + ], + "documentation":"Provides high-level information for the managed rule groups owned by a specific vendor.
" + }, "DescribeManagedRuleGroup":{ "name":"DescribeManagedRuleGroup", "http":{ @@ -2100,6 +2129,51 @@ "members":{ } }, + "DescribeAllManagedProductsRequest":{ + "type":"structure", + "required":["Scope"], + "members":{ + "Scope":{ + "shape":"Scope", + "documentation":"Specifies whether this is for an Amazon CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB), an Amazon API Gateway REST API, an AppSync GraphQL API, an Amazon Cognito user pool, an App Runner service, or an Amazon Web Services Verified Access instance.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1.
API and SDKs - For all calls, use the Region endpoint us-east-1.
High-level information for the Amazon Web Services Managed Rules rule groups and Amazon Web Services Marketplace managed rule groups.
" + } + } + }, + "DescribeManagedProductsByVendorRequest":{ + "type":"structure", + "required":[ + "VendorName", + "Scope" + ], + "members":{ + "VendorName":{ + "shape":"VendorName", + "documentation":"The name of the managed rule group vendor. You use this, along with the rule group name, to identify a rule group.
" + }, + "Scope":{ + "shape":"Scope", + "documentation":"Specifies whether this is for an Amazon CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB), an Amazon API Gateway REST API, an AppSync GraphQL API, an Amazon Cognito user pool, an App Runner service, or an Amazon Web Services Verified Access instance.
To work with CloudFront, you must also specify the Region US East (N. Virginia) as follows:
CLI - Specify the Region when you use the CloudFront scope: --scope=CLOUDFRONT --region=us-east-1.
API and SDKs - For all calls, use the Region endpoint us-east-1.
High-level information for the managed rule groups owned by the specified vendor.
" + } + } + }, "DescribeManagedRuleGroupRequest":{ "type":"structure", "required":[ @@ -2110,7 +2184,7 @@ "members":{ "VendorName":{ "shape":"VendorName", - "documentation":"The name of the managed rule group vendor. You use this, along with the rule group name, to identify the rule group.
" + "documentation":"The name of the managed rule group vendor. You use this, along with the rule group name, to identify a rule group.
" }, "Name":{ "shape":"EntityName", @@ -2135,7 +2209,7 @@ }, "SnsTopicArn":{ "shape":"ResourceArn", - "documentation":"The Amazon resource name (ARN) of the Amazon Simple Notification Service SNS topic that's used to record changes to the managed rule group. You can subscribe to the SNS topic to receive notifications when the managed rule group is modified, such as for new versions and for version expiration. For more information, see the Amazon Simple Notification Service Developer Guide.
" + "documentation":"The Amazon resource name (ARN) of the Amazon Simple Notification Service SNS topic that's used to provide notification of changes to the managed rule group. You can subscribe to the SNS topic to receive notifications when the managed rule group is modified, such as for new versions and for version expiration. For more information, see the Amazon Simple Notification Service Developer Guide.
" }, "Capacity":{ "shape":"CapacityUnit", @@ -2289,7 +2363,7 @@ }, "HeaderOrder":{ "shape":"HeaderOrder", - "documentation":"Inspect a string containing the list of the request's header names, ordered as they appear in the web request that WAF receives for inspection. WAF generates the string and then uses that as the field to match component in its inspection. WAF separates the header names in the string using colons and no added spaces, for example Host:User-Agent:Accept:Authorization:Referer.
Matches against the header order string are case insensitive.
" + "documentation":"Inspect a string containing the list of the request's header names, ordered as they appear in the web request that WAF receives for inspection. WAF generates the string and then uses that as the field to match component in its inspection. WAF separates the header names in the string using commas and no added spaces.
Matches against the header order string are case insensitive.
" } }, "documentation":"The part of the web request that you want WAF to inspect. Include the single FieldToMatch type that you want to inspect, with additional specifications as needed, according to the type. You specify a single request component in FieldToMatch for each rule statement that requires it. To inspect more than one component of the web request, create a separate rule statement for each component.
Example JSON for a QueryString field to match:
\"FieldToMatch\": { \"QueryString\": {} }
Example JSON for a Method field to match specification:
\"FieldToMatch\": { \"Method\": { \"Name\": \"DELETE\" } }
What WAF should do if the headers of the request are more numerous or larger than WAF can inspect. WAF does not support inspecting the entire contents of request headers when they exceed 8 KB (8192 bytes) or 200 total headers. The underlying host service forwards a maximum of 200 headers and at most 8 KB of header contents to WAF.
The options for oversize handling are the following:
CONTINUE - Inspect the available headers normally, according to the rule inspection criteria.
MATCH - Treat the web request as matching the rule statement. WAF applies the rule action to the request.
NO_MATCH - Treat the web request as not matching the rule statement.
Inspect a string containing the list of the request's header names, ordered as they appear in the web request that WAF receives for inspection. WAF generates the string and then uses that as the field to match component in its inspection. WAF separates the header names in the string using colons and no added spaces, for example Host:User-Agent:Accept:Authorization:Referer.
Matches against the header order string are case insensitive.
" + "documentation":"Inspect a string containing the list of the request's header names, ordered as they appear in the web request that WAF receives for inspection. WAF generates the string and then uses that as the field to match component in its inspection. WAF separates the header names in the string using commas and no added spaces.
Matches against the header order string are case insensitive.
" }, "HeaderValue":{"type":"string"}, "Headers":{ @@ -3287,7 +3361,7 @@ "members":{ "VendorName":{ "shape":"VendorName", - "documentation":"The name of the managed rule group vendor. You use this, along with the rule group name, to identify the rule group.
" + "documentation":"The name of the managed rule group vendor. You use this, along with the rule group name, to identify a rule group.
" }, "Name":{ "shape":"EntityName", @@ -3701,6 +3775,52 @@ "min":1, "pattern":".*\\S.*" }, + "ManagedProductDescriptor":{ + "type":"structure", + "members":{ + "VendorName":{ + "shape":"VendorName", + "documentation":"The name of the managed rule group vendor. You use this, along with the rule group name, to identify a rule group.
" + }, + "ManagedRuleSetName":{ + "shape":"EntityName", + "documentation":"The name of the managed rule group. For example, AWSManagedRulesAnonymousIpList or AWSManagedRulesATPRuleSet.
A unique identifier for the rule group. This ID is returned in the responses to create and list commands. You provide it to operations like update and delete.
" + }, + "ProductLink":{ + "shape":"ProductLink", + "documentation":"For Amazon Web Services Marketplace managed rule groups only, the link to the rule group product page.
" + }, + "ProductTitle":{ + "shape":"ProductTitle", + "documentation":"The display name for the managed rule group. For example, Anonymous IP list or Account takeover prevention.
A short description of the managed rule group.
" + }, + "SnsTopicArn":{ + "shape":"ResourceArn", + "documentation":"The Amazon resource name (ARN) of the Amazon Simple Notification Service SNS topic that's used to provide notification of changes to the managed rule group. You can subscribe to the SNS topic to receive notifications when the managed rule group is modified, such as for new versions and for version expiration. For more information, see the Amazon Simple Notification Service Developer Guide.
" + }, + "IsVersioningSupported":{ + "shape":"Boolean", + "documentation":"Indicates whether the rule group is versioned.
" + }, + "IsAdvancedManagedRuleSet":{ + "shape":"Boolean", + "documentation":"Indicates whether the rule group provides an advanced set of protections, such as the the Amazon Web Services Managed Rules rule groups that are used for WAF intelligent threat mitigation.
" + } + }, + "documentation":"The properties of a managed product, such as an Amazon Web Services Managed Rules rule group or an Amazon Web Services Marketplace managed rule group.
" + }, + "ManagedProductDescriptors":{ + "type":"list", + "member":{"shape":"ManagedProductDescriptor"} + }, "ManagedRuleGroupConfig":{ "type":"structure", "members":{ @@ -3752,7 +3872,7 @@ "members":{ "VendorName":{ "shape":"VendorName", - "documentation":"The name of the managed rule group vendor. You use this, along with the rule group name, to identify the rule group.
" + "documentation":"The name of the managed rule group vendor. You use this, along with the rule group name, to identify a rule group.
" }, "Name":{ "shape":"EntityName", @@ -3790,7 +3910,7 @@ "members":{ "VendorName":{ "shape":"VendorName", - "documentation":"The name of the managed rule group vendor. You use this, along with the rule group name, to identify the rule group.
" + "documentation":"The name of the managed rule group vendor. You use this, along with the rule group name, to identify a rule group.
" }, "Name":{ "shape":"EntityName", @@ -3805,7 +3925,7 @@ "documentation":"The description of the managed rule group, provided by Amazon Web Services Managed Rules or the Amazon Web Services Marketplace seller who manages it.
" } }, - "documentation":"High-level information about a managed rule group, returned by ListAvailableManagedRuleGroups. This provides information like the name and vendor name, that you provide when you add a ManagedRuleGroupStatement to a web ACL. Managed rule groups include Amazon Web Services Managed Rules rule groups, which are free of charge to WAF customers, and Amazon Web Services Marketplace managed rule groups, which you can subscribe to through Amazon Web Services Marketplace.
" + "documentation":"High-level information about a managed rule group, returned by ListAvailableManagedRuleGroups. This provides information like the name and vendor name, that you provide when you add a ManagedRuleGroupStatement to a web ACL. Managed rule groups include Amazon Web Services Managed Rules rule groups and Amazon Web Services Marketplace managed rule groups. To use any Amazon Web Services Marketplace managed rule group, first subscribe to the rule group through Amazon Web Services Marketplace.
" }, "ManagedRuleGroupVersion":{ "type":"structure", @@ -4152,6 +4272,28 @@ "CONTAINS_WORD" ] }, + "ProductDescription":{ + "type":"string", + "min":1, + "pattern":".*\\S.*" + }, + "ProductId":{ + "type":"string", + "max":128, + "min":1, + "pattern":".*\\S.*" + }, + "ProductLink":{ + "type":"string", + "max":2048, + "min":1, + "pattern":".*\\S.*" + }, + "ProductTitle":{ + "type":"string", + "min":1, + "pattern":".*\\S.*" + }, "PublishedVersions":{ "type":"map", "key":{"shape":"VersionKeyString"}, @@ -5829,11 +5971,11 @@ "members":{ "SampledRequestsEnabled":{ "shape":"Boolean", - "documentation":"A boolean indicating whether WAF should store a sampling of the web requests that match the rules. You can view the sampled requests through the WAF console.
" + "documentation":"Indicates whether WAF should store a sampling of the web requests that match the rules. You can view the sampled requests through the WAF console.
" }, "CloudWatchMetricsEnabled":{ "shape":"Boolean", - "documentation":"A boolean indicating whether the associated resource sends metrics to Amazon CloudWatch. For the list of available metrics, see WAF Metrics in the WAF Developer Guide.
For web ACLs, the metrics are for web requests that have the web ACL default action applied. WAF applies the default action to web requests that pass the inspection of all rules in the web ACL without being either allowed or blocked. For more information, see The web ACL default action in the WAF Developer Guide.
" + "documentation":"Indicates whether the associated resource sends metrics to Amazon CloudWatch. For the list of available metrics, see WAF Metrics in the WAF Developer Guide.
For web ACLs, the metrics are for web requests that have the web ACL default action applied. WAF applies the default action to web requests that pass the inspection of all rules in the web ACL without being either allowed or blocked. For more information, see The web ACL default action in the WAF Developer Guide.
" }, "MetricName":{ "shape":"MetricName", From b07a411e510c69d942ccbfb787f8852658d13f0c Mon Sep 17 00:00:00 2001 From: AWS <> Date: Fri, 2 Jun 2023 18:06:35 +0000 Subject: [PATCH 021/317] AWS CloudTrail Update: This feature allows users to start and stop event ingestion on a CloudTrail Lake event data store. --- .../feature-AWSCloudTrail-e5964bb.json | 6 + .../codegen-resources/service-2.json | 177 +++++++++++++----- 2 files changed, 135 insertions(+), 48 deletions(-) create mode 100644 .changes/next-release/feature-AWSCloudTrail-e5964bb.json diff --git a/.changes/next-release/feature-AWSCloudTrail-e5964bb.json b/.changes/next-release/feature-AWSCloudTrail-e5964bb.json new file mode 100644 index 000000000000..4c7d7628b8f4 --- /dev/null +++ b/.changes/next-release/feature-AWSCloudTrail-e5964bb.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS CloudTrail", + "contributor": "", + "description": "This feature allows users to start and stop event ingestion on a CloudTrail Lake event data store." +} diff --git a/services/cloudtrail/src/main/resources/codegen-resources/service-2.json b/services/cloudtrail/src/main/resources/codegen-resources/service-2.json index b0fcc57b132a..f413d344ea0c 100644 --- a/services/cloudtrail/src/main/resources/codegen-resources/service-2.json +++ b/services/cloudtrail/src/main/resources/codegen-resources/service-2.json @@ -39,7 +39,7 @@ {"shape":"NoManagementAccountSLRExistsException"}, {"shape":"ConflictException"} ], - "documentation":"Adds one or more tags to a trail, event data store, or channel, up to a limit of 50. Overwrites an existing tag's value when a new value is specified for an existing tag key. Tag key names must be unique; you cannot have two keys with the same name but different values. If you specify a key without a value, the tag will be created with the specified key and a value of null. 
You can tag a trail or event data store that applies to all Amazon Web Services Regions only from the Region in which the trail or event data store was created (also known as its home region).
", + "documentation":"Adds one or more tags to a trail, event data store, or channel, up to a limit of 50. Overwrites an existing tag's value when a new value is specified for an existing tag key. Tag key names must be unique; you cannot have two keys with the same name but different values. If you specify a key without a value, the tag will be created with the specified key and a value of null. You can tag a trail or event data store that applies to all Amazon Web Services Regions only from the Region in which the trail or event data store was created (also known as its home Region).
", "idempotent":true }, "CancelQuery":{ @@ -242,7 +242,7 @@ {"shape":"NoManagementAccountSLRExistsException"}, {"shape":"InsufficientDependencyServiceAccessPermissionException"} ], - "documentation":"Deletes a trail. This operation must be called from the region in which the trail was created. DeleteTrail cannot be called on the shadow trails (replicated trails in other regions) of a trail that is enabled in all regions.
Deletes a trail. This operation must be called from the Region in which the trail was created. DeleteTrail cannot be called on the shadow trails (replicated trails in other Regions) of a trail that is enabled in all Regions.
Retrieves settings for one or more trails associated with the current region for your account.
", + "documentation":"Retrieves settings for one or more trails associated with the current Region for your account.
", "idempotent":true }, "GetChannel":{ @@ -473,7 +473,7 @@ {"shape":"UnsupportedOperationException"}, {"shape":"OperationNotPermittedException"} ], - "documentation":"Returns a JSON-formatted list of information about the specified trail. Fields include information on delivery errors, Amazon SNS and Amazon S3 errors, and start and stop logging times for each trail. This operation returns trail status from a single region. To return trail status from all regions, you must call the operation on each region.
", + "documentation":"Returns a JSON-formatted list of information about the specified trail. Fields include information on delivery errors, Amazon SNS and Amazon S3 errors, and start and stop logging times for each trail. This operation returns trail status from a single Region. To return trail status from all Regions, you must call the operation on each Region.
", "idempotent":true }, "ListChannels":{ @@ -507,7 +507,7 @@ {"shape":"UnsupportedOperationException"}, {"shape":"NoManagementAccountSLRExistsException"} ], - "documentation":"Returns information about all event data stores in the account, in the current region.
", + "documentation":"Returns information about all event data stores in the account, in the current Region.
", "idempotent":true }, "ListImportFailures":{ @@ -558,7 +558,7 @@ {"shape":"OperationNotPermittedException"}, {"shape":"InvalidTokenException"} ], - "documentation":"Returns all public keys whose private keys were used to sign the digest files within the specified time range. The public key is needed to validate digest files that were signed with its corresponding private key.
CloudTrail uses different private and public key pairs per region. Each digest file is signed with a private key unique to its region. When you validate a digest file from a specific region, you must look in the same region for its corresponding public key.
Returns all public keys whose private keys were used to sign the digest files within the specified time range. The public key is needed to validate digest files that were signed with its corresponding private key.
CloudTrail uses different private and public key pairs per Region. Each digest file is signed with a private key unique to its Region. When you validate a digest file from a specific Region, you must look in the same Region for its corresponding public key.
Lists the tags for the trail, event data store, or channel in the current region.
", + "documentation":"Lists the tags for the specified trails, event data stores, or channels in the current Region.
", "idempotent":true }, "ListTrails":{ @@ -642,7 +642,7 @@ {"shape":"UnsupportedOperationException"}, {"shape":"OperationNotPermittedException"} ], - "documentation":"Looks up management events or CloudTrail Insights events that are captured by CloudTrail. You can look up events that occurred in a region within the last 90 days. Lookup supports the following attributes for management events:
Amazon Web Services access key
Event ID
Event name
Event source
Read only
Resource name
Resource type
User name
Lookup supports the following attributes for Insights events:
Event ID
Event name
Event source
All attributes are optional. The default number of results returned is 50, with a maximum of 50 possible. The response includes a token that you can use to get the next page of results.
The rate of lookup requests is limited to two per second, per account, per region. If this limit is exceeded, a throttling error occurs.
Looks up management events or CloudTrail Insights events that are captured by CloudTrail. You can look up events that occurred in a Region within the last 90 days. Lookup supports the following attributes for management events:
Amazon Web Services access key
Event ID
Event name
Event source
Read only
Resource name
Resource type
User name
Lookup supports the following attributes for Insights events:
Event ID
Event name
Event source
All attributes are optional. The default number of results returned is 50, with a maximum of 50 possible. The response includes a token that you can use to get the next page of results.
The rate of lookup requests is limited to two per second, per account, per Region. If this limit is exceeded, a throttling error occurs.
Configures an event selector or advanced event selectors for your trail. Use event selectors or advanced event selectors to specify management and data event settings for your trail. If you want your trail to log Insights events, be sure the event selector enables logging of the Insights event types you want configured for your trail. For more information about logging Insights events, see Logging Insights events for trails in the CloudTrail User Guide. By default, trails created without specific event selectors are configured to log all read and write management events, and no data events.
When an event occurs in your account, CloudTrail evaluates the event selectors or advanced event selectors in all trails. For each trail, if the event matches any event selector, the trail processes and logs the event. If the event doesn't match any event selector, the trail doesn't log the event.
Example
You create an event selector for a trail and specify that you want write-only events.
The EC2 GetConsoleOutput and RunInstances API operations occur in your account.
CloudTrail evaluates whether the events match your event selectors.
The RunInstances is a write-only event and it matches your event selector. The trail logs the event.
The GetConsoleOutput is a read-only event that doesn't match your event selector. The trail doesn't log the event.
The PutEventSelectors operation must be called from the region in which the trail was created; otherwise, an InvalidHomeRegionException exception is thrown.
You can configure up to five event selectors for each trail. For more information, see Logging management events, Logging data events, and Quotas in CloudTrail in the CloudTrail User Guide.
You can add advanced event selectors, and conditions for your advanced event selectors, up to a maximum of 500 values for all conditions and selectors on a trail. You can use either AdvancedEventSelectors or EventSelectors, but not both. If you apply AdvancedEventSelectors to a trail, any existing EventSelectors are overwritten. For more information about advanced event selectors, see Logging data events in the CloudTrail User Guide.
Configures an event selector or advanced event selectors for your trail. Use event selectors or advanced event selectors to specify management and data event settings for your trail. If you want your trail to log Insights events, be sure the event selector enables logging of the Insights event types you want configured for your trail. For more information about logging Insights events, see Logging Insights events for trails in the CloudTrail User Guide. By default, trails created without specific event selectors are configured to log all read and write management events, and no data events.
When an event occurs in your account, CloudTrail evaluates the event selectors or advanced event selectors in all trails. For each trail, if the event matches any event selector, the trail processes and logs the event. If the event doesn't match any event selector, the trail doesn't log the event.
Example
You create an event selector for a trail and specify that you want write-only events.
The EC2 GetConsoleOutput and RunInstances API operations occur in your account.
CloudTrail evaluates whether the events match your event selectors.
The RunInstances is a write-only event and it matches your event selector. The trail logs the event.
The GetConsoleOutput is a read-only event that doesn't match your event selector. The trail doesn't log the event.
The PutEventSelectors operation must be called from the Region in which the trail was created; otherwise, an InvalidHomeRegionException exception is thrown.
You can configure up to five event selectors for each trail. For more information, see Logging management events, Logging data events, and Quotas in CloudTrail in the CloudTrail User Guide.
You can add advanced event selectors, and conditions for your advanced event selectors, up to a maximum of 500 values for all conditions and selectors on a trail. You can use either AdvancedEventSelectors or EventSelectors, but not both. If you apply AdvancedEventSelectors to a trail, any existing EventSelectors are overwritten. For more information about advanced event selectors, see Logging data events in the CloudTrail User Guide.
Restores a deleted event data store specified by EventDataStore, which accepts an event data store ARN. You can only restore a deleted event data store within the seven-day wait period after deletion. Restoring an event data store can take several minutes, depending on the size of the event data store.
Starts the ingestion of live events on an event data store specified as either an ARN or the ID portion of the ARN. To start ingestion, the event data store Status must be STOPPED_INGESTION and the eventCategory must be Management, Data, or ConfigurationItem.
Starts the recording of Amazon Web Services API calls and log file delivery for a trail. For a trail that is enabled in all regions, this operation must be called from the region in which the trail was created. This operation cannot be called on the shadow trails (replicated trails in other regions) of a trail that is enabled in all regions.
", + "documentation":"Starts the recording of Amazon Web Services API calls and log file delivery for a trail. For a trail that is enabled in all Regions, this operation must be called from the Region in which the trail was created. This operation cannot be called on the shadow trails (replicated trails in other Regions) of a trail that is enabled in all Regions.
", "idempotent":true }, "StartQuery":{ @@ -868,6 +890,28 @@ "documentation":"Starts a CloudTrail Lake query. The required QueryStatement parameter provides your SQL query, enclosed in single quotation marks. Use the optional DeliveryS3Uri parameter to deliver the query results to an S3 bucket.
Stops the ingestion of live events on an event data store specified as either an ARN or the ID portion of the ARN. To stop ingestion, the event data store Status must be ENABLED and the eventCategory must be Management, Data, or ConfigurationItem.
Suspends the recording of Amazon Web Services API calls and log file delivery for the specified trail. Under most circumstances, there is no need to use this action. You can update a trail without stopping it first. This action is the only way to stop recording. For a trail enabled in all regions, this operation must be called from the region in which the trail was created, or an InvalidHomeRegionException will occur. This operation cannot be called on the shadow trails (replicated trails in other regions) of a trail enabled in all regions.
Suspends the recording of Amazon Web Services API calls and log file delivery for the specified trail. Under most circumstances, there is no need to use this action. You can update a trail without stopping it first. This action is the only way to stop recording. For a trail enabled in all Regions, this operation must be called from the Region in which the trail was created, or an InvalidHomeRegionException will occur. This operation cannot be called on the shadow trails (replicated trails in other Regions) of a trail enabled in all Regions.
Updates trail settings that control what events you are logging, and how to handle log files. Changes to a trail do not require stopping the CloudTrail service. Use this action to designate an existing bucket for log delivery. If the existing bucket has previously been a target for CloudTrail log files, an IAM policy exists for the bucket. UpdateTrail must be called from the region in which the trail was created; otherwise, an InvalidHomeRegionException is thrown.
Updates trail settings that control what events you are logging, and how to handle log files. Changes to a trail do not require stopping the CloudTrail service. Use this action to designate an existing bucket for log delivery. If the existing bucket has previously been a target for CloudTrail log files, an IAM policy exists for the bucket. UpdateTrail must be called from the Region in which the trail was created; otherwise, an InvalidHomeRegionException is thrown.
Specifies the ARN of the trail, event data store, or channel to which one or more tags will be added.
The format of a trail ARN is: arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
The format of an event data store ARN is: arn:aws:cloudtrail:us-east-2:12345678910:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
The format of a channel ARN is: arn:aws:cloudtrail:us-east-2:123456789012:channel/01234567890
Specifies the ARN of the trail, event data store, or channel to which one or more tags will be added.
The format of a trail ARN is: arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
The format of an event data store ARN is: arn:aws:cloudtrail:us-east-2:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
The format of a channel ARN is: arn:aws:cloudtrail:us-east-2:123456789012:channel/01234567890
This exception is thrown when an operation is called with a trail ARN that is not valid. The following is the format of a trail ARN.
arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
This exception is also thrown when you call AddTags or RemoveTags on a trail, event data store, or channel with a resource ARN that is not valid.
The following is the format of an event data store ARN: arn:aws:cloudtrail:us-east-2:12345678910:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
The following is the format of a channel ARN: arn:aws:cloudtrail:us-east-2:123456789012:channel/01234567890
This exception is thrown when an operation is called with a trail ARN that is not valid. The following is the format of a trail ARN.
arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
This exception is also thrown when you call AddTags or RemoveTags on a trail, event data store, or channel with a resource ARN that is not valid.
The following is the format of an event data store ARN: arn:aws:cloudtrail:us-east-2:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
The following is the format of a channel ARN: arn:aws:cloudtrail:us-east-2:123456789012:channel/01234567890
Cannot set a CloudWatch Logs delivery for this region.
", + "documentation":"Cannot set a CloudWatch Logs delivery for this Region.
", "exception":true }, "ConflictException":{ @@ -1326,7 +1370,7 @@ }, "MultiRegionEnabled":{ "shape":"Boolean", - "documentation":"Specifies whether the event data store includes events from all regions, or only from the region in which the event data store is created.
" + "documentation":"Specifies whether the event data store includes events from all Regions, or only from the Region in which the event data store is created.
" }, "OrganizationEnabled":{ "shape":"Boolean", @@ -1344,6 +1388,10 @@ "KmsKeyId":{ "shape":"EventDataStoreKmsKeyId", "documentation":"Specifies the KMS key ID to use to encrypt the events delivered by CloudTrail. The value can be an alias name prefixed by alias/, a fully specified ARN to an alias, a fully specified ARN to a key, or a globally unique identifier.
Disabling or deleting the KMS key, or removing CloudTrail permissions on the key, prevents CloudTrail from logging events to the event data store, and prevents users from querying the data in the event data store that was encrypted with the key. After you associate an event data store with a KMS key, the KMS key cannot be removed or changed. Before you disable or delete a KMS key that you are using with an event data store, delete or back up your event data store.
CloudTrail also supports KMS multi-Region keys. For more information about multi-Region keys, see Using multi-Region keys in the Key Management Service Developer Guide.
Examples:
alias/MyAliasName
arn:aws:kms:us-east-2:123456789012:alias/MyAliasName
arn:aws:kms:us-east-2:123456789012:key/12345678-1234-1234-1234-123456789012
12345678-1234-1234-1234-123456789012
Specifies whether the event data store should start ingesting live events. The default is true.
" } } }, @@ -1368,7 +1416,7 @@ }, "MultiRegionEnabled":{ "shape":"Boolean", - "documentation":"Indicates whether the event data store collects events from all regions, or only from the region in which it was created.
" + "documentation":"Indicates whether the event data store collects events from all Regions, or only from the Region in which it was created.
" }, "OrganizationEnabled":{ "shape":"Boolean", @@ -1426,7 +1474,7 @@ }, "IsMultiRegionTrail":{ "shape":"Boolean", - "documentation":"Specifies whether the trail is created in the current region or in all regions. The default is false, which creates a trail only in the region where you are signed in. As a best practice, consider creating trails that log events in all regions.
" + "documentation":"Specifies whether the trail is created in the current Region or in all Regions. The default is false, which creates a trail only in the Region where you are signed in. As a best practice, consider creating trails that log events in all Regions.
" }, "EnableLogFileValidation":{ "shape":"Boolean", @@ -1482,7 +1530,7 @@ }, "IsMultiRegionTrail":{ "shape":"Boolean", - "documentation":"Specifies whether the trail exists in one region or in all regions.
" + "documentation":"Specifies whether the trail exists in one Region or in all Regions.
" }, "TrailARN":{ "shape":"String", @@ -1693,11 +1741,11 @@ "members":{ "trailNameList":{ "shape":"TrailNameList", - "documentation":"Specifies a list of trail names, trail ARNs, or both, of the trails to describe. The format of a trail ARN is:
arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
If an empty list is specified, information for the trail in the current region is returned.
If an empty list is specified and IncludeShadowTrails is false, then information for all trails in the current region is returned.
If an empty list is specified and IncludeShadowTrails is null or true, then information for all trails in the current region and any associated shadow trails in other regions is returned.
If one or more trail names are specified, information is returned only if the names match the names of trails belonging only to the current region and current account. To return information about a trail in another region, you must specify its trail ARN.
Specifies a list of trail names, trail ARNs, or both, of the trails to describe. The format of a trail ARN is:
arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
If an empty list is specified, information for the trail in the current Region is returned.
If an empty list is specified and IncludeShadowTrails is false, then information for all trails in the current Region is returned.
If an empty list is specified and IncludeShadowTrails is null or true, then information for all trails in the current Region and any associated shadow trails in other Regions is returned.
If one or more trail names are specified, information is returned only if the names match the names of trails belonging only to the current Region and current account. To return information about a trail in another Region, you must specify its trail ARN.
Specifies whether to include shadow trails in the response. A shadow trail is the replication in a region of a trail that was created in a different region, or in the case of an organization trail, the replication of an organization trail in member accounts. If you do not include shadow trails, organization trails in a member account and region replication trails will not be returned. The default is true.
" + "documentation":"Specifies whether to include shadow trails in the response. A shadow trail is the replication in a Region of a trail that was created in a different Region, or in the case of an organization trail, the replication of an organization trail in member accounts. If you do not include shadow trails, organization trails in a member account and Region replication trails will not be returned. The default is true.
" } }, "documentation":"Returns information about the trail.
" @@ -1814,7 +1862,7 @@ }, "Status":{ "shape":"EventDataStoreStatus", - "documentation":"The status of an event data store. Values are ENABLED and PENDING_DELETION.
The status of an event data store.
", "deprecated":true, "deprecatedMessage":"Status is no longer returned by ListEventDataStores" }, @@ -1826,7 +1874,7 @@ }, "MultiRegionEnabled":{ "shape":"Boolean", - "documentation":"Indicates whether the event data store includes events from all regions, or only from the region in which it was created.
", + "documentation":"Indicates whether the event data store includes events from all Regions, or only from the Region in which it was created.
", "deprecated":true, "deprecatedMessage":"MultiRegionEnabled is no longer returned by ListEventDataStores" }, @@ -1915,7 +1963,10 @@ "enum":[ "CREATED", "ENABLED", - "PENDING_DELETION" + "PENDING_DELETION", + "STARTING_INGESTION", + "STOPPING_INGESTION", + "STOPPED_INGESTION" ] }, "EventDataStoreTerminationProtectedException":{ @@ -1938,7 +1989,7 @@ }, "IncludeManagementEvents":{ "shape":"Boolean", - "documentation":"Specify if you want your event selector to include management events for your trail.
For more information, see Management Events in the CloudTrail User Guide.
By default, the value is true.
The first copy of management events is free. You are charged for additional copies of management events that you are logging on any subsequent trail in the same region. For more information about CloudTrail pricing, see CloudTrail Pricing.
" + "documentation":"Specify if you want your event selector to include management events for your trail.
For more information, see Management Events in the CloudTrail User Guide.
By default, the value is true.
The first copy of management events is free. You are charged for additional copies of management events that you are logging on any subsequent trail in the same Region. For more information about CloudTrail pricing, see CloudTrail Pricing.
" }, "DataResources":{ "shape":"DataResources", @@ -1946,7 +1997,7 @@ }, "ExcludeManagementEventSources":{ "shape":"ExcludeManagementEventSources", - "documentation":"An optional list of service event sources from which you do not want management events to be logged on your trail. In this release, the list can be empty (disables the filter), or it can filter out Key Management Service or Amazon RDS Data API events by containing kms.amazonaws.com or rdsdata.amazonaws.com. By default, ExcludeManagementEventSources is empty, and KMS and Amazon RDS Data API events are logged to your trail. You can exclude management event sources only in regions that support the event source.
An optional list of service event sources from which you do not want management events to be logged on your trail. In this release, the list can be empty (disables the filter), or it can filter out Key Management Service or Amazon RDS Data API events by containing kms.amazonaws.com or rdsdata.amazonaws.com. By default, ExcludeManagementEventSources is empty, and KMS and Amazon RDS Data API events are logged to your trail. You can exclude management event sources only in Regions that support the event source.
Use event selectors to further specify the management and data event settings for your trail. By default, trails created without specific event selectors will be configured to log all read and write management events, and no data events. When an event occurs in your account, CloudTrail evaluates the event selector for all trails. For each trail, if the event matches any event selector, the trail processes and logs the event. If the event doesn't match any event selector, the trail doesn't log the event.
You can configure up to five event selectors for a trail.
You cannot apply both event selectors and advanced event selectors to a trail.
" @@ -1990,7 +2041,7 @@ }, "SourceConfig":{ "shape":"SourceConfig", - "documentation":"Provides information about the advanced event selectors configured for the channel, and whether the channel applies to all regions or a single region.
" + "documentation":"Provides information about the advanced event selectors configured for the channel, and whether the channel applies to all Regions or a single Region.
" }, "Destinations":{ "shape":"Destinations", @@ -2025,7 +2076,7 @@ }, "Status":{ "shape":"EventDataStoreStatus", - "documentation":"The status of an event data store. Values can be ENABLED and PENDING_DELETION.
The status of an event data store.
" }, "AdvancedEventSelectors":{ "shape":"AdvancedEventSelectors", @@ -2033,7 +2084,7 @@ }, "MultiRegionEnabled":{ "shape":"Boolean", - "documentation":"Indicates whether the event data store includes events from all regions, or only from the region in which it was created.
" + "documentation":"Indicates whether the event data store includes events from all Regions, or only from the Region in which it was created.
" }, "OrganizationEnabled":{ "shape":"Boolean", @@ -2256,7 +2307,7 @@ "members":{ "Name":{ "shape":"String", - "documentation":"Specifies the name or the CloudTrail ARN of the trail for which you are requesting status. To get the status of a shadow trail (a replication of the trail in another region), you must specify its ARN. The following is the format of a trail ARN.
arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
Specifies the name or the CloudTrail ARN of the trail for which you are requesting status. To get the status of a shadow trail (a replication of the trail in another Region), you must specify its ARN. The following is the format of a trail ARN.
arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
The name of a trail about which you want the current status.
" @@ -2613,7 +2664,7 @@ "type":"structure", "members":{ }, - "documentation":"This exception is thrown when an operation is called on a trail from a region other than the region in which the trail was created.
", + "documentation":"This exception is thrown when an operation is called on a trail from a Region other than the Region in which the trail was created.
", "exception":true }, "InvalidImportSourceException":{ @@ -2761,7 +2812,7 @@ "type":"structure", "members":{ }, - "documentation":"This exception is thrown when the KMS key does not exist, when the S3 bucket and the KMS key are not in the same region, or when the KMS key associated with the Amazon SNS topic either does not exist or is not in the same region.
", + "documentation":"This exception is thrown when the KMS key does not exist, when the S3 bucket and the KMS key are not in the same Region, or when the KMS key associated with the Amazon SNS topic either does not exist or is not in the same Region.
", "exception":true }, "ListChannelsMaxResultsCount":{ @@ -2818,7 +2869,7 @@ "members":{ "EventDataStores":{ "shape":"EventDataStores", - "documentation":"Contains information about event data stores in the account, in the current region.
" + "documentation":"Contains information about event data stores in the account, in the current Region.
" }, "NextToken":{ "shape":"PaginationToken", @@ -2987,7 +3038,7 @@ "members":{ "ResourceIdList":{ "shape":"ResourceIdList", - "documentation":"Specifies a list of trail, event data store, or channel ARNs whose tags will be listed. The list has a limit of 20 ARNs.
" + "documentation":"Specifies a list of trail, event data store, or channel ARNs whose tags will be listed. The list has a limit of 20 ARNs.
Example trail ARN format: arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
Example event data store ARN format: arn:aws:cloudtrail:us-east-2:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
Example channel ARN format: arn:aws:cloudtrail:us-east-2:123456789012:channel/01234567890
Returns the name, ARN, and home region of trails in the current account.
" + "documentation":"Returns the name, ARN, and home Region of trails in the current account.
" }, "NextToken":{ "shape":"String", @@ -3467,7 +3518,7 @@ "members":{ "ResourceId":{ "shape":"String", - "documentation":"Specifies the ARN of the trail, event data store, or channel from which tags should be removed.
Example trail ARN format: arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
Example event data store ARN format: arn:aws:cloudtrail:us-east-2:12345678910:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
Example channel ARN format: arn:aws:cloudtrail:us-east-2:123456789012:channel/01234567890
Specifies the ARN of the trail, event data store, or channel from which tags should be removed.
Example trail ARN format: arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail
Example event data store ARN format: arn:aws:cloudtrail:us-east-2:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
Example channel ARN format: arn:aws:cloudtrail:us-east-2:123456789012:channel/01234567890
Indicates whether the event data store is collecting events from all regions, or only from the region in which the event data store was created.
" + "documentation":"Indicates whether the event data store is collecting events from all Regions, or only from the Region in which the event data store was created.
" }, "OrganizationEnabled":{ "shape":"Boolean", @@ -3654,7 +3705,7 @@ }, "S3BucketRegion":{ "shape":"String", - "documentation":"The region associated with the source S3 bucket.
" + "documentation":"The Region associated with the source S3 bucket.
" }, "S3BucketAccessRoleArn":{ "shape":"String", @@ -3686,7 +3737,7 @@ "members":{ "ApplyToAllRegions":{ "shape":"Boolean", - "documentation":"Specifies whether the channel applies to a single region or to all regions.
" + "documentation":"Specifies whether the channel applies to a single Region or to all Regions.
" }, "AdvancedEventSelectors":{ "shape":"AdvancedEventSelectors", @@ -3695,6 +3746,21 @@ }, "documentation":"Contains configuration information about the channel.
" }, + "StartEventDataStoreIngestionRequest":{ + "type":"structure", + "required":["EventDataStore"], + "members":{ + "EventDataStore":{ + "shape":"EventDataStoreArn", + "documentation":"The ARN (or ID suffix of the ARN) of the event data store for which you want to start ingestion.
" + } + } + }, + "StartEventDataStoreIngestionResponse":{ + "type":"structure", + "members":{ + } + }, "StartImportRequest":{ "type":"structure", "members":{ @@ -3797,6 +3863,21 @@ } } }, + "StopEventDataStoreIngestionRequest":{ + "type":"structure", + "required":["EventDataStore"], + "members":{ + "EventDataStore":{ + "shape":"EventDataStoreArn", + "documentation":"The ARN (or ID suffix of the ARN) of the event data store for which you want to stop ingestion.
" + } + } + }, + "StopEventDataStoreIngestionResponse":{ + "type":"structure", + "members":{ + } + }, "StopImportRequest":{ "type":"structure", "required":["ImportId"], @@ -3935,11 +4016,11 @@ }, "IsMultiRegionTrail":{ "shape":"Boolean", - "documentation":"Specifies whether the trail exists only in one region or exists in all regions.
" + "documentation":"Specifies whether the trail exists only in one Region or exists in all Regions.
" }, "HomeRegion":{ "shape":"String", - "documentation":"The region in which the trail was created.
" + "documentation":"The Region in which the trail was created.
" }, "TrailARN":{ "shape":"String", @@ -3999,7 +4080,7 @@ "documentation":"The Amazon Web Services Region in which a trail was created.
" } }, - "documentation":"Information about a CloudTrail trail, including the trail's name, home region, and Amazon Resource Name (ARN).
" + "documentation":"Information about a CloudTrail trail, including the trail's name, home Region, and Amazon Resource Name (ARN).
" }, "TrailList":{ "type":"list", @@ -4097,7 +4178,7 @@ }, "MultiRegionEnabled":{ "shape":"Boolean", - "documentation":"Specifies whether an event data store collects events from all regions, or only from the region in which it was created.
" + "documentation":"Specifies whether an event data store collects events from all Regions, or only from the Region in which it was created.
" }, "OrganizationEnabled":{ "shape":"Boolean", @@ -4130,7 +4211,7 @@ }, "Status":{ "shape":"EventDataStoreStatus", - "documentation":"The status of an event data store. Values can be ENABLED and PENDING_DELETION.
The status of an event data store.
" }, "AdvancedEventSelectors":{ "shape":"AdvancedEventSelectors", @@ -4138,7 +4219,7 @@ }, "MultiRegionEnabled":{ "shape":"Boolean", - "documentation":"Indicates whether the event data store includes events from all regions, or only from the region in which it was created.
" + "documentation":"Indicates whether the event data store includes events from all Regions, or only from the Region in which it was created.
" }, "OrganizationEnabled":{ "shape":"Boolean", @@ -4192,7 +4273,7 @@ }, "IsMultiRegionTrail":{ "shape":"Boolean", - "documentation":"Specifies whether the trail applies only to the current region or to all regions. The default is false. If the trail exists only in the current region and this value is set to true, shadow trails (replications of the trail) will be created in the other regions. If the trail exists in all regions and this value is set to false, the trail will remain in the region where it was created, and its shadow trails in other regions will be deleted. As a best practice, consider using trails that log events in all regions.
" + "documentation":"Specifies whether the trail applies only to the current Region or to all Regions. The default is false. If the trail exists only in the current Region and this value is set to true, shadow trails (replications of the trail) will be created in the other Regions. If the trail exists in all Regions and this value is set to false, the trail will remain in the Region where it was created, and its shadow trails in other Regions will be deleted. As a best practice, consider using trails that log events in all Regions.
" }, "EnableLogFileValidation":{ "shape":"Boolean", @@ -4247,7 +4328,7 @@ }, "IsMultiRegionTrail":{ "shape":"Boolean", - "documentation":"Specifies whether the trail exists in one region or in all regions.
" + "documentation":"Specifies whether the trail exists in one Region or in all Regions.
" }, "TrailARN":{ "shape":"String", @@ -4277,5 +4358,5 @@ "documentation":"Returns the objects or data listed below if successful. Otherwise, returns an error.
" } }, - "documentation":"This is the CloudTrail API Reference. It provides descriptions of actions, data types, common parameters, and common errors for CloudTrail.
CloudTrail is a web service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the Amazon Web Services API call, the source IP address, the request parameters, and the response elements returned by the service.
As an alternative to the API, you can use one of the Amazon Web Services SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide programmatic access to CloudTrail. For example, the SDKs handle cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools to Build on Amazon Web Services.
See the CloudTrail User Guide for information about the data that is included with each Amazon Web Services API call listed in the log files.
" + "documentation":"This is the CloudTrail API Reference. It provides descriptions of actions, data types, common parameters, and common errors for CloudTrail.
CloudTrail is a web service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the Amazon Web Services API call, the source IP address, the request parameters, and the response elements returned by the service.
As an alternative to the API, you can use one of the Amazon Web Services SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide programmatic access to CloudTrail. For example, the SDKs handle cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools to Build on Amazon Web Services.
See the CloudTrail User Guide for information about the data that is included with each Amazon Web Services API call listed in the log files.
Actions available for CloudTrail trails
The following actions are available for CloudTrail trails.
Actions available for CloudTrail event data stores
The following actions are available for CloudTrail event data stores.
The following additional actions are available for imports.
Actions available for CloudTrail channels
The following actions are available for CloudTrail channels.
Actions available for managing delegated administrators
The following actions are available for adding or removing a delegated administrator to manage an Organizations organization’s CloudTrail resources.
" } From c25e41e008e3a11b9aa032c4954cc0255e3be4be Mon Sep 17 00:00:00 2001 From: AWS <> Date: Fri, 2 Jun 2023 18:06:36 +0000 Subject: [PATCH 022/317] Amazon SageMaker Service Update: This release adds Selective Execution feature that allows SageMaker Pipelines users to run selected steps in a pipeline. --- ...eature-AmazonSageMakerService-c559731.json | 6 ++ .../codegen-resources/service-2.json | 63 ++++++++++++++++++- 2 files changed, 68 insertions(+), 1 deletion(-) create mode 100644 .changes/next-release/feature-AmazonSageMakerService-c559731.json diff --git a/.changes/next-release/feature-AmazonSageMakerService-c559731.json b/.changes/next-release/feature-AmazonSageMakerService-c559731.json new file mode 100644 index 000000000000..6d5f5f611eef --- /dev/null +++ b/.changes/next-release/feature-AmazonSageMakerService-c559731.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon SageMaker Service", + "contributor": "", + "description": "This release adds Selective Execution feature that allows SageMaker Pipelines users to run selected steps in a pipeline." +} diff --git a/services/sagemaker/src/main/resources/codegen-resources/service-2.json b/services/sagemaker/src/main/resources/codegen-resources/service-2.json index bfb0aa2bccf3..23671aa4ca79 100644 --- a/services/sagemaker/src/main/resources/codegen-resources/service-2.json +++ b/services/sagemaker/src/main/resources/codegen-resources/service-2.json @@ -4974,7 +4974,7 @@ }, "ContentType":{ "shape":"ContentType", - "documentation":"The content type of the data from the input source. The following are the allowed content types for different problems:
ImageClassification: image/png, image/jpeg, or image/*. The default value is image/*.
TextClassification: text/csv;header=present or x-application/vnd.amazon+parquet. The default value is text/csv;header=present.
The content type of the data from the input source. The following are the allowed content types for different problems:
ImageClassification: image/png, image/jpeg, image/*
TextClassification: text/csv;header=present
The parallelism configuration applied to the pipeline.
" + }, + "SelectiveExecutionConfig":{ + "shape":"SelectiveExecutionConfig", + "documentation":"The selective execution configuration applied to the pipeline run.
" } } }, @@ -26834,6 +26838,10 @@ "PipelineParameters":{ "shape":"ParameterList", "documentation":"Contains a list of pipeline parameters. This list can be empty.
" + }, + "SelectiveExecutionConfig":{ + "shape":"SelectiveExecutionConfig", + "documentation":"The selective execution configuration applied to the pipeline run.
" } }, "documentation":"An execution of a pipeline.
" @@ -26912,6 +26920,10 @@ "Metadata":{ "shape":"PipelineExecutionStepMetadata", "documentation":"Metadata to run the pipeline step.
" + }, + "SelectiveExecutionResult":{ + "shape":"SelectiveExecutionResult", + "documentation":"The ARN from an execution of the current pipeline from which results are reused for this step.
" } }, "documentation":"An execution of a step in a pipeline.
" @@ -29772,6 +29784,51 @@ "max":5 }, "Seed":{"type":"long"}, + "SelectedStep":{ + "type":"structure", + "required":["StepName"], + "members":{ + "StepName":{ + "shape":"String256", + "documentation":"The name of the pipeline step.
" + } + }, + "documentation":"A step selected to run in selective execution mode.
" + }, + "SelectedStepList":{ + "type":"list", + "member":{"shape":"SelectedStep"}, + "max":50, + "min":1 + }, + "SelectiveExecutionConfig":{ + "type":"structure", + "required":[ + "SourcePipelineExecutionArn", + "SelectedSteps" + ], + "members":{ + "SourcePipelineExecutionArn":{ + "shape":"PipelineExecutionArn", + "documentation":"The ARN from a reference execution of the current pipeline. Used to copy input collaterals needed for the selected steps to run. The execution status of the pipeline can be either Failed or Success.
A list of pipeline steps to run. All step(s) in all path(s) between two selected steps should be included.
" + } + }, + "documentation":"The selective execution configuration applied to the pipeline run.
" + }, + "SelectiveExecutionResult":{ + "type":"structure", + "members":{ + "SourcePipelineExecutionArn":{ + "shape":"PipelineExecutionArn", + "documentation":"The ARN from an execution of the current pipeline.
" + } + }, + "documentation":"The ARN from an execution of the current pipeline.
" + }, "SendPipelineExecutionStepFailureRequest":{ "type":"structure", "required":["CallbackToken"], @@ -30311,6 +30368,10 @@ "ParallelismConfiguration":{ "shape":"ParallelismConfiguration", "documentation":"This configuration, if specified, overrides the parallelism configuration of the parent pipeline for this specific run.
" + }, + "SelectiveExecutionConfig":{ + "shape":"SelectiveExecutionConfig", + "documentation":"The selective execution configuration applied to the pipeline run.
" } } }, From 3ebefb0e9aa9b30c9a5967ea9019655c0ec18d33 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Fri, 2 Jun 2023 18:06:36 +0000 Subject: [PATCH 023/317] Amazon Athena Update: This release introduces the DeleteCapacityReservation API and the ability to manage capacity reservations using CloudFormation --- .../feature-AmazonAthena-2e152b9.json | 6 ++++ .../codegen-resources/service-2.json | 34 +++++++++++++++++-- 2 files changed, 38 insertions(+), 2 deletions(-) create mode 100644 .changes/next-release/feature-AmazonAthena-2e152b9.json diff --git a/.changes/next-release/feature-AmazonAthena-2e152b9.json b/.changes/next-release/feature-AmazonAthena-2e152b9.json new file mode 100644 index 000000000000..ae469c3e42cf --- /dev/null +++ b/.changes/next-release/feature-AmazonAthena-2e152b9.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Athena", + "contributor": "", + "description": "This release introduces the DeleteCapacityReservation API and the ability to manage capacity reservations using CloudFormation" +} diff --git a/services/athena/src/main/resources/codegen-resources/service-2.json b/services/athena/src/main/resources/codegen-resources/service-2.json index 126542567103..2ee393c43c60 100644 --- a/services/athena/src/main/resources/codegen-resources/service-2.json +++ b/services/athena/src/main/resources/codegen-resources/service-2.json @@ -66,7 +66,7 @@ {"shape":"InvalidRequestException"}, {"shape":"InternalServerException"} ], - "documentation":"Cancels the capacity reservation with the specified name.
", + "documentation":"Cancels the capacity reservation with the specified name. Cancelled reservations remain in your account and will be deleted 45 days after cancellation. During the 45 days, you cannot re-purpose or reuse a reservation that has been cancelled, but you can refer to its tags and view it for historical reference.
", "idempotent":true }, "CreateCapacityReservation":{ @@ -171,6 +171,21 @@ ], "documentation":"Creates a workgroup with the specified name. A workgroup can be an Apache Spark enabled workgroup or an Athena SQL workgroup.
" }, + "DeleteCapacityReservation":{ + "name":"DeleteCapacityReservation", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteCapacityReservationInput"}, + "output":{"shape":"DeleteCapacityReservationOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Deletes a cancelled capacity reservation. A reservation must be cancelled before it can be deleted. A deleted reservation is immediately removed from your account and can no longer be referenced, including by its ARN. A deleted reservation cannot be called by GetCapacityReservation, and deleted reservations do not appear in the output of ListCapacityReservations.
The name of the capacity reservation to delete.
" + } + } + }, + "DeleteCapacityReservationOutput":{ + "type":"structure", + "members":{ + } + }, "DeleteDataCatalogInput":{ "type":"structure", "required":["Name"], @@ -3524,7 +3554,7 @@ }, "ExecutionParameters":{ "shape":"ExecutionParameters", - "documentation":"A list of values for the parameters in a query. The values are applied sequentially to the parameters in the query in the order in which the parameters occur.
" + "documentation":"A list of values for the parameters in a query. The values are applied sequentially to the parameters in the query in the order in which the parameters occur. The list of parameters is not returned in the response.
" }, "SubstatementType":{ "shape":"String", From 750de8f5d92de9d6e58958ca1c11fbde5ddc8785 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Fri, 2 Jun 2023 18:08:54 +0000 Subject: [PATCH 024/317] Updated endpoints.json and partitions.json. --- .changes/next-release/feature-AWSSDKforJavav2-0443982.json | 6 ++++++ .../amazon/awssdk/regions/internal/region/endpoints.json | 3 ++- 2 files changed, 8 insertions(+), 1 deletion(-) create mode 100644 .changes/next-release/feature-AWSSDKforJavav2-0443982.json diff --git a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json new file mode 100644 index 000000000000..e5b5ee3ca5e3 --- /dev/null +++ b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated endpoint and partition metadata." +} diff --git a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json index b7e9f016fdf7..9aaed5dd9014 100644 --- a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json +++ b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json @@ -23703,7 +23703,8 @@ }, "route53resolver" : { "endpoints" : { - "us-iso-east-1" : { } + "us-iso-east-1" : { }, + "us-iso-west-1" : { } } }, "runtime.sagemaker" : { From 56cc322b3d3cfba50695f2ce83f6abbcc08d4867 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Fri, 2 Jun 2023 18:10:00 +0000 Subject: [PATCH 025/317] Release 2.20.78. Updated CHANGELOG.md, README.md and all pom.xml. 
--- .changes/2.20.78.json | 36 +++++++++++++++++++ .../feature-AWSCloudTrail-e5964bb.json | 6 ---- .../feature-AWSSDKforJavav2-0443982.json | 6 ---- .../feature-AWSWAFV2-898b858.json | 6 ---- .../feature-AmazonAthena-2e152b9.json | 6 ---- ...eature-AmazonSageMakerService-c559731.json | 6 ---- CHANGELOG.md | 21 +++++++++++ README.md | 8 ++--- archetypes/archetype-app-quickstart/pom.xml | 2 +- archetypes/archetype-lambda/pom.xml | 2 +- archetypes/archetype-tools/pom.xml | 2 +- archetypes/pom.xml | 2 +- aws-sdk-java/pom.xml | 2 +- bom-internal/pom.xml | 2 +- bom/pom.xml | 2 +- bundle/pom.xml | 2 +- codegen-lite-maven-plugin/pom.xml | 2 +- codegen-lite/pom.xml | 2 +- codegen-maven-plugin/pom.xml | 2 +- codegen/pom.xml | 2 +- core/annotations/pom.xml | 2 +- core/arns/pom.xml | 2 +- core/auth-crt/pom.xml | 2 +- core/auth/pom.xml | 2 +- core/aws-core/pom.xml | 2 +- core/crt-core/pom.xml | 2 +- core/endpoints-spi/pom.xml | 2 +- core/imds/pom.xml | 2 +- core/json-utils/pom.xml | 2 +- core/metrics-spi/pom.xml | 2 +- core/pom.xml | 2 +- core/profiles/pom.xml | 2 +- core/protocols/aws-cbor-protocol/pom.xml | 2 +- core/protocols/aws-json-protocol/pom.xml | 2 +- core/protocols/aws-query-protocol/pom.xml | 2 +- core/protocols/aws-xml-protocol/pom.xml | 2 +- core/protocols/pom.xml | 2 +- core/protocols/protocol-core/pom.xml | 2 +- core/regions/pom.xml | 2 +- core/sdk-core/pom.xml | 2 +- http-client-spi/pom.xml | 2 +- http-clients/apache-client/pom.xml | 2 +- http-clients/aws-crt-client/pom.xml | 2 +- http-clients/netty-nio-client/pom.xml | 2 +- http-clients/pom.xml | 2 +- http-clients/url-connection-client/pom.xml | 2 +- .../cloudwatch-metric-publisher/pom.xml | 2 +- metric-publishers/pom.xml | 2 +- pom.xml | 2 +- release-scripts/pom.xml | 2 +- services-custom/dynamodb-enhanced/pom.xml | 2 +- services-custom/pom.xml | 2 +- services-custom/s3-transfer-manager/pom.xml | 2 +- services/accessanalyzer/pom.xml | 2 +- services/account/pom.xml | 2 +- services/acm/pom.xml | 2 +- 
services/acmpca/pom.xml | 2 +- services/alexaforbusiness/pom.xml | 2 +- services/amp/pom.xml | 2 +- services/amplify/pom.xml | 2 +- services/amplifybackend/pom.xml | 2 +- services/amplifyuibuilder/pom.xml | 2 +- services/apigateway/pom.xml | 2 +- services/apigatewaymanagementapi/pom.xml | 2 +- services/apigatewayv2/pom.xml | 2 +- services/appconfig/pom.xml | 2 +- services/appconfigdata/pom.xml | 2 +- services/appflow/pom.xml | 2 +- services/appintegrations/pom.xml | 2 +- services/applicationautoscaling/pom.xml | 2 +- services/applicationcostprofiler/pom.xml | 2 +- services/applicationdiscovery/pom.xml | 2 +- services/applicationinsights/pom.xml | 2 +- services/appmesh/pom.xml | 2 +- services/apprunner/pom.xml | 2 +- services/appstream/pom.xml | 2 +- services/appsync/pom.xml | 2 +- services/arczonalshift/pom.xml | 2 +- services/athena/pom.xml | 2 +- services/auditmanager/pom.xml | 2 +- services/autoscaling/pom.xml | 2 +- services/autoscalingplans/pom.xml | 2 +- services/backup/pom.xml | 2 +- services/backupgateway/pom.xml | 2 +- services/backupstorage/pom.xml | 2 +- services/batch/pom.xml | 2 +- services/billingconductor/pom.xml | 2 +- services/braket/pom.xml | 2 +- services/budgets/pom.xml | 2 +- services/chime/pom.xml | 2 +- services/chimesdkidentity/pom.xml | 2 +- services/chimesdkmediapipelines/pom.xml | 2 +- services/chimesdkmeetings/pom.xml | 2 +- services/chimesdkmessaging/pom.xml | 2 +- services/chimesdkvoice/pom.xml | 2 +- services/cleanrooms/pom.xml | 2 +- services/cloud9/pom.xml | 2 +- services/cloudcontrol/pom.xml | 2 +- services/clouddirectory/pom.xml | 2 +- services/cloudformation/pom.xml | 2 +- services/cloudfront/pom.xml | 2 +- services/cloudhsm/pom.xml | 2 +- services/cloudhsmv2/pom.xml | 2 +- services/cloudsearch/pom.xml | 2 +- services/cloudsearchdomain/pom.xml | 2 +- services/cloudtrail/pom.xml | 2 +- services/cloudtraildata/pom.xml | 2 +- services/cloudwatch/pom.xml | 2 +- services/cloudwatchevents/pom.xml | 2 +- services/cloudwatchlogs/pom.xml 
| 2 +- services/codeartifact/pom.xml | 2 +- services/codebuild/pom.xml | 2 +- services/codecatalyst/pom.xml | 2 +- services/codecommit/pom.xml | 2 +- services/codedeploy/pom.xml | 2 +- services/codeguruprofiler/pom.xml | 2 +- services/codegurureviewer/pom.xml | 2 +- services/codepipeline/pom.xml | 2 +- services/codestar/pom.xml | 2 +- services/codestarconnections/pom.xml | 2 +- services/codestarnotifications/pom.xml | 2 +- services/cognitoidentity/pom.xml | 2 +- services/cognitoidentityprovider/pom.xml | 2 +- services/cognitosync/pom.xml | 2 +- services/comprehend/pom.xml | 2 +- services/comprehendmedical/pom.xml | 2 +- services/computeoptimizer/pom.xml | 2 +- services/config/pom.xml | 2 +- services/connect/pom.xml | 2 +- services/connectcampaigns/pom.xml | 2 +- services/connectcases/pom.xml | 2 +- services/connectcontactlens/pom.xml | 2 +- services/connectparticipant/pom.xml | 2 +- services/controltower/pom.xml | 2 +- services/costandusagereport/pom.xml | 2 +- services/costexplorer/pom.xml | 2 +- services/customerprofiles/pom.xml | 2 +- services/databasemigration/pom.xml | 2 +- services/databrew/pom.xml | 2 +- services/dataexchange/pom.xml | 2 +- services/datapipeline/pom.xml | 2 +- services/datasync/pom.xml | 2 +- services/dax/pom.xml | 2 +- services/detective/pom.xml | 2 +- services/devicefarm/pom.xml | 2 +- services/devopsguru/pom.xml | 2 +- services/directconnect/pom.xml | 2 +- services/directory/pom.xml | 2 +- services/dlm/pom.xml | 2 +- services/docdb/pom.xml | 2 +- services/docdbelastic/pom.xml | 2 +- services/drs/pom.xml | 2 +- services/dynamodb/pom.xml | 2 +- services/ebs/pom.xml | 2 +- services/ec2/pom.xml | 2 +- services/ec2instanceconnect/pom.xml | 2 +- services/ecr/pom.xml | 2 +- services/ecrpublic/pom.xml | 2 +- services/ecs/pom.xml | 2 +- services/efs/pom.xml | 2 +- services/eks/pom.xml | 2 +- services/elasticache/pom.xml | 2 +- services/elasticbeanstalk/pom.xml | 2 +- services/elasticinference/pom.xml | 2 +- services/elasticloadbalancing/pom.xml | 
2 +- services/elasticloadbalancingv2/pom.xml | 2 +- services/elasticsearch/pom.xml | 2 +- services/elastictranscoder/pom.xml | 2 +- services/emr/pom.xml | 2 +- services/emrcontainers/pom.xml | 2 +- services/emrserverless/pom.xml | 2 +- services/eventbridge/pom.xml | 2 +- services/evidently/pom.xml | 2 +- services/finspace/pom.xml | 2 +- services/finspacedata/pom.xml | 2 +- services/firehose/pom.xml | 2 +- services/fis/pom.xml | 2 +- services/fms/pom.xml | 2 +- services/forecast/pom.xml | 2 +- services/forecastquery/pom.xml | 2 +- services/frauddetector/pom.xml | 2 +- services/fsx/pom.xml | 2 +- services/gamelift/pom.xml | 2 +- services/gamesparks/pom.xml | 2 +- services/glacier/pom.xml | 2 +- services/globalaccelerator/pom.xml | 2 +- services/glue/pom.xml | 2 +- services/grafana/pom.xml | 2 +- services/greengrass/pom.xml | 2 +- services/greengrassv2/pom.xml | 2 +- services/groundstation/pom.xml | 2 +- services/guardduty/pom.xml | 2 +- services/health/pom.xml | 2 +- services/healthlake/pom.xml | 2 +- services/honeycode/pom.xml | 2 +- services/iam/pom.xml | 2 +- services/identitystore/pom.xml | 2 +- services/imagebuilder/pom.xml | 2 +- services/inspector/pom.xml | 2 +- services/inspector2/pom.xml | 2 +- services/internetmonitor/pom.xml | 2 +- services/iot/pom.xml | 2 +- services/iot1clickdevices/pom.xml | 2 +- services/iot1clickprojects/pom.xml | 2 +- services/iotanalytics/pom.xml | 2 +- services/iotdataplane/pom.xml | 2 +- services/iotdeviceadvisor/pom.xml | 2 +- services/iotevents/pom.xml | 2 +- services/ioteventsdata/pom.xml | 2 +- services/iotfleethub/pom.xml | 2 +- services/iotfleetwise/pom.xml | 2 +- services/iotjobsdataplane/pom.xml | 2 +- services/iotroborunner/pom.xml | 2 +- services/iotsecuretunneling/pom.xml | 2 +- services/iotsitewise/pom.xml | 2 +- services/iotthingsgraph/pom.xml | 2 +- services/iottwinmaker/pom.xml | 2 +- services/iotwireless/pom.xml | 2 +- services/ivs/pom.xml | 2 +- services/ivschat/pom.xml | 2 +- services/ivsrealtime/pom.xml | 2 +- 
services/kafka/pom.xml | 2 +- services/kafkaconnect/pom.xml | 2 +- services/kendra/pom.xml | 2 +- services/kendraranking/pom.xml | 2 +- services/keyspaces/pom.xml | 2 +- services/kinesis/pom.xml | 2 +- services/kinesisanalytics/pom.xml | 2 +- services/kinesisanalyticsv2/pom.xml | 2 +- services/kinesisvideo/pom.xml | 2 +- services/kinesisvideoarchivedmedia/pom.xml | 2 +- services/kinesisvideomedia/pom.xml | 2 +- services/kinesisvideosignaling/pom.xml | 2 +- services/kinesisvideowebrtcstorage/pom.xml | 2 +- services/kms/pom.xml | 2 +- services/lakeformation/pom.xml | 2 +- services/lambda/pom.xml | 2 +- services/lexmodelbuilding/pom.xml | 2 +- services/lexmodelsv2/pom.xml | 2 +- services/lexruntime/pom.xml | 2 +- services/lexruntimev2/pom.xml | 2 +- services/licensemanager/pom.xml | 2 +- .../licensemanagerlinuxsubscriptions/pom.xml | 2 +- .../licensemanagerusersubscriptions/pom.xml | 2 +- services/lightsail/pom.xml | 2 +- services/location/pom.xml | 2 +- services/lookoutequipment/pom.xml | 2 +- services/lookoutmetrics/pom.xml | 2 +- services/lookoutvision/pom.xml | 2 +- services/m2/pom.xml | 2 +- services/machinelearning/pom.xml | 2 +- services/macie/pom.xml | 2 +- services/macie2/pom.xml | 2 +- services/managedblockchain/pom.xml | 2 +- services/marketplacecatalog/pom.xml | 2 +- services/marketplacecommerceanalytics/pom.xml | 2 +- services/marketplaceentitlement/pom.xml | 2 +- services/marketplacemetering/pom.xml | 2 +- services/mediaconnect/pom.xml | 2 +- services/mediaconvert/pom.xml | 2 +- services/medialive/pom.xml | 2 +- services/mediapackage/pom.xml | 2 +- services/mediapackagev2/pom.xml | 2 +- services/mediapackagevod/pom.xml | 2 +- services/mediastore/pom.xml | 2 +- services/mediastoredata/pom.xml | 2 +- services/mediatailor/pom.xml | 2 +- services/memorydb/pom.xml | 2 +- services/mgn/pom.xml | 2 +- services/migrationhub/pom.xml | 2 +- services/migrationhubconfig/pom.xml | 2 +- services/migrationhuborchestrator/pom.xml | 2 +- 
services/migrationhubrefactorspaces/pom.xml | 2 +- services/migrationhubstrategy/pom.xml | 2 +- services/mobile/pom.xml | 2 +- services/mq/pom.xml | 2 +- services/mturk/pom.xml | 2 +- services/mwaa/pom.xml | 2 +- services/neptune/pom.xml | 2 +- services/networkfirewall/pom.xml | 2 +- services/networkmanager/pom.xml | 2 +- services/nimble/pom.xml | 2 +- services/oam/pom.xml | 2 +- services/omics/pom.xml | 2 +- services/opensearch/pom.xml | 2 +- services/opensearchserverless/pom.xml | 2 +- services/opsworks/pom.xml | 2 +- services/opsworkscm/pom.xml | 2 +- services/organizations/pom.xml | 2 +- services/osis/pom.xml | 2 +- services/outposts/pom.xml | 2 +- services/panorama/pom.xml | 2 +- services/personalize/pom.xml | 2 +- services/personalizeevents/pom.xml | 2 +- services/personalizeruntime/pom.xml | 2 +- services/pi/pom.xml | 2 +- services/pinpoint/pom.xml | 2 +- services/pinpointemail/pom.xml | 2 +- services/pinpointsmsvoice/pom.xml | 2 +- services/pinpointsmsvoicev2/pom.xml | 2 +- services/pipes/pom.xml | 2 +- services/polly/pom.xml | 2 +- services/pom.xml | 2 +- services/pricing/pom.xml | 2 +- services/privatenetworks/pom.xml | 2 +- services/proton/pom.xml | 2 +- services/qldb/pom.xml | 2 +- services/qldbsession/pom.xml | 2 +- services/quicksight/pom.xml | 2 +- services/ram/pom.xml | 2 +- services/rbin/pom.xml | 2 +- services/rds/pom.xml | 2 +- services/rdsdata/pom.xml | 2 +- services/redshift/pom.xml | 2 +- services/redshiftdata/pom.xml | 2 +- services/redshiftserverless/pom.xml | 2 +- services/rekognition/pom.xml | 2 +- services/resiliencehub/pom.xml | 2 +- services/resourceexplorer2/pom.xml | 2 +- services/resourcegroups/pom.xml | 2 +- services/resourcegroupstaggingapi/pom.xml | 2 +- services/robomaker/pom.xml | 2 +- services/rolesanywhere/pom.xml | 2 +- services/route53/pom.xml | 2 +- services/route53domains/pom.xml | 2 +- services/route53recoverycluster/pom.xml | 2 +- services/route53recoverycontrolconfig/pom.xml | 2 +- 
services/route53recoveryreadiness/pom.xml | 2 +- services/route53resolver/pom.xml | 2 +- services/rum/pom.xml | 2 +- services/s3/pom.xml | 2 +- services/s3control/pom.xml | 2 +- services/s3outposts/pom.xml | 2 +- services/sagemaker/pom.xml | 2 +- services/sagemakera2iruntime/pom.xml | 2 +- services/sagemakeredge/pom.xml | 2 +- services/sagemakerfeaturestoreruntime/pom.xml | 2 +- services/sagemakergeospatial/pom.xml | 2 +- services/sagemakermetrics/pom.xml | 2 +- services/sagemakerruntime/pom.xml | 2 +- services/savingsplans/pom.xml | 2 +- services/scheduler/pom.xml | 2 +- services/schemas/pom.xml | 2 +- services/secretsmanager/pom.xml | 2 +- services/securityhub/pom.xml | 2 +- services/securitylake/pom.xml | 2 +- .../serverlessapplicationrepository/pom.xml | 2 +- services/servicecatalog/pom.xml | 2 +- services/servicecatalogappregistry/pom.xml | 2 +- services/servicediscovery/pom.xml | 2 +- services/servicequotas/pom.xml | 2 +- services/ses/pom.xml | 2 +- services/sesv2/pom.xml | 2 +- services/sfn/pom.xml | 2 +- services/shield/pom.xml | 2 +- services/signer/pom.xml | 2 +- services/simspaceweaver/pom.xml | 2 +- services/sms/pom.xml | 2 +- services/snowball/pom.xml | 2 +- services/snowdevicemanagement/pom.xml | 2 +- services/sns/pom.xml | 2 +- services/sqs/pom.xml | 2 +- services/ssm/pom.xml | 2 +- services/ssmcontacts/pom.xml | 2 +- services/ssmincidents/pom.xml | 2 +- services/ssmsap/pom.xml | 2 +- services/sso/pom.xml | 2 +- services/ssoadmin/pom.xml | 2 +- services/ssooidc/pom.xml | 2 +- services/storagegateway/pom.xml | 2 +- services/sts/pom.xml | 2 +- services/support/pom.xml | 2 +- services/supportapp/pom.xml | 2 +- services/swf/pom.xml | 2 +- services/synthetics/pom.xml | 2 +- services/textract/pom.xml | 2 +- services/timestreamquery/pom.xml | 2 +- services/timestreamwrite/pom.xml | 2 +- services/tnb/pom.xml | 2 +- services/transcribe/pom.xml | 2 +- services/transcribestreaming/pom.xml | 2 +- services/transfer/pom.xml | 2 +- services/translate/pom.xml | 2 +- 
services/voiceid/pom.xml | 2 +- services/vpclattice/pom.xml | 2 +- services/waf/pom.xml | 2 +- services/wafv2/pom.xml | 2 +- services/wellarchitected/pom.xml | 2 +- services/wisdom/pom.xml | 2 +- services/workdocs/pom.xml | 2 +- services/worklink/pom.xml | 2 +- services/workmail/pom.xml | 2 +- services/workmailmessageflow/pom.xml | 2 +- services/workspaces/pom.xml | 2 +- services/workspacesweb/pom.xml | 2 +- services/xray/pom.xml | 2 +- test/auth-tests/pom.xml | 2 +- test/codegen-generated-classes-test/pom.xml | 2 +- test/http-client-tests/pom.xml | 2 +- test/module-path-tests/pom.xml | 2 +- test/protocol-tests-core/pom.xml | 2 +- test/protocol-tests/pom.xml | 2 +- test/region-testing/pom.xml | 2 +- test/ruleset-testing-core/pom.xml | 2 +- test/s3-benchmarks/pom.xml | 2 +- test/sdk-benchmarks/pom.xml | 2 +- test/sdk-native-image-test/pom.xml | 2 +- test/service-test-utils/pom.xml | 2 +- test/stability-tests/pom.xml | 2 +- test/test-utils/pom.xml | 2 +- test/tests-coverage-reporting/pom.xml | 2 +- third-party/pom.xml | 2 +- third-party/third-party-jackson-core/pom.xml | 2 +- .../pom.xml | 2 +- utils/pom.xml | 2 +- 415 files changed, 468 insertions(+), 441 deletions(-) create mode 100644 .changes/2.20.78.json delete mode 100644 .changes/next-release/feature-AWSCloudTrail-e5964bb.json delete mode 100644 .changes/next-release/feature-AWSSDKforJavav2-0443982.json delete mode 100644 .changes/next-release/feature-AWSWAFV2-898b858.json delete mode 100644 .changes/next-release/feature-AmazonAthena-2e152b9.json delete mode 100644 .changes/next-release/feature-AmazonSageMakerService-c559731.json diff --git a/.changes/2.20.78.json b/.changes/2.20.78.json new file mode 100644 index 000000000000..da0c2eac15b5 --- /dev/null +++ b/.changes/2.20.78.json @@ -0,0 +1,36 @@ +{ + "version": "2.20.78", + "date": "2023-06-02", + "entries": [ + { + "type": "feature", + "category": "AWS CloudTrail", + "contributor": "", + "description": "This feature allows users to start and stop event 
ingestion on a CloudTrail Lake event data store." + }, + { + "type": "feature", + "category": "AWS WAFV2", + "contributor": "", + "description": "Added APIs to describe managed products. The APIs retrieve information about rule groups that are managed by AWS and by AWS Marketplace sellers." + }, + { + "type": "feature", + "category": "Amazon Athena", + "contributor": "", + "description": "This release introduces the DeleteCapacityReservation API and the ability to manage capacity reservations using CloudFormation" + }, + { + "type": "feature", + "category": "Amazon SageMaker Service", + "contributor": "", + "description": "This release adds Selective Execution feature that allows SageMaker Pipelines users to run selected steps in a pipeline." + }, + { + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated endpoint and partition metadata." + } + ] +} \ No newline at end of file diff --git a/.changes/next-release/feature-AWSCloudTrail-e5964bb.json b/.changes/next-release/feature-AWSCloudTrail-e5964bb.json deleted file mode 100644 index 4c7d7628b8f4..000000000000 --- a/.changes/next-release/feature-AWSCloudTrail-e5964bb.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS CloudTrail", - "contributor": "", - "description": "This feature allows users to start and stop event ingestion on a CloudTrail Lake event data store." -} diff --git a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json deleted file mode 100644 index e5b5ee3ca5e3..000000000000 --- a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS SDK for Java v2", - "contributor": "", - "description": "Updated endpoint and partition metadata." 
-} diff --git a/.changes/next-release/feature-AWSWAFV2-898b858.json b/.changes/next-release/feature-AWSWAFV2-898b858.json deleted file mode 100644 index 770c283fbe94..000000000000 --- a/.changes/next-release/feature-AWSWAFV2-898b858.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS WAFV2", - "contributor": "", - "description": "Added APIs to describe managed products. The APIs retrieve information about rule groups that are managed by AWS and by AWS Marketplace sellers." -} diff --git a/.changes/next-release/feature-AmazonAthena-2e152b9.json b/.changes/next-release/feature-AmazonAthena-2e152b9.json deleted file mode 100644 index ae469c3e42cf..000000000000 --- a/.changes/next-release/feature-AmazonAthena-2e152b9.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Athena", - "contributor": "", - "description": "This release introduces the DeleteCapacityReservation API and the ability to manage capacity reservations using CloudFormation" -} diff --git a/.changes/next-release/feature-AmazonSageMakerService-c559731.json b/.changes/next-release/feature-AmazonSageMakerService-c559731.json deleted file mode 100644 index 6d5f5f611eef..000000000000 --- a/.changes/next-release/feature-AmazonSageMakerService-c559731.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon SageMaker Service", - "contributor": "", - "description": "This release adds Selective Execution feature that allows SageMaker Pipelines users to run selected steps in a pipeline." -} diff --git a/CHANGELOG.md b/CHANGELOG.md index ffaa5391a9fc..3796ff70d14c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,24 @@ +# __2.20.78__ __2023-06-02__ +## __AWS CloudTrail__ + - ### Features + - This feature allows users to start and stop event ingestion on a CloudTrail Lake event data store. + +## __AWS SDK for Java v2__ + - ### Features + - Updated endpoint and partition metadata. 
+ +## __AWS WAFV2__ + - ### Features + - Added APIs to describe managed products. The APIs retrieve information about rule groups that are managed by AWS and by AWS Marketplace sellers. + +## __Amazon Athena__ + - ### Features + - This release introduces the DeleteCapacityReservation API and the ability to manage capacity reservations using CloudFormation + +## __Amazon SageMaker Service__ + - ### Features + - This release adds Selective Execution feature that allows SageMaker Pipelines users to run selected steps in a pipeline. + # __2.20.77__ __2023-06-01__ ## __AWS SDK for Java v2__ - ### Features diff --git a/README.md b/README.md index 0f41d0812cf3..09d45fc66de5 100644 --- a/README.md +++ b/README.md @@ -52,7 +52,7 @@ To automatically manage module versions (currently all modules have the same verCreate a new FinSpace environment.
" }, + "CreateKxChangeset":{ + "name":"CreateKxChangeset", + "http":{ + "method":"POST", + "requestUri":"/kx/environments/{environmentId}/databases/{databaseName}/changesets" + }, + "input":{"shape":"CreateKxChangesetRequest"}, + "output":{"shape":"CreateKxChangesetResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ConflictException"}, + {"shape":"LimitExceededException"} + ], + "documentation":"Creates a changeset for a kdb database. A changeset allows you to add and delete existing files by using an ordered list of change requests.
" + }, + "CreateKxCluster":{ + "name":"CreateKxCluster", + "http":{ + "method":"POST", + "requestUri":"/kx/environments/{environmentId}/clusters" + }, + "input":{"shape":"CreateKxClusterRequest"}, + "output":{"shape":"CreateKxClusterResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"LimitExceededException"}, + {"shape":"ConflictException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"} + ], + "documentation":"Creates a new kdb cluster.
" + }, + "CreateKxDatabase":{ + "name":"CreateKxDatabase", + "http":{ + "method":"POST", + "requestUri":"/kx/environments/{environmentId}/databases" + }, + "input":{"shape":"CreateKxDatabaseRequest"}, + "output":{"shape":"CreateKxDatabaseResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ConflictException"}, + {"shape":"ResourceAlreadyExistsException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"LimitExceededException"} + ], + "documentation":"Creates a new kdb database in the environment.
" + }, + "CreateKxEnvironment":{ + "name":"CreateKxEnvironment", + "http":{ + "method":"POST", + "requestUri":"/kx/environments" + }, + "input":{"shape":"CreateKxEnvironmentRequest"}, + "output":{"shape":"CreateKxEnvironmentResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"LimitExceededException"}, + {"shape":"ConflictException"} + ], + "documentation":"Creates a managed kdb environment for the account.
" + }, + "CreateKxUser":{ + "name":"CreateKxUser", + "http":{ + "method":"POST", + "requestUri":"/kx/environments/{environmentId}/users" + }, + "input":{"shape":"CreateKxUserRequest"}, + "output":{"shape":"CreateKxUserResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceAlreadyExistsException"}, + {"shape":"LimitExceededException"}, + {"shape":"ConflictException"} + ], + "documentation":"Creates a user in FinSpace kdb environment with an associated IAM role.
" + }, "DeleteEnvironment":{ "name":"DeleteEnvironment", "http":{ @@ -48,6 +146,77 @@ ], "documentation":"Delete an FinSpace environment.
" }, + "DeleteKxCluster":{ + "name":"DeleteKxCluster", + "http":{ + "method":"DELETE", + "requestUri":"/kx/environments/{environmentId}/clusters/{clusterName}" + }, + "input":{"shape":"DeleteKxClusterRequest"}, + "output":{"shape":"DeleteKxClusterResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"LimitExceededException"}, + {"shape":"ConflictException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Deletes a kdb cluster.
" + }, + "DeleteKxDatabase":{ + "name":"DeleteKxDatabase", + "http":{ + "method":"DELETE", + "requestUri":"/kx/environments/{environmentId}/databases/{databaseName}" + }, + "input":{"shape":"DeleteKxDatabaseRequest"}, + "output":{"shape":"DeleteKxDatabaseResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ConflictException"} + ], + "documentation":"Deletes the specified database and all of its associated data. This action is irreversible. You must copy any data out of the database before deleting it if the data is to be retained.
" + }, + "DeleteKxEnvironment":{ + "name":"DeleteKxEnvironment", + "http":{ + "method":"DELETE", + "requestUri":"/kx/environments/{environmentId}" + }, + "input":{"shape":"DeleteKxEnvironmentRequest"}, + "output":{"shape":"DeleteKxEnvironmentResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"ValidationException"} + ], + "documentation":"Deletes the kdb environment. This action is irreversible. Deleting a kdb environment will remove all the associated data and any services running in it.
" + }, + "DeleteKxUser":{ + "name":"DeleteKxUser", + "http":{ + "method":"DELETE", + "requestUri":"/kx/environments/{environmentId}/users/{userName}" + }, + "input":{"shape":"DeleteKxUserRequest"}, + "output":{"shape":"DeleteKxUserResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ], + "documentation":"Deletes a user in the specified kdb environment.
" + }, "GetEnvironment":{ "name":"GetEnvironment", "http":{ @@ -64,6 +233,109 @@ ], "documentation":"Returns the FinSpace environment object.
" }, + "GetKxChangeset":{ + "name":"GetKxChangeset", + "http":{ + "method":"GET", + "requestUri":"/kx/environments/{environmentId}/databases/{databaseName}/changesets/{changesetId}" + }, + "input":{"shape":"GetKxChangesetRequest"}, + "output":{"shape":"GetKxChangesetResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Returns information about a kdb changeset.
" + }, + "GetKxCluster":{ + "name":"GetKxCluster", + "http":{ + "method":"GET", + "requestUri":"/kx/environments/{environmentId}/clusters/{clusterName}" + }, + "input":{"shape":"GetKxClusterRequest"}, + "output":{"shape":"GetKxClusterResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"LimitExceededException"}, + {"shape":"ConflictException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Retrieves information about a kdb cluster.
" + }, + "GetKxConnectionString":{ + "name":"GetKxConnectionString", + "http":{ + "method":"GET", + "requestUri":"/kx/environments/{environmentId}/connectionString" + }, + "input":{"shape":"GetKxConnectionStringRequest"}, + "output":{"shape":"GetKxConnectionStringResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ], + "documentation":"Retrieves a connection string for a user to connect to a kdb cluster. You must call this API using the same role that you have defined while creating a user.
" + }, + "GetKxDatabase":{ + "name":"GetKxDatabase", + "http":{ + "method":"GET", + "requestUri":"/kx/environments/{environmentId}/databases/{databaseName}" + }, + "input":{"shape":"GetKxDatabaseRequest"}, + "output":{"shape":"GetKxDatabaseResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Returns database information for the specified environment ID.
" + }, + "GetKxEnvironment":{ + "name":"GetKxEnvironment", + "http":{ + "method":"GET", + "requestUri":"/kx/environments/{environmentId}" + }, + "input":{"shape":"GetKxEnvironmentRequest"}, + "output":{"shape":"GetKxEnvironmentResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"} + ], + "documentation":"Retrieves all the information for the specified kdb environment.
" + }, + "GetKxUser":{ + "name":"GetKxUser", + "http":{ + "method":"GET", + "requestUri":"/kx/environments/{environmentId}/users/{userName}" + }, + "input":{"shape":"GetKxUserRequest"}, + "output":{"shape":"GetKxUserResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ], + "documentation":"Retrieves information about the specified kdb user.
" + }, "ListEnvironments":{ "name":"ListEnvironments", "http":{ @@ -78,6 +350,108 @@ ], "documentation":"A list of all of your FinSpace environments.
" }, + "ListKxChangesets":{ + "name":"ListKxChangesets", + "http":{ + "method":"GET", + "requestUri":"/kx/environments/{environmentId}/databases/{databaseName}/changesets" + }, + "input":{"shape":"ListKxChangesetsRequest"}, + "output":{"shape":"ListKxChangesetsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Returns a list of all the changesets for a database.
" + }, + "ListKxClusterNodes":{ + "name":"ListKxClusterNodes", + "http":{ + "method":"GET", + "requestUri":"/kx/environments/{environmentId}/clusters/{clusterName}/nodes" + }, + "input":{"shape":"ListKxClusterNodesRequest"}, + "output":{"shape":"ListKxClusterNodesResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"LimitExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Lists all the nodes in a kdb cluster.
" + }, + "ListKxClusters":{ + "name":"ListKxClusters", + "http":{ + "method":"GET", + "requestUri":"/kx/environments/{environmentId}/clusters" + }, + "input":{"shape":"ListKxClustersRequest"}, + "output":{"shape":"ListKxClustersResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"LimitExceededException"}, + {"shape":"ConflictException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Returns a list of clusters.
" + }, + "ListKxDatabases":{ + "name":"ListKxDatabases", + "http":{ + "method":"GET", + "requestUri":"/kx/environments/{environmentId}/databases" + }, + "input":{"shape":"ListKxDatabasesRequest"}, + "output":{"shape":"ListKxDatabasesResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Returns a list of all the databases in the kdb environment.
" + }, + "ListKxEnvironments":{ + "name":"ListKxEnvironments", + "http":{ + "method":"GET", + "requestUri":"/kx/environments" + }, + "input":{"shape":"ListKxEnvironmentsRequest"}, + "output":{"shape":"ListKxEnvironmentsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ], + "documentation":"Returns a list of kdb environments created in an account.
" + }, + "ListKxUsers":{ + "name":"ListKxUsers", + "http":{ + "method":"GET", + "requestUri":"/kx/environments/{environmentId}/users" + }, + "input":{"shape":"ListKxUsersRequest"}, + "output":{"shape":"ListKxUsersResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ValidationException"} + ], + "documentation":"Lists all the users in a kdb environment.
" + }, "ListTagsForResource":{ "name":"ListTagsForResource", "http":{ @@ -139,25 +513,275 @@ {"shape":"ValidationException"} ], "documentation":"Update your FinSpace environment.
" - } - }, - "shapes":{ - "AccessDeniedException":{ - "type":"structure", - "members":{ - }, - "documentation":"You do not have sufficient access to perform this action.
", - "error":{"httpStatusCode":403}, - "exception":true }, - "AttributeMap":{ - "type":"map", - "key":{"shape":"FederationAttributeKey"}, - "value":{"shape":"url"} + "UpdateKxClusterDatabases":{ + "name":"UpdateKxClusterDatabases", + "http":{ + "method":"PUT", + "requestUri":"/kx/environments/{environmentId}/clusters/{clusterName}/configuration/databases" + }, + "input":{"shape":"UpdateKxClusterDatabasesRequest"}, + "output":{"shape":"UpdateKxClusterDatabasesResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"LimitExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Updates the databases mounted on a kdb cluster, which includes the changesetId and all the dbPaths to be cached. This API does not allow you to change a database name or add a database if you created a cluster without one.
Using this API you can point a cluster to a different changeset and modify a list of partitions being cached.
" }, - "CreateEnvironmentRequest":{ - "type":"structure", - "required":["name"], + "UpdateKxDatabase":{ + "name":"UpdateKxDatabase", + "http":{ + "method":"PUT", + "requestUri":"/kx/environments/{environmentId}/databases/{databaseName}" + }, + "input":{"shape":"UpdateKxDatabaseRequest"}, + "output":{"shape":"UpdateKxDatabaseResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ConflictException"} + ], + "documentation":"Updates information for the given kdb database.
" + }, + "UpdateKxEnvironment":{ + "name":"UpdateKxEnvironment", + "http":{ + "method":"PUT", + "requestUri":"/kx/environments/{environmentId}" + }, + "input":{"shape":"UpdateKxEnvironmentRequest"}, + "output":{"shape":"UpdateKxEnvironmentResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"} + ], + "documentation":"Updates information for the given kdb environment.
" + }, + "UpdateKxEnvironmentNetwork":{ + "name":"UpdateKxEnvironmentNetwork", + "http":{ + "method":"PUT", + "requestUri":"/kx/environments/{environmentId}/network" + }, + "input":{"shape":"UpdateKxEnvironmentNetworkRequest"}, + "output":{"shape":"UpdateKxEnvironmentNetworkResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"} + ], + "documentation":"Updates environment network to connect to your internal network by using a transit gateway. This API supports request to create a transit gateway attachment from FinSpace VPC to your transit gateway ID and create a custom Route-53 outbound resolvers.
Once you send a request to update a network, you cannot change it again. Network update might require termination of any clusters that are running in the existing network.
" + }, + "UpdateKxUser":{ + "name":"UpdateKxUser", + "http":{ + "method":"PUT", + "requestUri":"/kx/environments/{environmentId}/users/{userName}" + }, + "input":{"shape":"UpdateKxUserRequest"}, + "output":{"shape":"UpdateKxUserResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"}, + {"shape":"LimitExceededException"}, + {"shape":"ConflictException"} + ], + "documentation":"Updates the user details. You can only update the IAM role associated with a user.
" + } + }, + "shapes":{ + "AccessDeniedException":{ + "type":"structure", + "members":{ + }, + "documentation":"You do not have sufficient access to perform this action.
", + "error":{"httpStatusCode":403}, + "exception":true + }, + "AttributeMap":{ + "type":"map", + "key":{"shape":"FederationAttributeKey"}, + "value":{"shape":"FederationAttributeValue"} + }, + "AutoScalingConfiguration":{ + "type":"structure", + "members":{ + "minNodeCount":{ + "shape":"NodeCount", + "documentation":"The lowest number of nodes to scale. This value must be at least 1 and less than the maxNodeCount. If the nodes in a cluster belong to multiple availability zones, then minNodeCount must be at least 3.
The highest number of nodes to scale. This value cannot be greater than 5.
" + }, + "autoScalingMetric":{ + "shape":"AutoScalingMetric", + "documentation":" The metric your cluster will track in order to scale in and out. For example, CPU_UTILIZATION_PERCENTAGE is the average CPU usage across all the nodes in a cluster.
The desired value of the chosen autoScalingMetric. When the metric drops below this value, the cluster will scale in. When the metric goes above this value, the cluster will scale out. You can set the target value between 1 and 100 percent.
The duration in seconds that FinSpace will wait after a scale in event before initiating another scaling event.
" + }, + "scaleOutCooldownSeconds":{ + "shape":"CooldownTime", + "documentation":"The duration in seconds that FinSpace will wait after a scale out event before initiating another scaling event.
" + } + }, + "documentation":"The configuration based on which FinSpace will scale in or scale out nodes in your cluster.
" + }, + "AutoScalingMetric":{ + "type":"string", + "enum":["CPU_UTILIZATION_PERCENTAGE"] + }, + "AutoScalingMetricTarget":{ + "type":"double", + "max":100, + "min":1 + }, + "AvailabilityZoneId":{"type":"string"}, + "AvailabilityZoneIds":{ + "type":"list", + "member":{"shape":"AvailabilityZoneId"} + }, + "BoxedInteger":{ + "type":"integer", + "box":true + }, + "CapacityConfiguration":{ + "type":"structure", + "members":{ + "nodeType":{ + "shape":"NodeType", + "documentation":"The type that determines the hardware of the host computer used for your cluster instance. Each node type offers different memory and storage capabilities. Choose a node type based on the requirements of the application or software that you plan to run on your instance.
You can only specify one of the following values:
kx.s.large – The node type with a configuration of 12 GiB memory and 2 vCPUs.
kx.s.xlarge – The node type with a configuration of 27 GiB memory and 4 vCPUs.
kx.s.2xlarge – The node type with a configuration of 54 GiB memory and 8 vCPUs.
kx.s.4xlarge – The node type with a configuration of 108 GiB memory and 16 vCPUs.
kx.s.8xlarge – The node type with a configuration of 216 GiB memory and 32 vCPUs.
kx.s.16xlarge – The node type with a configuration of 432 GiB memory and 64 vCPUs.
kx.s.32xlarge – The node type with a configuration of 864 GiB memory and 128 vCPUs.
The number of instances running in a cluster.
" + } + }, + "documentation":"A structure for the metadata of a cluster. It includes information like the CPUs needed, memory of instances, number of instances, and the port used while establishing a connection.
" + }, + "ChangeRequest":{ + "type":"structure", + "required":[ + "changeType", + "dbPath" + ], + "members":{ + "changeType":{ + "shape":"ChangeType", + "documentation":"Defines the type of change request. A changeType can have the following values:
PUT – Adds or updates files in a database.
DELETE – Deletes files in a database.
Defines the S3 path of the source file that is required to add or update files in a database.
" + }, + "dbPath":{ + "shape":"DbPath", + "documentation":"Defines the path within the database directory.
" + } + }, + "documentation":"A list of change request objects.
" + }, + "ChangeRequests":{ + "type":"list", + "member":{"shape":"ChangeRequest"}, + "max":32, + "min":1 + }, + "ChangeType":{ + "type":"string", + "enum":[ + "PUT", + "DELETE" + ] + }, + "ChangesetId":{ + "type":"string", + "max":26, + "min":1 + }, + "ChangesetStatus":{ + "type":"string", + "enum":[ + "PENDING", + "PROCESSING", + "FAILED", + "COMPLETED" + ] + }, + "ClientToken":{ + "type":"string", + "max":36, + "min":1, + "pattern":".*\\S.*" + }, + "ClientTokenString":{ + "type":"string", + "max":64, + "min":1, + "pattern":"^[a-zA-Z0-9-]+$" + }, + "CodeConfiguration":{ + "type":"structure", + "members":{ + "s3Bucket":{ + "shape":"S3Bucket", + "documentation":"A unique name for the S3 bucket.
" + }, + "s3Key":{ + "shape":"S3Key", + "documentation":"The full S3 path (excluding bucket) to the .zip file. This file contains the code that is loaded onto the cluster when it's started.
" + }, + "s3ObjectVersion":{ + "shape":"S3ObjectVersion", + "documentation":"The version of an S3 object.
" + } + }, + "documentation":"The structure of the customer code available within the running cluster.
" + }, + "ConflictException":{ + "type":"structure", + "members":{ + "message":{"shape":"errorMessage"}, + "reason":{ + "shape":"errorMessage", + "documentation":"The reason for the conflict exception.
" + } + }, + "documentation":"There was a conflict with this action, and it could not be completed.
", + "error":{"httpStatusCode":409}, + "exception":true + }, + "CooldownTime":{ + "type":"double", + "max":100000, + "min":0 + }, + "CreateEnvironmentRequest":{ + "type":"structure", + "required":["name"], "members":{ "name":{ "shape":"EnvironmentName", @@ -206,235 +830,1742 @@ }, "environmentUrl":{ "shape":"url", - "documentation":"The sign-in url for the web application of the FinSpace environment you created.
" + "documentation":"The sign-in URL for the web application of the FinSpace environment you created.
" } } }, - "DataBundleArn":{ - "type":"string", - "documentation":"The Amazon Resource Name (ARN) of the data bundle.
", - "max":2048, - "min":20, - "pattern":"^arn:aws:finspace:[A-Za-z0-9_/.-]{0,63}:\\d*:data-bundle/[0-9A-Za-z_-]{1,128}$" - }, - "DataBundleArns":{ - "type":"list", - "member":{"shape":"DataBundleArn"} - }, - "DeleteEnvironmentRequest":{ + "CreateKxChangesetRequest":{ "type":"structure", - "required":["environmentId"], + "required":[ + "environmentId", + "databaseName", + "changeRequests", + "clientToken" + ], "members":{ "environmentId":{ - "shape":"IdType", - "documentation":"The identifier for the FinSpace environment.
", + "shape":"EnvironmentId", + "documentation":"A unique identifier of the kdb environment.
", "location":"uri", "locationName":"environmentId" + }, + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database.
", + "location":"uri", + "locationName":"databaseName" + }, + "changeRequests":{ + "shape":"ChangeRequests", + "documentation":"A list of change request objects that are run in order. A change request object consists of changeType , s3Path, and a dbPath. A changeType can has the following values:
PUT – Adds or updates files in a database.
DELETE – Deletes files in a database.
All the change requests require a mandatory dbPath attribute that defines the path within the database directory. The s3Path attribute defines the S3 source file path and is required for a PUT change type.
Here is an example of how you can use the change request object:
[ { \"changeType\": \"PUT\", \"s3Path\":\"s3://bucket/db/2020.01.02/\", \"dbPath\":\"/2020.01.02/\"}, { \"changeType\": \"PUT\", \"s3Path\":\"s3://bucket/db/sym\", \"dbPath\":\"/\"}, { \"changeType\": \"DELETE\", \"dbPath\": \"/2020.01.01/\"} ]
In this example, the first request with the PUT change type adds files in the given s3Path under the 2020.01.02 partition of the database. The second request with the PUT change type adds a single sym file at the database root location. The last request with the DELETE change type deletes the files under the 2020.01.01 partition of the database.
" + }, + "clientToken":{ + "shape":"ClientTokenString", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
", + "idempotencyToken":true } } }, - "DeleteEnvironmentResponse":{ + "CreateKxChangesetResponse":{ "type":"structure", "members":{ + "changesetId":{ + "shape":"ChangesetId", + "documentation":"A unique identifier for the changeset.
" + }, + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database.
" + }, + "environmentId":{ + "shape":"EnvironmentId", + "documentation":"A unique identifier for the kdb environment.
" + }, + "changeRequests":{ + "shape":"ChangeRequests", + "documentation":"A list of change requests.
" + }, + "createdTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the changeset was created in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "lastModifiedTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the changeset was updated in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "status":{ + "shape":"ChangesetStatus", + "documentation":"Status of the changeset creation process.
Pending – Changeset creation is pending.
Processing – Changeset creation is running.
Failed – Changeset creation has failed.
Complete – Changeset creation has succeeded.
The details of the error that you receive when creating a changeset. It consists of the type of error and the error message.
" + } } }, - "Description":{ - "type":"string", - "max":1000, - "min":1, - "pattern":"^[a-zA-Z0-9. ]{1,1000}$" - }, - "EmailId":{ - "type":"string", - "max":128, - "min":1, - "pattern":"[A-Z0-9a-z._%+-]+@[A-Za-z0-9.-]+[.]+[A-Za-z]+", - "sensitive":true - }, - "Environment":{ + "CreateKxClusterRequest":{ "type":"structure", + "required":[ + "environmentId", + "clusterName", + "clusterType", + "capacityConfiguration", + "releaseLabel", + "azMode" + ], "members":{ - "name":{ - "shape":"EnvironmentName", - "documentation":"The name of the FinSpace environment.
" + "clientToken":{ + "shape":"ClientToken", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
", + "idempotencyToken":true }, "environmentId":{ - "shape":"IdType", - "documentation":"The identifier of the FinSpace environment.
" + "shape":"KxEnvironmentId", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" }, - "awsAccountId":{ - "shape":"IdType", - "documentation":"The ID of the AWS account in which the FinSpace environment is created.
" + "clusterName":{ + "shape":"KxClusterName", + "documentation":"A unique name for the cluster that you want to create.
" + }, + "clusterType":{ + "shape":"KxClusterType", + "documentation":"Specifies the type of KDB database that is being created. The following types are available:
HDB – A Historical Database. The data is only accessible with read-only permissions from one of the FinSpace managed kdb databases mounted to the cluster.
RDB – A Realtime Database. This type of database captures all the data from a ticker plant and stores it in memory until the end of day, after which it writes all of its data to a disk and reloads the HDB. This cluster type requires local storage for temporary storage of data during the savedown process. If you specify this field in your request, you must provide the savedownStorageConfiguration parameter.
GATEWAY – A gateway cluster allows you to access data across processes in kdb systems. It allows you to create your own routing logic using the initialization scripts and custom code. This type of cluster does not require writable local storage.
A list of databases that will be available for querying.
" + }, + "cacheStorageConfigurations":{ + "shape":"KxCacheStorageConfigurations", + "documentation":"The configurations for a read only cache storage associated with a cluster. This cache will be stored as an FSx Lustre that reads from the S3 store.
" + }, + "autoScalingConfiguration":{ + "shape":"AutoScalingConfiguration", + "documentation":"The configuration based on which FinSpace will scale in or scale out nodes in your cluster.
" + }, + "clusterDescription":{ + "shape":"KxClusterDescription", + "documentation":"A description of the cluster.
" + }, + "capacityConfiguration":{ + "shape":"CapacityConfiguration", + "documentation":"A structure for the metadata of a cluster. It includes information about like the CPUs needed, memory of instances, number of instances, and the port used while establishing a connection.
" + }, + "releaseLabel":{ + "shape":"ReleaseLabel", + "documentation":"The version of FinSpace managed kdb to run.
" + }, + "vpcConfiguration":{ + "shape":"VpcConfiguration", + "documentation":"Configuration details about the network where the Privatelink endpoint of the cluster resides.
" + }, + "initializationScript":{ + "shape":"InitializationScriptFilePath", + "documentation":"Specifies a Q program that will be run at launch of a cluster. It is a relative path within .zip file that contains the custom code, which will be loaded on the cluster. It must include the file name itself. For example, somedir/init.q.
Defines the key-value pairs to make them available inside the cluster.
" + }, + "code":{ + "shape":"CodeConfiguration", + "documentation":"The details of the custom code that you want to use inside a cluster when analyzing a data. It consists of the S3 source bucket, location, S3 object version, and the relative path from where the custom code is loaded into the cluster.
" + }, + "executionRole":{ + "shape":"ExecutionRoleArn", + "documentation":"An IAM role that defines a set of permissions associated with a cluster. These permissions are assumed when a cluster attempts to access another cluster.
" + }, + "savedownStorageConfiguration":{ + "shape":"KxSavedownStorageConfiguration", + "documentation":"The size and type of the temporary storage that is used to hold data during the savedown process. This parameter is required when you choose clusterType as RDB. All the data written to this storage space is lost when the cluster node is restarted.
The number of availability zones you want to assign per cluster. This can be one of the following:
SINGLE – Assigns one availability zone per cluster.
MULTI – Assigns all the availability zones per cluster.
The availability zone identifiers for the requested regions.
" + }, + "tags":{ + "shape":"TagMap", + "documentation":"A list of key-value pairs to label the cluster. You can add up to 50 tags to a cluster.
" + } + } + }, + "CreateKxClusterResponse":{ + "type":"structure", + "members":{ + "environmentId":{ + "shape":"KxEnvironmentId", + "documentation":"A unique identifier for the kdb environment.
" }, "status":{ - "shape":"EnvironmentStatus", - "documentation":"The current status of creation of the FinSpace environment.
" + "shape":"KxClusterStatus", + "documentation":"The status of cluster creation.
PENDING – The cluster is pending creation.
CREATING – The cluster creation process is in progress.
CREATE_FAILED – The cluster creation process has failed.
RUNNING – The cluster creation process is running.
UPDATING – The cluster is in the process of being updated.
DELETING – The cluster is in the process of being deleted.
DELETED – The cluster has been deleted.
DELETE_FAILED – The cluster failed to delete.
The sign-in url for the web application of your FinSpace environment.
" + "statusReason":{ + "shape":"KxClusterStatusReason", + "documentation":"The error message when a failed state occurs.
" }, - "description":{ - "shape":"Description", - "documentation":"The description of the FinSpace environment.
" + "clusterName":{ + "shape":"KxClusterName", + "documentation":"A unique name for the cluster.
" }, - "environmentArn":{ - "shape":"EnvironmentArn", - "documentation":"The Amazon Resource Name (ARN) of your FinSpace environment.
" + "clusterType":{ + "shape":"KxClusterType", + "documentation":"Specifies the type of KDB database that is being created. The following types are available:
HDB – A Historical Database. The data is only accessible with read-only permissions from one of the FinSpace managed kdb databases mounted to the cluster.
RDB – A Realtime Database. This type of database captures all the data from a ticker plant and stores it in memory until the end of day, after which it writes all of its data to a disk and reloads the HDB. This cluster type requires local storage for temporary storage of data during the savedown process. If you specify this field in your request, you must provide the savedownStorageConfiguration parameter.
GATEWAY – A gateway cluster allows you to access data across processes in kdb systems. It allows you to create your own routing logic using the initialization scripts and custom code. This type of cluster does not require writable local storage.
The url of the integrated FinSpace notebook environment in your web application.
" + "databases":{ + "shape":"KxDatabaseConfigurations", + "documentation":"A list of databases that will be available for querying.
" }, - "kmsKeyId":{ - "shape":"KmsKeyId", - "documentation":"The KMS key id used to encrypt in the FinSpace environment.
" + "cacheStorageConfigurations":{ + "shape":"KxCacheStorageConfigurations", + "documentation":"The configurations for a read only cache storage associated with a cluster. This cache will be stored as an FSx Lustre that reads from the S3 store.
" }, - "dedicatedServiceAccountId":{ - "shape":"IdType", - "documentation":"The AWS account ID of the dedicated service account associated with your FinSpace environment.
" + "autoScalingConfiguration":{ + "shape":"AutoScalingConfiguration", + "documentation":"The configuration based on which FinSpace will scale in or scale out nodes in your cluster.
" }, - "federationMode":{ - "shape":"FederationMode", - "documentation":"The authentication mode for the environment.
" + "clusterDescription":{ + "shape":"KxClusterDescription", + "documentation":"A description of the cluster.
" }, - "federationParameters":{ - "shape":"FederationParameters", - "documentation":"Configuration information when authentication mode is FEDERATED.
" + "capacityConfiguration":{ + "shape":"CapacityConfiguration", + "documentation":"A structure for the metadata of a cluster. It includes information like the CPUs needed, memory of instances, number of instances, and the port used while establishing a connection.
" + }, + "releaseLabel":{ + "shape":"ReleaseLabel", + "documentation":"A version of the FinSpace managed kdb to run.
" + }, + "vpcConfiguration":{ + "shape":"VpcConfiguration", + "documentation":"Configuration details about the network where the Privatelink endpoint of the cluster resides.
" + }, + "initializationScript":{ + "shape":"InitializationScriptFilePath", + "documentation":"Specifies a Q program that will be run at launch of a cluster. It is a relative path within .zip file that contains the custom code, which will be loaded on the cluster. It must include the file name itself. For example, somedir/init.q.
Defines the key-value pairs to make them available inside the cluster.
" + }, + "code":{ + "shape":"CodeConfiguration", + "documentation":"The details of the custom code that you want to use inside a cluster when analyzing a data. It consists of the S3 source bucket, location, S3 object version, and the relative path from where the custom code is loaded into the cluster.
" + }, + "executionRole":{ + "shape":"ExecutionRoleArn", + "documentation":"An IAM role that defines a set of permissions associated with a cluster. These permissions are assumed when a cluster attempts to access another cluster.
" + }, + "lastModifiedTimestamp":{ + "shape":"Timestamp", + "documentation":"The last time that the cluster was modified. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "savedownStorageConfiguration":{ + "shape":"KxSavedownStorageConfiguration", + "documentation":"The size and type of the temporary storage that is used to hold data during the savedown process. This parameter is required when you choose clusterType as RDB. All the data written to this storage space is lost when the cluster node is restarted.
The number of availability zones you want to assign per cluster. This can be one of the following:
SINGLE – Assigns one availability zone per cluster.
MULTI – Assigns all the availability zones per cluster.
The availability zone identifiers for the requested regions.
" + }, + "createdTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the cluster was created in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" } - }, - "documentation":"Represents an FinSpace environment.
" + } }, - "EnvironmentArn":{ - "type":"string", - "max":2048, - "min":20, + "CreateKxDatabaseRequest":{ + "type":"structure", + "required":[ + "environmentId", + "databaseName", + "clientToken" + ], + "members":{ + "environmentId":{ + "shape":"EnvironmentId", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database.
" + }, + "description":{ + "shape":"Description", + "documentation":"A description of the database.
" + }, + "tags":{ + "shape":"TagMap", + "documentation":"A list of key-value pairs to label the kdb database. You can add up to 50 tags to your kdb database
" + }, + "clientToken":{ + "shape":"ClientTokenString", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
", + "idempotencyToken":true + } + } + }, + "CreateKxDatabaseResponse":{ + "type":"structure", + "members":{ + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database.
" + }, + "databaseArn":{ + "shape":"DatabaseArn", + "documentation":"The ARN identifier of the database.
" + }, + "environmentId":{ + "shape":"EnvironmentId", + "documentation":"A unique identifier for the kdb environment.
" + }, + "description":{ + "shape":"Description", + "documentation":"A description of the database.
" + }, + "createdTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the database is created in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "lastModifiedTimestamp":{ + "shape":"Timestamp", + "documentation":"The last time that the database was updated in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + } + } + }, + "CreateKxEnvironmentRequest":{ + "type":"structure", + "required":[ + "name", + "kmsKeyId" + ], + "members":{ + "name":{ + "shape":"KxEnvironmentName", + "documentation":"The name of the kdb environment that you want to create.
" + }, + "description":{ + "shape":"Description", + "documentation":"A description for the kdb environment.
" + }, + "kmsKeyId":{ + "shape":"KmsKeyARN", + "documentation":"The KMS key ID to encrypt your data in the FinSpace environment.
" + }, + "tags":{ + "shape":"TagMap", + "documentation":"A list of key-value pairs to label the kdb environment. You can add up to 50 tags to your kdb environment.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
" + } + } + }, + "CreateKxEnvironmentResponse":{ + "type":"structure", + "members":{ + "name":{ + "shape":"KxEnvironmentName", + "documentation":"The name of the kdb environment.
" + }, + "status":{ + "shape":"EnvironmentStatus", + "documentation":"The status of the kdb environment.
" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
" + }, + "description":{ + "shape":"Description", + "documentation":"A description for the kdb environment.
" + }, + "environmentArn":{ + "shape":"EnvironmentArn", + "documentation":"The ARN identifier of the environment.
" + }, + "kmsKeyId":{ + "shape":"KmsKeyId", + "documentation":"The KMS key ID to encrypt your data in the FinSpace environment.
" + }, + "creationTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the kdb environment was created in FinSpace.
" + } + } + }, + "CreateKxUserRequest":{ + "type":"structure", + "required":[ + "environmentId", + "userName", + "iamRole" + ], + "members":{ + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment where you want to create a user.
", + "location":"uri", + "locationName":"environmentId" + }, + "userName":{ + "shape":"KxUserNameString", + "documentation":"A unique identifier for the user.
" + }, + "iamRole":{ + "shape":"RoleArn", + "documentation":"The IAM role ARN that will be associated with the user.
" + }, + "tags":{ + "shape":"TagMap", + "documentation":"A list of key-value pairs to label the user. You can add up to 50 tags to a user.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
" + } + } + }, + "CreateKxUserResponse":{ + "type":"structure", + "members":{ + "userName":{ + "shape":"KxUserNameString", + "documentation":"A unique identifier for the user.
" + }, + "userArn":{ + "shape":"KxUserArn", + "documentation":"The Amazon Resource Name (ARN) that identifies the user. For more information about ARNs and how to use ARNs in policies, see IAM Identifiers in the IAM User Guide.
" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
" + }, + "iamRole":{ + "shape":"RoleArn", + "documentation":"The IAM role ARN that will be associated with the user.
" + } + } + }, + "CustomDNSConfiguration":{ + "type":"list", + "member":{"shape":"CustomDNSServer"} + }, + "CustomDNSServer":{ + "type":"structure", + "required":[ + "customDNSServerName", + "customDNSServerIP" + ], + "members":{ + "customDNSServerName":{ + "shape":"ValidHostname", + "documentation":"The name of the DNS server.
" + }, + "customDNSServerIP":{ + "shape":"ValidIPAddress", + "documentation":"The IP address of the DNS server.
" + } + }, + "documentation":"A list of DNS server name and server IP. This is used to set up Route-53 outbound resolvers.
" + }, + "DataBundleArn":{ + "type":"string", + "documentation":"The Amazon Resource Name (ARN) of the data bundle.
", + "max":2048, + "min":20, + "pattern":"^arn:aws:finspace:[A-Za-z0-9_/.-]{0,63}:\\d*:data-bundle/[0-9A-Za-z_-]{1,128}$" + }, + "DataBundleArns":{ + "type":"list", + "member":{"shape":"DataBundleArn"} + }, + "DatabaseArn":{"type":"string"}, + "DatabaseName":{ + "type":"string", + "max":63, + "min":3, + "pattern":"^[a-zA-Z0-9][a-zA-Z0-9-_]*[a-zA-Z0-9]$" + }, + "DbPath":{ + "type":"string", + "max":1025, + "min":1, + "pattern":"^\\/([^\\/]+\\/){0,2}[^\\/]*$" + }, + "DbPaths":{ + "type":"list", + "member":{"shape":"DbPath"} + }, + "DeleteEnvironmentRequest":{ + "type":"structure", + "required":["environmentId"], + "members":{ + "environmentId":{ + "shape":"IdType", + "documentation":"The identifier for the FinSpace environment.
", + "location":"uri", + "locationName":"environmentId" + } + } + }, + "DeleteEnvironmentResponse":{ + "type":"structure", + "members":{ + } + }, + "DeleteKxClusterRequest":{ + "type":"structure", + "required":[ + "environmentId", + "clusterName" + ], + "members":{ + "environmentId":{ + "shape":"KxEnvironmentId", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "clusterName":{ + "shape":"KxClusterName", + "documentation":"The name of the cluster that you want to delete.
", + "location":"uri", + "locationName":"clusterName" + }, + "clientToken":{ + "shape":"ClientTokenString", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
", + "idempotencyToken":true, + "location":"querystring", + "locationName":"clientToken" + } + } + }, + "DeleteKxClusterResponse":{ + "type":"structure", + "members":{ + } + }, + "DeleteKxDatabaseRequest":{ + "type":"structure", + "required":[ + "environmentId", + "databaseName", + "clientToken" + ], + "members":{ + "environmentId":{ + "shape":"EnvironmentId", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database that you want to delete.
", + "location":"uri", + "locationName":"databaseName" + }, + "clientToken":{ + "shape":"ClientTokenString", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
", + "idempotencyToken":true, + "location":"querystring", + "locationName":"clientToken" + } + } + }, + "DeleteKxDatabaseResponse":{ + "type":"structure", + "members":{ + } + }, + "DeleteKxEnvironmentRequest":{ + "type":"structure", + "required":["environmentId"], + "members":{ + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + } + } + }, + "DeleteKxEnvironmentResponse":{ + "type":"structure", + "members":{ + } + }, + "DeleteKxUserRequest":{ + "type":"structure", + "required":[ + "userName", + "environmentId" + ], + "members":{ + "userName":{ + "shape":"KxUserNameString", + "documentation":"A unique identifier for the user that you want to delete.
", + "location":"uri", + "locationName":"userName" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + } + } + }, + "DeleteKxUserResponse":{ + "type":"structure", + "members":{ + } + }, + "Description":{ + "type":"string", + "max":1000, + "min":1, + "pattern":"^[a-zA-Z0-9. ]{1,1000}$" + }, + "EmailId":{ + "type":"string", + "max":128, + "min":1, + "pattern":"[A-Z0-9a-z._%+-]+@[A-Za-z0-9.-]+[.]+[A-Za-z]+", + "sensitive":true + }, + "Environment":{ + "type":"structure", + "members":{ + "name":{ + "shape":"EnvironmentName", + "documentation":"The name of the FinSpace environment.
" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"The identifier of the FinSpace environment.
" + }, + "awsAccountId":{ + "shape":"IdType", + "documentation":"The ID of the AWS account in which the FinSpace environment is created.
" + }, + "status":{ + "shape":"EnvironmentStatus", + "documentation":"The current status of creation of the FinSpace environment.
" + }, + "environmentUrl":{ + "shape":"url", + "documentation":"The sign-in URL for the web application of your FinSpace environment.
" + }, + "description":{ + "shape":"Description", + "documentation":"The description of the FinSpace environment.
" + }, + "environmentArn":{ + "shape":"EnvironmentArn", + "documentation":"The Amazon Resource Name (ARN) of your FinSpace environment.
" + }, + "sageMakerStudioDomainUrl":{ + "shape":"SmsDomainUrl", + "documentation":"The URL of the integrated FinSpace notebook environment in your web application.
" + }, + "kmsKeyId":{ + "shape":"KmsKeyId", + "documentation":"The KMS key id used to encrypt in the FinSpace environment.
" + }, + "dedicatedServiceAccountId":{ + "shape":"IdType", + "documentation":"The AWS account ID of the dedicated service account associated with your FinSpace environment.
" + }, + "federationMode":{ + "shape":"FederationMode", + "documentation":"The authentication mode for the environment.
" + }, + "federationParameters":{ + "shape":"FederationParameters", + "documentation":"Configuration information when authentication mode is FEDERATED.
" + } + }, + "documentation":"Represents an FinSpace environment.
" + }, + "EnvironmentArn":{ + "type":"string", + "max":2048, + "min":20, "pattern":"^arn:aws:finspace:[A-Za-z0-9_/.-]{0,63}:\\d+:environment/[0-9A-Za-z_-]{1,128}$" }, - "EnvironmentList":{ - "type":"list", - "member":{"shape":"Environment"} + "EnvironmentErrorMessage":{ + "type":"string", + "max":1000, + "min":0, + "pattern":"^[a-zA-Z0-9. ]{1,1000}$" + }, + "EnvironmentId":{ + "type":"string", + "max":32, + "min":1, + "pattern":".*\\S.*" + }, + "EnvironmentList":{ + "type":"list", + "member":{"shape":"Environment"} + }, + "EnvironmentName":{ + "type":"string", + "max":255, + "min":1, + "pattern":"^[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]$" + }, + "EnvironmentStatus":{ + "type":"string", + "enum":[ + "CREATE_REQUESTED", + "CREATING", + "CREATED", + "DELETE_REQUESTED", + "DELETING", + "DELETED", + "FAILED_CREATION", + "RETRY_DELETION", + "FAILED_DELETION", + "UPDATE_NETWORK_REQUESTED", + "UPDATING_NETWORK", + "FAILED_UPDATING_NETWORK", + "SUSPENDED" + ] + }, + "ErrorDetails":{ + "type":"string", + "enum":[ + "The inputs to this request are invalid.", + "Service limits have been exceeded.", + "Missing required permission to perform this request.", + "One or more inputs to this request were not found.", + "The system temporarily lacks sufficient resources to process the request.", + "An internal error has occurred.", + "Cancelled", + "A user recoverable error has occurred" + ] + }, + "ErrorInfo":{ + "type":"structure", + "members":{ + "errorMessage":{ + "shape":"ErrorMessage", + "documentation":"Specifies the error message that appears if a flow fails.
" + }, + "errorType":{ + "shape":"ErrorDetails", + "documentation":"Specifies the type of error.
" + } + }, + "documentation":"Provides details in the event of a failed flow, including the error type and the related error message.
" + }, + "ErrorMessage":{ + "type":"string", + "max":1000 + }, + "ExecutionRoleArn":{ + "type":"string", + "max":1024, + "min":1, + "pattern":"^arn:aws[a-z0-9-]*:iam::\\d{12}:role\\/[\\w-\\/.@+=,]{1,1017}$" + }, + "FederationAttributeKey":{ + "type":"string", + "max":32, + "min":1, + "pattern":".*" + }, + "FederationAttributeValue":{ + "type":"string", + "max":1000, + "min":1, + "pattern":".*" + }, + "FederationMode":{ + "type":"string", + "enum":[ + "FEDERATED", + "LOCAL" + ] + }, + "FederationParameters":{ + "type":"structure", + "members":{ + "samlMetadataDocument":{ + "shape":"SamlMetadataDocument", + "documentation":"SAML 2.0 Metadata document from identity provider (IdP).
" + }, + "samlMetadataURL":{ + "shape":"url", + "documentation":"Provide the metadata URL from your SAML 2.0 compliant identity provider (IdP).
" + }, + "applicationCallBackURL":{ + "shape":"url", + "documentation":"The redirect or sign-in URL that should be entered into the SAML 2.0 compliant identity provider configuration (IdP).
" + }, + "federationURN":{ + "shape":"urn", + "documentation":"The Uniform Resource Name (URN). Also referred as Service Provider URN or Audience URI or Service Provider Entity ID.
" + }, + "federationProviderName":{ + "shape":"FederationProviderName", + "documentation":"Name of the identity provider (IdP).
" + }, + "attributeMap":{ + "shape":"AttributeMap", + "documentation":"SAML attribute name and value. The name must always be Email and the value should be set to the attribute definition in which user email is set. For example, name would be Email and value http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress. Please check your SAML 2.0 compliant identity provider (IdP) documentation for details.
Configuration information when authentication mode is FEDERATED.
" + }, + "FederationProviderName":{ + "type":"string", + "max":32, + "min":1, + "pattern":"[^_\\p{Z}][\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}][^_\\p{Z}]+" + }, + "FinSpaceTaggableArn":{ + "type":"string", + "max":2048, + "min":20, + "pattern":"^arn:aws:finspace:[A-Za-z0-9_/.-]{0,63}:\\d+:(environment|kxEnvironment)/[0-9A-Za-z_-]{1,128}(/(kxDatabase|kxCluster|kxUser)/[a-zA-Z0-9_-]{1,255})?$" + }, + "GetEnvironmentRequest":{ + "type":"structure", + "required":["environmentId"], + "members":{ + "environmentId":{ + "shape":"IdType", + "documentation":"The identifier of the FinSpace environment.
", + "location":"uri", + "locationName":"environmentId" + } + } + }, + "GetEnvironmentResponse":{ + "type":"structure", + "members":{ + "environment":{ + "shape":"Environment", + "documentation":"The name of the FinSpace environment.
" + } + } + }, + "GetKxChangesetRequest":{ + "type":"structure", + "required":[ + "environmentId", + "databaseName", + "changesetId" + ], + "members":{ + "environmentId":{ + "shape":"EnvironmentId", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database.
", + "location":"uri", + "locationName":"databaseName" + }, + "changesetId":{ + "shape":"ChangesetId", + "documentation":"A unique identifier of the changeset for which you want to retrieve data.
", + "location":"uri", + "locationName":"changesetId" + } + } + }, + "GetKxChangesetResponse":{ + "type":"structure", + "members":{ + "changesetId":{ + "shape":"ChangesetId", + "documentation":"A unique identifier for the changeset.
" + }, + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database.
" + }, + "environmentId":{ + "shape":"EnvironmentId", + "documentation":"A unique identifier for the kdb environment.
" + }, + "changeRequests":{ + "shape":"ChangeRequests", + "documentation":"A list of change request objects that are run in order.
" + }, + "createdTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the changeset was created in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "activeFromTimestamp":{ + "shape":"Timestamp", + "documentation":"Beginning time from which the changeset is active. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "lastModifiedTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the changeset was updated in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "status":{ + "shape":"ChangesetStatus", + "documentation":"Status of the changeset creation process.
Pending – Changeset creation is pending.
Processing – Changeset creation is running.
Failed – Changeset creation has failed.
Complete – Changeset creation has succeeded.
Provides details in the event of a failed flow, including the error type and the related error message.
" + } + } + }, + "GetKxClusterRequest":{ + "type":"structure", + "required":[ + "environmentId", + "clusterName" + ], + "members":{ + "environmentId":{ + "shape":"KxEnvironmentId", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "clusterName":{ + "shape":"KxClusterName", + "documentation":"The name of the cluster that you want to retrieve.
", + "location":"uri", + "locationName":"clusterName" + } + } + }, + "GetKxClusterResponse":{ + "type":"structure", + "members":{ + "status":{ + "shape":"KxClusterStatus", + "documentation":"The status of cluster creation.
PENDING – The cluster is pending creation.
CREATING – The cluster creation process is in progress.
CREATE_FAILED – The cluster creation process has failed.
RUNNING – The cluster creation process is running.
UPDATING – The cluster is in the process of being updated.
DELETING – The cluster is in the process of being deleted.
DELETED – The cluster has been deleted.
DELETE_FAILED – The cluster failed to delete.
The error message when a failed state occurs.
" + }, + "clusterName":{ + "shape":"KxClusterName", + "documentation":"A unique name for the cluster.
" + }, + "clusterType":{ + "shape":"KxClusterType", + "documentation":"Specifies the type of KDB database that is being created. The following types are available:
HDB – A Historical Database. The data is only accessible with read-only permissions from one of the FinSpace managed kdb databases mounted to the cluster.
RDB – A Realtime Database. This type of database captures all the data from a ticker plant and stores it in memory until the end of day, after which it writes all of its data to a disk and reloads the HDB. This cluster type requires local storage for temporary storage of data during the savedown process. If you specify this field in your request, you must provide the savedownStorageConfiguration parameter.
GATEWAY – A gateway cluster allows you to access data across processes in kdb systems. It allows you to create your own routing logic using the initialization scripts and custom code. This type of cluster does not require writable local storage.
A list of databases mounted on the cluster.
" + }, + "cacheStorageConfigurations":{ + "shape":"KxCacheStorageConfigurations", + "documentation":"The configurations for a read only cache storage associated with a cluster. This cache will be stored as an FSx Lustre that reads from the S3 store.
" + }, + "autoScalingConfiguration":{ + "shape":"AutoScalingConfiguration", + "documentation":"The configuration based on which FinSpace will scale in or scale out nodes in your cluster.
" + }, + "clusterDescription":{ + "shape":"KxClusterDescription", + "documentation":"A description of the cluster.
" + }, + "capacityConfiguration":{ + "shape":"CapacityConfiguration", + "documentation":"A structure for the metadata of a cluster. It includes information like the CPUs needed, memory of instances, number of instances, and the port used while establishing a connection.
" + }, + "releaseLabel":{ + "shape":"ReleaseLabel", + "documentation":"The version of FinSpace managed kdb to run.
" + }, + "vpcConfiguration":{ + "shape":"VpcConfiguration", + "documentation":"Configuration details about the network where the Privatelink endpoint of the cluster resides.
" + }, + "initializationScript":{ + "shape":"InitializationScriptFilePath", + "documentation":"Specifies a Q program that will be run at launch of a cluster. It is a relative path within .zip file that contains the custom code, which will be loaded on the cluster. It must include the file name itself. For example, somedir/init.q.
Defines key-value pairs to make them available inside the cluster.
" + }, + "code":{ + "shape":"CodeConfiguration", + "documentation":"The details of the custom code that you want to use inside a cluster when analyzing a data. It consists of the S3 source bucket, location, S3 object version, and the relative path from where the custom code is loaded into the cluster.
" + }, + "executionRole":{ + "shape":"ExecutionRoleArn", + "documentation":"An IAM role that defines a set of permissions associated with a cluster. These permissions are assumed when a cluster attempts to access another cluster.
" + }, + "lastModifiedTimestamp":{ + "shape":"Timestamp", + "documentation":"The last time that the cluster was modified. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "savedownStorageConfiguration":{ + "shape":"KxSavedownStorageConfiguration", + "documentation":"The size and type of the temporary storage that is used to hold data during the savedown process. This parameter is required when you choose clusterType as RDB. All the data written to this storage space is lost when the cluster node is restarted.
The number of availability zones you want to assign per cluster. This can be one of the following:
SINGLE – Assigns one availability zone per cluster.
MULTI – Assigns all the availability zones per cluster.
The availability zone identifiers for the requested regions.
" + }, + "createdTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the cluster was created in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + } + } + }, + "GetKxConnectionStringRequest":{ + "type":"structure", + "required":[ + "userArn", + "environmentId", + "clusterName" + ], + "members":{ + "userArn":{ + "shape":"KxUserArn", + "documentation":"The Amazon Resource Name (ARN) that identifies the user. For more information about ARNs and how to use ARNs in policies, see IAM Identifiers in the IAM User Guide.
", + "location":"querystring", + "locationName":"userArn" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "clusterName":{ + "shape":"KxClusterName", + "documentation":"A name of the kdb cluster.
", + "location":"querystring", + "locationName":"clusterName" + } + } + }, + "GetKxConnectionStringResponse":{ + "type":"structure", + "members":{ + "signedConnectionString":{ + "shape":"SignedKxConnectionString", + "documentation":"The signed connection string that you can use to connect to clusters.
" + } + } + }, + "GetKxDatabaseRequest":{ + "type":"structure", + "required":[ + "environmentId", + "databaseName" + ], + "members":{ + "environmentId":{ + "shape":"EnvironmentId", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database.
", + "location":"uri", + "locationName":"databaseName" + } + } + }, + "GetKxDatabaseResponse":{ + "type":"structure", + "members":{ + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database for which the information is retrieved.
" + }, + "databaseArn":{ + "shape":"DatabaseArn", + "documentation":"The ARN identifier of the database.
" + }, + "environmentId":{ + "shape":"EnvironmentId", + "documentation":"A unique identifier for the kdb environment.
" + }, + "description":{ + "shape":"Description", + "documentation":"A description of the database.
" + }, + "createdTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the database is created in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "lastModifiedTimestamp":{ + "shape":"Timestamp", + "documentation":"The last time that the database was modified. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "lastCompletedChangesetId":{ + "shape":"ChangesetId", + "documentation":"A unique identifier for the changeset.
" + }, + "numBytes":{ + "shape":"numBytes", + "documentation":"The total number of bytes in the database.
" + }, + "numChangesets":{ + "shape":"numChangesets", + "documentation":"The total number of changesets in the database.
" + }, + "numFiles":{ + "shape":"numFiles", + "documentation":"The total number of files in the database.
" + } + } + }, + "GetKxEnvironmentRequest":{ + "type":"structure", + "required":["environmentId"], + "members":{ + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + } + } }, - "EnvironmentName":{ + "GetKxEnvironmentResponse":{ + "type":"structure", + "members":{ + "name":{ + "shape":"KxEnvironmentName", + "documentation":"The name of the kdb environment.
" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
" + }, + "awsAccountId":{ + "shape":"IdType", + "documentation":"The unique identifier of the AWS account that is used to create the kdb environment.
" + }, + "status":{ + "shape":"EnvironmentStatus", + "documentation":"The status of the kdb environment.
" + }, + "tgwStatus":{ + "shape":"tgwStatus", + "documentation":"The status of the network configuration.
" + }, + "dnsStatus":{ + "shape":"dnsStatus", + "documentation":"The status of DNS configuration.
" + }, + "errorMessage":{ + "shape":"EnvironmentErrorMessage", + "documentation":"Specifies the error message that appears if a flow fails.
" + }, + "description":{ + "shape":"Description", + "documentation":"A description for the kdb environment.
" + }, + "environmentArn":{ + "shape":"EnvironmentArn", + "documentation":"The ARN identifier of the environment.
" + }, + "kmsKeyId":{ + "shape":"KmsKeyId", + "documentation":"The KMS key ID to encrypt your data in the FinSpace environment.
" + }, + "dedicatedServiceAccountId":{ + "shape":"IdType", + "documentation":"A unique identifier for the AWS environment infrastructure account.
" + }, + "transitGatewayConfiguration":{"shape":"TransitGatewayConfiguration"}, + "customDNSConfiguration":{ + "shape":"CustomDNSConfiguration", + "documentation":"A list of DNS server name and server IP. This is used to set up Route-53 outbound resolvers.
" + }, + "creationTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the kdb environment was created in FinSpace.
" + }, + "updateTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the kdb environment was updated.
" + }, + "availabilityZoneIds":{ + "shape":"AvailabilityZoneIds", + "documentation":"The identifier of the availability zones where subnets for the environment are created.
" + }, + "certificateAuthorityArn":{ + "shape":"stringValueLength1to255", + "documentation":"The Amazon Resource Name (ARN) of the certificate authority of the kdb environment.
" + } + } + }, + "GetKxUserRequest":{ + "type":"structure", + "required":[ + "userName", + "environmentId" + ], + "members":{ + "userName":{ + "shape":"KxUserNameString", + "documentation":"A unique identifier for the user.
", + "location":"uri", + "locationName":"userName" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + } + } + }, + "GetKxUserResponse":{ + "type":"structure", + "members":{ + "userName":{ + "shape":"IdType", + "documentation":"A unique identifier for the user.
" + }, + "userArn":{ + "shape":"KxUserArn", + "documentation":"The Amazon Resource Name (ARN) that identifies the user. For more information about ARNs and how to use ARNs in policies, see IAM Identifiers in the IAM User Guide.
" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
" + }, + "iamRole":{ + "shape":"RoleArn", + "documentation":"The IAM role ARN that is associated with the user.
" + } + } + }, + "IPAddressType":{ + "type":"string", + "enum":["IP_V4"] + }, + "IdType":{ + "type":"string", + "max":26, + "min":1, + "pattern":"^[a-zA-Z0-9]{1,26}$" + }, + "InitializationScriptFilePath":{ "type":"string", "max":255, "min":1, - "pattern":"^[a-zA-Z0-9]+[a-zA-Z0-9-]*[a-zA-Z0-9]$" + "pattern":"^[a-zA-Z0-9\\_\\-\\.\\/\\\\]+$" }, - "EnvironmentStatus":{ + "InternalServerException":{ + "type":"structure", + "members":{ + "message":{"shape":"errorMessage"} + }, + "documentation":"The request processing has failed because of an unknown error, exception or failure.
", + "error":{"httpStatusCode":500}, + "exception":true + }, + "InvalidRequestException":{ + "type":"structure", + "members":{ + "message":{"shape":"errorMessage"} + }, + "documentation":"The request is invalid. Something is wrong with the input to the request.
", + "error":{"httpStatusCode":400}, + "exception":true + }, + "KmsKeyARN":{ + "type":"string", + "max":1000, + "min":1, + "pattern":"^arn:aws:kms:.*:\\d+:.*$" + }, + "KmsKeyId":{ + "type":"string", + "max":1000, + "min":1, + "pattern":"^[a-zA-Z-0-9-:\\/]*$" + }, + "KxAzMode":{ "type":"string", "enum":[ - "CREATE_REQUESTED", + "SINGLE", + "MULTI" + ] + }, + "KxCacheStorageConfiguration":{ + "type":"structure", + "required":[ + "type", + "size" + ], + "members":{ + "type":{ + "shape":"KxCacheStorageType", + "documentation":"The type of cache storage . The valid values are:
CACHE_1000 – This type provides at least 1000 MB/s disk access throughput.
The size of the cache in gigabytes.
" + } + }, + "documentation":"The configuration for read only disk cache associated with a cluster.
" + }, + "KxCacheStorageConfigurations":{ + "type":"list", + "member":{"shape":"KxCacheStorageConfiguration"} + }, + "KxCacheStorageSize":{ + "type":"integer", + "max":33600, + "min":1200 + }, + "KxCacheStorageType":{ + "type":"string", + "max":10, + "min":8 + }, + "KxChangesetListEntry":{ + "type":"structure", + "members":{ + "changesetId":{ + "shape":"ChangesetId", + "documentation":"A unique identifier for the changeset.
" + }, + "createdTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the changeset was created in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "activeFromTimestamp":{ + "shape":"Timestamp", + "documentation":"Beginning time from which the changeset is active. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "lastModifiedTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the changeset was modified. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "status":{ + "shape":"ChangesetStatus", + "documentation":"Status of the changeset.
Pending – Changeset creation is pending.
Processing – Changeset creation is running.
Failed – Changeset creation has failed.
Complete – Changeset creation has succeeded.
Details of changeset.
" + }, + "KxChangesets":{ + "type":"list", + "member":{"shape":"KxChangesetListEntry"} + }, + "KxCluster":{ + "type":"structure", + "members":{ + "status":{ + "shape":"KxClusterStatus", + "documentation":"The status of a cluster.
PENDING – The cluster is pending creation.
CREATING – The cluster creation process is in progress.
CREATE_FAILED – The cluster creation process has failed.
RUNNING – The cluster creation process is running.
UPDATING – The cluster is in the process of being updated.
DELETING – The cluster is in the process of being deleted.
DELETED – The cluster has been deleted.
DELETE_FAILED – The cluster failed to delete.
The error message when a failed state occurs.
" + }, + "clusterName":{ + "shape":"KxClusterName", + "documentation":"A unique name for the cluster.
" + }, + "clusterType":{ + "shape":"KxClusterType", + "documentation":"Specifies the type of KDB database that is being created. The following types are available:
HDB – A Historical Database. The data is only accessible with read-only permissions from one of the FinSpace managed kdb databases mounted to the cluster.
RDB – A Realtime Database. This type of database captures all the data from a ticker plant and stores it in memory until the end of day, after which it writes all of its data to a disk and reloads the HDB. This cluster type requires local storage for temporary storage of data during the savedown process. If you specify this field in your request, you must provide the savedownStorageConfiguration parameter.
GATEWAY – A gateway cluster allows you to access data across processes in kdb systems. It allows you to create your own routing logic using the initialization scripts and custom code. This type of cluster does not require writable local storage.
A description of the cluster.
" + }, + "releaseLabel":{ + "shape":"ReleaseLabel", + "documentation":"A version of the FinSpace managed kdb to run.
" + }, + "initializationScript":{ + "shape":"InitializationScriptFilePath", + "documentation":"Specifies a Q program that will be run at launch of a cluster. It is a relative path within .zip file that contains the custom code, which will be loaded on the cluster. It must include the file name itself. For example, somedir/init.q.
An IAM role that defines a set of permissions associated with a cluster. These permissions are assumed when a cluster attempts to access another cluster.
" + }, + "azMode":{ + "shape":"KxAzMode", + "documentation":"The number of availability zones assigned per cluster. This can be one of the following
SINGLE – Assigns one availability zone per cluster.
MULTI – Assigns all the availability zones per cluster.
The availability zone identifiers for the requested regions.
" + }, + "lastModifiedTimestamp":{ + "shape":"Timestamp", + "documentation":"The last time that the cluster was modified. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "createdTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the cluster was created in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + } + }, + "documentation":"The details of a kdb cluster.
" + }, + "KxClusterDescription":{ + "type":"string", + "max":1000, + "min":1, + "pattern":"^[a-zA-Z0-9\\_\\-\\.\\s]+$" + }, + "KxClusterName":{ + "type":"string", + "max":63, + "min":3, + "pattern":"^[a-zA-Z0-9][a-zA-Z0-9-_]*[a-zA-Z0-9]$" + }, + "KxClusterNodeIdString":{ + "type":"string", + "max":40, + "min":1 + }, + "KxClusterStatus":{ + "type":"string", + "enum":[ + "PENDING", "CREATING", - "CREATED", - "DELETE_REQUESTED", + "CREATE_FAILED", + "RUNNING", + "UPDATING", "DELETING", "DELETED", - "FAILED_CREATION", - "RETRY_DELETION", - "FAILED_DELETION", - "SUSPENDED" + "DELETE_FAILED" ] }, - "FederationAttributeKey":{ + "KxClusterStatusReason":{ "type":"string", - "max":32, + "max":250, "min":1, - "pattern":".*" + "pattern":"^[a-zA-Z0-9\\_\\-\\.\\s]+$" }, - "FederationMode":{ + "KxClusterType":{ "type":"string", "enum":[ - "FEDERATED", - "LOCAL" + "HDB", + "RDB", + "GATEWAY" ] }, - "FederationParameters":{ + "KxClusters":{ + "type":"list", + "member":{"shape":"KxCluster"} + }, + "KxCommandLineArgument":{ "type":"structure", "members":{ - "samlMetadataDocument":{ - "shape":"SamlMetadataDocument", - "documentation":"SAML 2.0 Metadata document from identity provider (IdP).
" + "key":{ + "shape":"KxCommandLineArgumentKey", + "documentation":"The name of the key.
" }, - "samlMetadataURL":{ - "shape":"url", - "documentation":"Provide the metadata URL from your SAML 2.0 compliant identity provider (IdP).
" + "value":{ + "shape":"KxCommandLineArgumentValue", + "documentation":"The value of the key.
" + } + }, + "documentation":"Defines the key-value pairs to make them available inside the cluster.
" + }, + "KxCommandLineArgumentKey":{ + "type":"string", + "max":50, + "min":1, + "pattern":"^(?![Aa][Ww][Ss])(s|([a-zA-Z][a-zA-Z0-9_]+))" + }, + "KxCommandLineArgumentValue":{ + "type":"string", + "max":50, + "min":1, + "pattern":"^[a-zA-Z0-9][a-zA-Z0-9_:.]*" + }, + "KxCommandLineArguments":{ + "type":"list", + "member":{"shape":"KxCommandLineArgument"} + }, + "KxDatabaseCacheConfiguration":{ + "type":"structure", + "required":[ + "cacheType", + "dbPaths" + ], + "members":{ + "cacheType":{ + "shape":"KxCacheStorageType", + "documentation":"The type of disk cache. This parameter is used to map the database path to cache storage. The valid values are:
CACHE_1000 – This type provides at least 1000 MB/s disk access throughput.
Specifies the portions of database that will be loaded into the cache for access.
" + } + }, + "documentation":"The structure of database cache configuration that is used for mapping database paths to cache types in clusters.
" + }, + "KxDatabaseCacheConfigurations":{ + "type":"list", + "member":{"shape":"KxDatabaseCacheConfiguration"} + }, + "KxDatabaseConfiguration":{ + "type":"structure", + "required":["databaseName"], + "members":{ + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database. When this parameter is specified in the structure, S3 with the whole database is included by default.
" + }, + "cacheConfigurations":{ + "shape":"KxDatabaseCacheConfigurations", + "documentation":"Configuration details for the disk cache used to increase performance reading from a kdb database mounted to the cluster.
" + }, + "changesetId":{ + "shape":"ChangesetId", + "documentation":"A unique identifier of the changeset that is associated with the cluster.
" + } + }, + "documentation":"The configuration of data that is available for querying from this database.
" + }, + "KxDatabaseConfigurations":{ + "type":"list", + "member":{"shape":"KxDatabaseConfiguration"} + }, + "KxDatabaseListEntry":{ + "type":"structure", + "members":{ + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database.
" + }, + "createdTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the database was created in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + }, + "lastModifiedTimestamp":{ + "shape":"Timestamp", + "documentation":"The last time that the database was modified. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + } + }, + "documentation":"Details about a FinSpace managed kdb database
" + }, + "KxDatabases":{ + "type":"list", + "member":{"shape":"KxDatabaseListEntry"} + }, + "KxEnvironment":{ + "type":"structure", + "members":{ + "name":{ + "shape":"KxEnvironmentName", + "documentation":"The name of the kdb environment.
" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
" + }, + "awsAccountId":{ + "shape":"IdType", + "documentation":"The unique identifier of the AWS account in which you create the kdb environment.
" + }, + "status":{ + "shape":"EnvironmentStatus", + "documentation":"The status of the environment creation.
CREATE_REQUESTED – Environment creation has been requested.
CREATING – Environment is in the process of being created.
FAILED_CREATION – Environment creation has failed.
CREATED – Environment is successfully created and is currently active.
DELETE_REQUESTED – Environment deletion has been requested.
DELETING – Environment is in the process of being deleted.
RETRY_DELETION – Initial environment deletion failed. The system is reattempting the delete.
DELETED – Environment has been deleted.
FAILED_DELETION – Environment deletion has failed.
The status of the network configuration.
" + }, + "dnsStatus":{ + "shape":"dnsStatus", + "documentation":"The status of DNS configuration.
" + }, + "errorMessage":{ + "shape":"EnvironmentErrorMessage", + "documentation":"Specifies the error message that appears if a flow fails.
" + }, + "description":{ + "shape":"Description", + "documentation":"A description of the kdb environment.
" + }, + "environmentArn":{ + "shape":"EnvironmentArn", + "documentation":"The Amazon Resource Name (ARN) of your kdb environment.
" + }, + "kmsKeyId":{ + "shape":"KmsKeyId", + "documentation":"The unique identifier of the KMS key.
" + }, + "dedicatedServiceAccountId":{ + "shape":"IdType", + "documentation":"A unique identifier for the AWS environment infrastructure account.
" + }, + "transitGatewayConfiguration":{ + "shape":"TransitGatewayConfiguration", + "documentation":"Specifies the transit gateway and network configuration to connect the kdb environment to an internal network.
" }, - "applicationCallBackURL":{ - "shape":"url", - "documentation":"The redirect or sign-in URL that should be entered into the SAML 2.0 compliant identity provider configuration (IdP).
" + "customDNSConfiguration":{ + "shape":"CustomDNSConfiguration", + "documentation":"A list of DNS server name and server IP. This is used to set up Route-53 outbound resolvers.
" }, - "federationURN":{ - "shape":"urn", - "documentation":"The Uniform Resource Name (URN). Also referred as Service Provider URN or Audience URI or Service Provider Entity ID.
" + "creationTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the kdb environment was created in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" }, - "federationProviderName":{ - "shape":"FederationProviderName", - "documentation":"Name of the identity provider (IdP).
" + "updateTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the kdb environment was modified in FinSpace. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" }, - "attributeMap":{ - "shape":"AttributeMap", - "documentation":"SAML attribute name and value. The name must always be Email and the value should be set to the attribute definition in which user email is set. For example, name would be Email and value http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress. Please check your SAML 2.0 compliant identity provider (IdP) documentation for details.
The identifier of the availability zones where subnets for the environment are created.
" + }, + "certificateAuthorityArn":{ + "shape":"stringValueLength1to255", + "documentation":"The Amazon Resource Name (ARN) of the certificate authority:
" } }, - "documentation":"Configuration information when authentication mode is FEDERATED.
" + "documentation":"The details of a kdb environment.
" }, - "FederationProviderName":{ + "KxEnvironmentId":{ "type":"string", "max":32, "min":1, - "pattern":"[^_\\p{Z}][\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}][^_\\p{Z}]+" + "pattern":"^[a-z0-9]+$" }, - "GetEnvironmentRequest":{ + "KxEnvironmentList":{ + "type":"list", + "member":{"shape":"KxEnvironment"} + }, + "KxEnvironmentName":{ + "type":"string", + "max":63, + "min":3, + "pattern":"^[a-zA-Z0-9][a-zA-Z0-9-_]*[a-zA-Z0-9]$" + }, + "KxNode":{ "type":"structure", - "required":["environmentId"], "members":{ - "environmentId":{ - "shape":"IdType", - "documentation":"The identifier of the FinSpace environment.
", - "location":"uri", - "locationName":"environmentId" + "nodeId":{ + "shape":"KxClusterNodeIdString", + "documentation":"A unique identifier for the node.
" + }, + "availabilityZoneId":{ + "shape":"AvailabilityZoneId", + "documentation":"The identifier of the availability zones where subnets for the environment are created.
" + }, + "launchTime":{ + "shape":"Timestamp", + "documentation":"The time when a particular node is started. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" } - } + }, + "documentation":"A structure that stores metadata for a kdb node.
" }, - "GetEnvironmentResponse":{ + "KxNodeSummaries":{ + "type":"list", + "member":{"shape":"KxNode"} + }, + "KxSavedownStorageConfiguration":{ "type":"structure", + "required":[ + "type", + "size" + ], "members":{ - "environment":{ - "shape":"Environment", - "documentation":"The name of the FinSpace environment.
" + "type":{ + "shape":"KxSavedownStorageType", + "documentation":"The type of writeable storage space for temporarily storing your savedown data. The valid values are:
SDS01 – This type represents 3000 IOPS and the io2 EBS volume type.
The size of temporary storage in bytes.
" } - } + }, + "documentation":"The size and type of temporary storage that is used to hold data during the savedown process. All the data written to this storage space is lost when the cluster node is restarted.
" }, - "IdType":{ + "KxSavedownStorageSize":{ + "type":"integer", + "max":16000, + "min":4 + }, + "KxSavedownStorageType":{ "type":"string", - "max":26, - "min":1, - "pattern":"^[a-zA-Z0-9]{1,26}$" + "enum":["SDS01"] }, - "InternalServerException":{ + "KxUser":{ "type":"structure", "members":{ - "message":{"shape":"errorMessage"} + "userArn":{ + "shape":"KxUserArn", + "documentation":"The Amazon Resource Name (ARN) that identifies the user. For more information about ARNs and how to use ARNs in policies, see IAM Identifiers in the IAM User Guide.
" + }, + "userName":{ + "shape":"KxUserNameString", + "documentation":"A unique identifier for the user.
" + }, + "iamRole":{ + "shape":"RoleArn", + "documentation":"The IAM role ARN that is associated with the user.
" + }, + "createTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the kdb user was created.
" + }, + "updateTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the kdb user was updated.
" + } }, - "documentation":"The request processing has failed because of an unknown error, exception or failure.
", - "error":{"httpStatusCode":500}, - "exception":true + "documentation":"A structure that stores metadata for a kdb user.
" }, - "InvalidRequestException":{ - "type":"structure", - "members":{ - "message":{"shape":"errorMessage"} - }, - "documentation":"The request is invalid. Something is wrong with the input to the request.
", - "error":{"httpStatusCode":400}, - "exception":true + "KxUserArn":{ + "type":"string", + "max":2048, + "min":20, + "pattern":"^arn:aws:finspace:[A-Za-z0-9_/.-]{0,63}:\\d+:kxEnvironment/[0-9A-Za-z_-]{1,128}/kxUser/[0-9A-Za-z_-]{1,128}$" }, - "KmsKeyId":{ + "KxUserList":{ + "type":"list", + "member":{"shape":"KxUser"} + }, + "KxUserNameString":{ "type":"string", - "max":1000, + "max":50, "min":1, - "pattern":"^[a-zA-Z-0-9-:\\/]*$" + "pattern":"^[0-9A-Za-z_-]{1,50}$" }, "LimitExceededException":{ "type":"structure", @@ -450,7 +2581,7 @@ "members":{ "nextToken":{ "shape":"PaginationToken", - "documentation":"A token generated by FinSpace that specifies where to continue pagination if a previous request was truncated. To get the next set of pages, pass in the nextToken value from the response object of the previous page call.
", + "documentation":"A token generated by FinSpace that specifies where to continue pagination if a previous request was truncated. To get the next set of pages, pass in the nextToken value from the response object of the previous page call.
A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database.
", + "location":"uri", + "locationName":"databaseName" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
", + "location":"querystring", + "locationName":"nextToken" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"The maximum number of results to return in this request.
", + "location":"querystring", + "locationName":"maxResults" + } + } + }, + "ListKxChangesetsResponse":{ + "type":"structure", + "members":{ + "kxChangesets":{ + "shape":"KxChangesets", + "documentation":"A list of changesets for a database.
" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
" + } + } + }, + "ListKxClusterNodesRequest":{ + "type":"structure", + "required":[ + "clusterName", + "environmentId" + ], + "members":{ + "environmentId":{ + "shape":"KxEnvironmentId", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "clusterName":{ + "shape":"KxClusterName", + "documentation":"A unique name for the cluster.
", + "location":"uri", + "locationName":"clusterName" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
", + "location":"querystring", + "locationName":"nextToken" + }, + "maxResults":{ + "shape":"ResultLimit", + "documentation":"The maximum number of results to return in this request.
", + "location":"querystring", + "locationName":"maxResults" + } + } + }, + "ListKxClusterNodesResponse":{ + "type":"structure", + "members":{ + "nodes":{ + "shape":"KxNodeSummaries", + "documentation":"A list of nodes associated with the cluster.
" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
" + } + } + }, + "ListKxClustersRequest":{ + "type":"structure", + "required":["environmentId"], + "members":{ + "environmentId":{ + "shape":"KxEnvironmentId", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "clusterType":{ + "shape":"KxClusterType", + "documentation":"Specifies the type of KDB database that is being created. The following types are available:
HDB – A Historical Database. The data is only accessible with read-only permissions from one of the FinSpace managed kdb databases mounted to the cluster.
RDB – A Realtime Database. This type of database captures all the data from a ticker plant and stores it in memory until the end of day, after which it writes all of its data to a disk and reloads the HDB. This cluster type requires local storage for temporary storage of data during the savedown process. If you specify this field in your request, you must provide the savedownStorageConfiguration parameter.
GATEWAY – A gateway cluster allows you to access data across processes in kdb systems. You can create your own routing logic using the initialization scripts and custom code. This type of cluster does not require writable local storage.
The maximum number of results to return in this request.
", + "location":"querystring", + "locationName":"maxResults" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
", + "location":"querystring", + "locationName":"nextToken" + } + } + }, + "ListKxClustersResponse":{ + "type":"structure", + "members":{ + "kxClusterSummaries":{ + "shape":"KxClusters", + "documentation":"Lists the cluster details.
" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
" + } + } + }, + "ListKxDatabasesRequest":{ + "type":"structure", + "required":["environmentId"], + "members":{ + "environmentId":{ + "shape":"EnvironmentId", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
", + "location":"querystring", + "locationName":"nextToken" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"The maximum number of results to return in this request.
", + "location":"querystring", + "locationName":"maxResults" + } + } + }, + "ListKxDatabasesResponse":{ + "type":"structure", + "members":{ + "kxDatabases":{ + "shape":"KxDatabases", + "documentation":"A list of databases in the kdb environment.
" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
" + } + } + }, + "ListKxEnvironmentsRequest":{ + "type":"structure", + "members":{ + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
", + "location":"querystring", + "locationName":"nextToken" + }, + "maxResults":{ + "shape":"BoxedInteger", + "documentation":"The maximum number of results to return in this request.
", + "location":"querystring", + "locationName":"maxResults" + } + } + }, + "ListKxEnvironmentsResponse":{ + "type":"structure", + "members":{ + "environments":{ + "shape":"KxEnvironmentList", + "documentation":"A list of environments in an account.
" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
" + } + } + }, + "ListKxUsersRequest":{ + "type":"structure", + "required":["environmentId"], + "members":{ + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
", + "location":"querystring", + "locationName":"nextToken" + }, + "maxResults":{ + "shape":"ResultLimit", + "documentation":"The maximum number of results to return in this request.
", + "location":"querystring", + "locationName":"maxResults" + } + } + }, + "ListKxUsersResponse":{ + "type":"structure", + "members":{ + "users":{ + "shape":"KxUserList", + "documentation":"A list of users in a kdb environment.
" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"A token that indicates where a results page should begin.
" + } + } + }, "ListTagsForResourceRequest":{ "type":"structure", "required":["resourceArn"], "members":{ "resourceArn":{ - "shape":"EnvironmentArn", + "shape":"FinSpaceTaggableArn", "documentation":"The Amazon Resource Name of the resource.
", "location":"uri", "locationName":"resourceArn" @@ -496,11 +2866,27 @@ } } }, + "MaxResults":{ + "type":"integer", + "max":100, + "min":0 + }, "NameString":{ "type":"string", "max":50, "min":1, - "pattern":"^[a-zA-Z0-9]{1,50}$" + "pattern":"^[a-zA-Z0-9]{1,50}$" + }, + "NodeCount":{ + "type":"integer", + "max":5, + "min":1 + }, + "NodeType":{ + "type":"string", + "max":32, + "min":1, + "pattern":"^[a-zA-Z0-9._]+" }, "PaginationToken":{ "type":"string", @@ -508,13 +2894,28 @@ "min":1, "pattern":".*" }, + "ReleaseLabel":{ + "type":"string", + "max":16, + "min":1, + "pattern":"^[a-zA-Z0-9._-]+" + }, + "ResourceAlreadyExistsException":{ + "type":"structure", + "members":{ + "message":{"shape":"errorMessage"} + }, + "documentation":"The specified resource group already exists.
", + "error":{"httpStatusCode":409}, + "exception":true + }, "ResourceNotFoundException":{ "type":"structure", "members":{ "message":{"shape":"errorMessage"} }, "documentation":"One or more resources can't be found.
", - "error":{"httpStatusCode":400}, + "error":{"httpStatusCode":404}, "exception":true }, "ResultLimit":{ @@ -522,12 +2923,51 @@ "max":100, "min":0 }, + "RoleArn":{ + "type":"string", + "max":2048, + "min":20, + "pattern":"^arn:aws[a-z\\-]*:iam::\\d{12}:role/?[a-zA-Z_0-9+=,.@\\-_/]+$" + }, + "S3Bucket":{ + "type":"string", + "max":255, + "min":3, + "pattern":"^[a-z0-9][a-z0-9\\.\\-]*[a-z0-9]$" + }, + "S3Key":{ + "type":"string", + "max":1024, + "min":1, + "pattern":"^[a-zA-Z0-9\\/\\!\\-_\\.\\*'\\(\\)]+$" + }, + "S3ObjectVersion":{ + "type":"string", + "max":1000, + "min":1 + }, + "S3Path":{ + "type":"string", + "max":1093, + "min":9, + "pattern":"^s3:\\/\\/[a-z0-9][a-z0-9-]{1,61}[a-z0-9]\\/([^\\/]+\\/)*[^\\/]*$" + }, "SamlMetadataDocument":{ "type":"string", "max":10000000, "min":1000, "pattern":".*" }, + "SecurityGroupIdList":{ + "type":"list", + "member":{"shape":"SecurityGroupIdString"} + }, + "SecurityGroupIdString":{ + "type":"string", + "max":1024, + "min":1, + "pattern":"^sg-([a-z0-9]{8}$|[a-z0-9]{17}$)" + }, "ServiceQuotaExceededException":{ "type":"structure", "members":{ @@ -537,12 +2977,29 @@ "error":{"httpStatusCode":402}, "exception":true }, + "SignedKxConnectionString":{ + "type":"string", + "max":2048, + "min":1, + "pattern":"^(:|:tcps:\\/\\/)[a-zA-Z0-9-\\.\\_]+:\\d+:[a-zA-Z0-9-\\.\\_]+:\\S+$", + "sensitive":true + }, "SmsDomainUrl":{ "type":"string", "max":1000, "min":1, "pattern":"^[a-zA-Z-0-9-:\\/.]*$" }, + "SubnetIdList":{ + "type":"list", + "member":{"shape":"SubnetIdString"} + }, + "SubnetIdString":{ + "type":"string", + "max":1024, + "min":1, + "pattern":"^subnet-([a-z0-9]{8}$|[a-z0-9]{17}$)" + }, "SuperuserParameters":{ "type":"structure", "required":[ @@ -593,7 +3050,7 @@ ], "members":{ "resourceArn":{ - "shape":"EnvironmentArn", + "shape":"FinSpaceTaggableArn", "documentation":"The Amazon Resource Name (ARN) for the resource.
", "location":"uri", "locationName":"resourceArn" @@ -623,6 +3080,30 @@ "error":{"httpStatusCode":429}, "exception":true }, + "Timestamp":{"type":"timestamp"}, + "TransitGatewayConfiguration":{ + "type":"structure", + "required":[ + "transitGatewayID", + "routableCIDRSpace" + ], + "members":{ + "transitGatewayID":{ + "shape":"TransitGatewayID", + "documentation":"The identifier of the transit gateway created by the customer to connect outbound traffic from the kdb network to your internal network.
" + }, + "routableCIDRSpace":{ + "shape":"ValidCIDRSpace", + "documentation":"The routing CIDR on behalf of the kdb environment. It can be any /26 range in the 100.64.0.0 CIDR space. After it is provided, it is added to the customer's transit gateway routing table so that traffic can be routed to the kdb network.
" + } + }, + "documentation":"The structure of the transit gateway and network configuration that is used to connect the kdb environment to an internal network.
" + }, + "TransitGatewayID":{ + "type":"string", + "max":32, + "min":1 + }, "UntagResourceRequest":{ "type":"structure", "required":[ @@ -631,7 +3112,7 @@ ], "members":{ "resourceArn":{ - "shape":"EnvironmentArn", + "shape":"FinSpaceTaggableArn", "documentation":"A FinSpace resource from which you want to remove a tag or tags. The value for this parameter is an Amazon Resource Name (ARN).
", "location":"uri", "locationName":"resourceArn" @@ -683,6 +3164,338 @@ } } }, + "UpdateKxClusterDatabasesRequest":{ + "type":"structure", + "required":[ + "environmentId", + "clusterName", + "databases" + ], + "members":{ + "environmentId":{ + "shape":"KxEnvironmentId", + "documentation":"The unique identifier of a kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "clusterName":{ + "shape":"KxClusterName", + "documentation":"A unique name for the cluster that you want to modify.
", + "location":"uri", + "locationName":"clusterName" + }, + "clientToken":{ + "shape":"ClientTokenString", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
" + }, + "databases":{ + "shape":"KxDatabaseConfigurations", + "documentation":"The structure of databases mounted on the cluster.
" + } + } + }, + "UpdateKxClusterDatabasesResponse":{ + "type":"structure", + "members":{ + } + }, + "UpdateKxDatabaseRequest":{ + "type":"structure", + "required":[ + "environmentId", + "databaseName", + "clientToken" + ], + "members":{ + "environmentId":{ + "shape":"EnvironmentId", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database.
", + "location":"uri", + "locationName":"databaseName" + }, + "description":{ + "shape":"Description", + "documentation":"A description of the database.
" + }, + "clientToken":{ + "shape":"ClientTokenString", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
", + "idempotencyToken":true + } + } + }, + "UpdateKxDatabaseResponse":{ + "type":"structure", + "members":{ + "databaseName":{ + "shape":"DatabaseName", + "documentation":"The name of the kdb database.
" + }, + "environmentId":{ + "shape":"EnvironmentId", + "documentation":"A unique identifier for the kdb environment.
" + }, + "description":{ + "shape":"Description", + "documentation":"A description of the database.
" + }, + "lastModifiedTimestamp":{ + "shape":"Timestamp", + "documentation":"The last time that the database was modified. The value is determined as epoch time in milliseconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000000.
" + } + } + }, + "UpdateKxEnvironmentNetworkRequest":{ + "type":"structure", + "required":["environmentId"], + "members":{ + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "transitGatewayConfiguration":{ + "shape":"TransitGatewayConfiguration", + "documentation":"Specifies the transit gateway and network configuration to connect the kdb environment to an internal network.
" + }, + "customDNSConfiguration":{ + "shape":"CustomDNSConfiguration", + "documentation":"A list of DNS server names and server IPs. This is used to set up Route 53 outbound resolvers.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
" + } + } + }, + "UpdateKxEnvironmentNetworkResponse":{ + "type":"structure", + "members":{ + "name":{ + "shape":"KxEnvironmentName", + "documentation":"The name of the kdb environment.
" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
" + }, + "awsAccountId":{ + "shape":"IdType", + "documentation":"The unique identifier of the AWS account that is used to create the kdb environment.
" + }, + "status":{ + "shape":"EnvironmentStatus", + "documentation":"The status of the kdb environment.
" + }, + "tgwStatus":{ + "shape":"tgwStatus", + "documentation":"The status of the network configuration.
" + }, + "dnsStatus":{ + "shape":"dnsStatus", + "documentation":"The status of DNS configuration.
" + }, + "errorMessage":{ + "shape":"EnvironmentErrorMessage", + "documentation":"Specifies the error message that appears if a flow fails.
" + }, + "description":{ + "shape":"Description", + "documentation":"The description of the environment.
" + }, + "environmentArn":{ + "shape":"EnvironmentArn", + "documentation":"The ARN identifier of the environment.
" + }, + "kmsKeyId":{ + "shape":"KmsKeyId", + "documentation":"The KMS key ID to encrypt your data in the FinSpace environment.
" + }, + "dedicatedServiceAccountId":{ + "shape":"IdType", + "documentation":"A unique identifier for the AWS environment infrastructure account.
" + }, + "transitGatewayConfiguration":{"shape":"TransitGatewayConfiguration"}, + "customDNSConfiguration":{ + "shape":"CustomDNSConfiguration", + "documentation":"A list of DNS server names and server IPs. This is used to set up Route 53 outbound resolvers.
" + }, + "creationTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the kdb environment was created in FinSpace.
" + }, + "updateTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the kdb environment was updated.
" + }, + "availabilityZoneIds":{ + "shape":"AvailabilityZoneIds", + "documentation":"The identifier of the availability zones where subnets for the environment are created.
" + } + } + }, + "UpdateKxEnvironmentRequest":{ + "type":"structure", + "required":["environmentId"], + "members":{ + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "name":{ + "shape":"KxEnvironmentName", + "documentation":"The name of the kdb environment.
" + }, + "description":{ + "shape":"Description", + "documentation":"A description of the kdb environment.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
" + } + } + }, + "UpdateKxEnvironmentResponse":{ + "type":"structure", + "members":{ + "name":{ + "shape":"KxEnvironmentName", + "documentation":"The name of the kdb environment.
" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
" + }, + "awsAccountId":{ + "shape":"IdType", + "documentation":"The unique identifier of the AWS account that is used to create the kdb environment.
" + }, + "status":{ + "shape":"EnvironmentStatus", + "documentation":"The status of the kdb environment.
" + }, + "tgwStatus":{ + "shape":"tgwStatus", + "documentation":"The status of the network configuration.
" + }, + "dnsStatus":{ + "shape":"dnsStatus", + "documentation":"The status of DNS configuration.
" + }, + "errorMessage":{ + "shape":"EnvironmentErrorMessage", + "documentation":"Specifies the error message that appears if a flow fails.
" + }, + "description":{ + "shape":"Description", + "documentation":"The description of the environment.
" + }, + "environmentArn":{ + "shape":"EnvironmentArn", + "documentation":"The ARN identifier of the environment.
" + }, + "kmsKeyId":{ + "shape":"KmsKeyId", + "documentation":"The KMS key ID to encrypt your data in the FinSpace environment.
" + }, + "dedicatedServiceAccountId":{ + "shape":"IdType", + "documentation":"A unique identifier for the AWS environment infrastructure account.
" + }, + "transitGatewayConfiguration":{"shape":"TransitGatewayConfiguration"}, + "customDNSConfiguration":{ + "shape":"CustomDNSConfiguration", + "documentation":"A list of DNS server names and server IPs. This is used to set up Route 53 outbound resolvers.
" + }, + "creationTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the kdb environment was created in FinSpace.
" + }, + "updateTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp at which the kdb environment was updated.
" + }, + "availabilityZoneIds":{ + "shape":"AvailabilityZoneIds", + "documentation":"The identifier of the availability zones where subnets for the environment are created.
" + } + } + }, + "UpdateKxUserRequest":{ + "type":"structure", + "required":[ + "environmentId", + "userName", + "iamRole" + ], + "members":{ + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
", + "location":"uri", + "locationName":"environmentId" + }, + "userName":{ + "shape":"KxUserNameString", + "documentation":"A unique identifier for the user.
", + "location":"uri", + "locationName":"userName" + }, + "iamRole":{ + "shape":"RoleArn", + "documentation":"The IAM role ARN that is associated with the user.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A token that ensures idempotency. This token expires in 10 minutes.
" + } + } + }, + "UpdateKxUserResponse":{ + "type":"structure", + "members":{ + "userName":{ + "shape":"KxUserNameString", + "documentation":"A unique identifier for the user.
" + }, + "userArn":{ + "shape":"KxUserArn", + "documentation":"The Amazon Resource Name (ARN) that identifies the user. For more information about ARNs and how to use ARNs in policies, see IAM Identifiers in the IAM User Guide.
" + }, + "environmentId":{ + "shape":"IdType", + "documentation":"A unique identifier for the kdb environment.
" + }, + "iamRole":{ + "shape":"RoleArn", + "documentation":"The IAM role ARN that is associated with the user.
" + } + } + }, + "ValidCIDRSpace":{ + "type":"string", + "pattern":"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\/26$" + }, + "ValidHostname":{ + "type":"string", + "max":255, + "min":3, + "pattern":"^([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9])(\\.([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9]))*$" + }, + "ValidIPAddress":{ + "type":"string", + "pattern":"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$" + }, "ValidationException":{ "type":"structure", "members":{ @@ -692,7 +3505,63 @@ "error":{"httpStatusCode":400}, "exception":true }, + "VpcConfiguration":{ + "type":"structure", + "members":{ + "vpcId":{ + "shape":"VpcIdString", + "documentation":"The identifier of the VPC endpoint.
" + }, + "securityGroupIds":{ + "shape":"SecurityGroupIdList", + "documentation":"The unique identifier of the VPC security group applied to the VPC endpoint ENI for the cluster.
" + }, + "subnetIds":{ + "shape":"SubnetIdList", + "documentation":"The identifier of the subnet that the Privatelink VPC endpoint uses to connect to the cluster.
" + }, + "ipAddressType":{ + "shape":"IPAddressType", + "documentation":"The IP address type for cluster network configuration parameters. The following type is available:
IP_V4 – IP address version 4
Configuration details about the network where the Privatelink endpoint of the cluster resides.
" + }, + "VpcIdString":{ + "type":"string", + "max":1024, + "min":1, + "pattern":"^vpc-([a-z0-9]{8}$|[a-z0-9]{17}$)" + }, + "dnsStatus":{ + "type":"string", + "enum":[ + "NONE", + "UPDATE_REQUESTED", + "UPDATING", + "FAILED_UPDATE", + "SUCCESSFULLY_UPDATED" + ] + }, "errorMessage":{"type":"string"}, + "numBytes":{"type":"long"}, + "numChangesets":{"type":"integer"}, + "numFiles":{"type":"integer"}, + "stringValueLength1to255":{ + "type":"string", + "max":255, + "min":1 + }, + "tgwStatus":{ + "type":"string", + "enum":[ + "NONE", + "UPDATE_REQUESTED", + "UPDATING", + "FAILED_UPDATE", + "SUCCESSFULLY_UPDATED" + ] + }, "url":{ "type":"string", "max":1000, From fe643a736ae343c86c5574773d8abf0e93f59748 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Mon, 5 Jun 2023 18:06:53 +0000 Subject: [PATCH 028/317] Amazon Keyspaces Update: This release adds support for MRR GA launch, and includes multiregion support in create-keyspace, get-keyspace, and list-keyspace. --- .../feature-AmazonKeyspaces-42b7b17.json | 6 + .../codegen-resources/endpoint-tests.json | 208 +++++++++++------- .../codegen-resources/service-2.json | 69 +++++- 3 files changed, 198 insertions(+), 85 deletions(-) create mode 100644 .changes/next-release/feature-AmazonKeyspaces-42b7b17.json diff --git a/.changes/next-release/feature-AmazonKeyspaces-42b7b17.json b/.changes/next-release/feature-AmazonKeyspaces-42b7b17.json new file mode 100644 index 000000000000..020ae30b9417 --- /dev/null +++ b/.changes/next-release/feature-AmazonKeyspaces-42b7b17.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Keyspaces", + "contributor": "", + "description": "This release adds support for MRR GA launch, and includes multiregion support in create-keyspace, get-keyspace, and list-keyspace." 
+} diff --git a/services/keyspaces/src/main/resources/codegen-resources/endpoint-tests.json b/services/keyspaces/src/main/resources/codegen-resources/endpoint-tests.json index cd6fb8403293..90a9df595633 100644 --- a/services/keyspaces/src/main/resources/codegen-resources/endpoint-tests.json +++ b/services/keyspaces/src/main/resources/codegen-resources/endpoint-tests.json @@ -8,9 +8,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "ap-east-1", "UseFIPS": false, - "Region": "ap-east-1" + "UseDualStack": false } }, { @@ -21,9 +21,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "ap-northeast-1", "UseFIPS": false, - "Region": "ap-northeast-1" + "UseDualStack": false } }, { @@ -34,9 +34,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "ap-northeast-2", "UseFIPS": false, - "Region": "ap-northeast-2" + "UseDualStack": false } }, { @@ -47,9 +47,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "ap-south-1", "UseFIPS": false, - "Region": "ap-south-1" + "UseDualStack": false } }, { @@ -60,9 +60,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "ap-southeast-1", "UseFIPS": false, - "Region": "ap-southeast-1" + "UseDualStack": false } }, { @@ -73,9 +73,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "ap-southeast-2", "UseFIPS": false, - "Region": "ap-southeast-2" + "UseDualStack": false } }, { @@ -86,9 +86,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "ca-central-1", "UseFIPS": false, - "Region": "ca-central-1" + "UseDualStack": false } }, { @@ -99,9 +99,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "eu-central-1", "UseFIPS": false, - "Region": "eu-central-1" + "UseDualStack": false } }, { @@ -112,9 +112,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "eu-north-1", "UseFIPS": false, - "Region": "eu-north-1" + "UseDualStack": false } }, { @@ -125,9 +125,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "eu-west-1", "UseFIPS": false, - "Region": "eu-west-1" + "UseDualStack": 
false } }, { @@ -138,9 +138,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "eu-west-2", "UseFIPS": false, - "Region": "eu-west-2" + "UseDualStack": false } }, { @@ -151,9 +151,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "eu-west-3", "UseFIPS": false, - "Region": "eu-west-3" + "UseDualStack": false } }, { @@ -164,9 +164,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "me-south-1", "UseFIPS": false, - "Region": "me-south-1" + "UseDualStack": false } }, { @@ -177,9 +177,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "sa-east-1", "UseFIPS": false, - "Region": "sa-east-1" + "UseDualStack": false } }, { @@ -190,9 +190,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-east-1", "UseFIPS": false, - "Region": "us-east-1" + "UseDualStack": false } }, { @@ -203,9 +203,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-east-1", "UseFIPS": true, - "Region": "us-east-1" + "UseDualStack": false } }, { @@ -216,9 +216,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-east-2", "UseFIPS": false, - "Region": "us-east-2" + "UseDualStack": false } }, { @@ -229,9 +229,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-west-1", "UseFIPS": false, - "Region": "us-west-1" + "UseDualStack": false } }, { @@ -242,9 +242,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-west-2", "UseFIPS": false, - "Region": "us-west-2" + "UseDualStack": false } }, { @@ -255,9 +255,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-west-2", "UseFIPS": true, - "Region": "us-west-2" + "UseDualStack": false } }, { @@ -268,9 +268,9 @@ } }, "params": { - "UseDualStack": true, + "Region": "us-east-1", "UseFIPS": true, - "Region": "us-east-1" + "UseDualStack": true } }, { @@ -281,9 +281,9 @@ } }, "params": { - "UseDualStack": true, + "Region": "us-east-1", "UseFIPS": false, - "Region": "us-east-1" + "UseDualStack": true } }, { @@ -294,9 +294,9 @@ } }, "params": { - "UseDualStack": true, + 
"Region": "cn-north-1", "UseFIPS": true, - "Region": "cn-north-1" + "UseDualStack": true } }, { @@ -307,9 +307,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "cn-north-1", "UseFIPS": true, - "Region": "cn-north-1" + "UseDualStack": false } }, { @@ -320,9 +320,9 @@ } }, "params": { - "UseDualStack": true, + "Region": "cn-north-1", "UseFIPS": false, - "Region": "cn-north-1" + "UseDualStack": true } }, { @@ -333,9 +333,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "cn-north-1", "UseFIPS": false, - "Region": "cn-north-1" + "UseDualStack": false } }, { @@ -346,9 +346,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-gov-east-1", "UseFIPS": false, - "Region": "us-gov-east-1" + "UseDualStack": false } }, { @@ -359,9 +359,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-gov-east-1", "UseFIPS": true, - "Region": "us-gov-east-1" + "UseDualStack": false } }, { @@ -372,9 +372,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-gov-west-1", "UseFIPS": false, - "Region": "us-gov-west-1" + "UseDualStack": false } }, { @@ -385,9 +385,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-gov-west-1", "UseFIPS": true, - "Region": "us-gov-west-1" + "UseDualStack": false } }, { @@ -398,9 +398,9 @@ } }, "params": { - "UseDualStack": true, + "Region": "us-gov-east-1", "UseFIPS": true, - "Region": "us-gov-east-1" + "UseDualStack": true } }, { @@ -411,9 +411,20 @@ } }, "params": { - "UseDualStack": true, + "Region": "us-gov-east-1", "UseFIPS": false, - "Region": "us-gov-east-1" + "UseDualStack": true + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": true, + "UseDualStack": true } }, { @@ -424,9 +435,20 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-iso-east-1", "UseFIPS": true, - "Region": 
"us-iso-east-1" + "UseDualStack": false + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "error": "DualStack is enabled but this partition does not support DualStack" + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": false, + "UseDualStack": true } }, { @@ -437,9 +459,20 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-iso-east-1", "UseFIPS": false, - "Region": "us-iso-east-1" + "UseDualStack": false + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": true, + "UseDualStack": true } }, { @@ -450,9 +483,20 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-isob-east-1", "UseFIPS": true, - "Region": "us-isob-east-1" + "UseDualStack": false + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "error": "DualStack is enabled but this partition does not support DualStack" + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": false, + "UseDualStack": true } }, { @@ -463,9 +507,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-isob-east-1", "UseFIPS": false, - "Region": "us-isob-east-1" + "UseDualStack": false } }, { @@ -476,9 +520,9 @@ } }, "params": { - "UseDualStack": false, - "UseFIPS": false, "Region": "us-east-1", + "UseFIPS": false, + "UseDualStack": false, "Endpoint": "https://example.com" } }, @@ -490,8 +534,8 @@ } }, "params": { - "UseDualStack": false, "UseFIPS": false, + "UseDualStack": false, "Endpoint": "https://example.com" } }, @@ -501,9 +545,9 @@ "error": "Invalid Configuration: FIPS and custom endpoint are not supported" }, "params": { - "UseDualStack": false, - "UseFIPS": true, "Region": "us-east-1", + "UseFIPS": true, + "UseDualStack": false, "Endpoint": 
"https://example.com" } }, @@ -513,11 +557,17 @@ "error": "Invalid Configuration: Dualstack and custom endpoint are not supported" }, "params": { - "UseDualStack": true, - "UseFIPS": false, "Region": "us-east-1", + "UseFIPS": false, + "UseDualStack": true, "Endpoint": "https://example.com" } + }, + { + "documentation": "Missing region", + "expect": { + "error": "Invalid Configuration: Missing Region" + } } ], "version": "1.0" diff --git a/services/keyspaces/src/main/resources/codegen-resources/service-2.json b/services/keyspaces/src/main/resources/codegen-resources/service-2.json index 9c4fc96d9c7f..1127adc7cc28 100644 --- a/services/keyspaces/src/main/resources/codegen-resources/service-2.json +++ b/services/keyspaces/src/main/resources/codegen-resources/service-2.json @@ -202,7 +202,7 @@ {"shape":"AccessDeniedException"}, {"shape":"ResourceNotFoundException"} ], - "documentation":"Associates a set of tags with a Amazon Keyspaces resource. You can then activate these user-defined tags so that they appear on the Cost Management Console for cost allocation tracking. For more information, see Adding tags and labels to Amazon Keyspaces resources in the Amazon Keyspaces Developer Guide.
For IAM policy examples that show how to control access to Amazon Keyspaces resources based on tags, see Amazon Keyspaces resource access based on tags in the Amazon Keyspaces Developer Guide.
" + "documentation":"Associates a set of tags with a Amazon Keyspaces resource. You can then activate these user-defined tags so that they appear on the Cost Management Console for cost allocation tracking. For more information, see Adding tags and labels to Amazon Keyspaces resources in the Amazon Keyspaces Developer Guide.
For IAM policy examples that show how to control access to Amazon Keyspaces resources based on tags, see Amazon Keyspaces resource access based on tags in the Amazon Keyspaces Developer Guide.
" }, "UntagResource":{ "name":"UntagResource", @@ -393,6 +393,10 @@ "tags":{ "shape":"TagList", "documentation":"A list of key-value pair tags to be attached to the keyspace.
For more information, see Adding tags and labels to Amazon Keyspaces resources in the Amazon Keyspaces Developer Guide.
" + }, + "replicationSpecification":{ + "shape":"ReplicationSpecification", + "documentation":"The replication specification of the keyspace includes:
replicationStrategy - the required value is SINGLE_REGION or MULTI_REGION.
regionList - if the replicationStrategy is MULTI_REGION, the regionList requires the current Region and at least one additional Amazon Web Services Region where the keyspace is going to be replicated. The maximum number of supported replication Regions including the current Region is six.
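The constraints just described (a strategy of SINGLE_REGION or MULTI_REGION, and for MULTI_REGION a regionList of two to six Regions that includes the current Region) can be sketched as a client-side pre-check; the function name and error texts are illustrative, not part of the service:

```python
def validate_replication_specification(spec, current_region):
    """Illustrative client-side check of the replicationSpecification
    constraints documented above; not an Amazon Keyspaces API."""
    strategy = spec.get("replicationStrategy")
    if strategy not in ("SINGLE_REGION", "MULTI_REGION"):
        raise ValueError("replicationStrategy must be SINGLE_REGION or MULTI_REGION")
    if strategy == "MULTI_REGION":
        regions = spec.get("regionList", [])
        if current_region not in regions:
            raise ValueError("regionList must include the current Region")
        # RegionList shape: min 2, max 6 (current Region plus 1-5 replicas).
        if not 2 <= len(regions) <= 6:
            raise ValueError("regionList must contain 2 to 6 Regions, including the current one")
    return True
```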
The ARN of the keyspace.
" + "documentation":"Returns the ARN of the keyspace.
" + }, + "replicationStrategy":{ + "shape":"rs", + "documentation":" Returns the replication strategy of the keyspace. The options are SINGLE_REGION or MULTI_REGION.
If the replicationStrategy of the keyspace is MULTI_REGION, a list of replication Regions is returned.
The unique identifier of the keyspace in the format of an Amazon Resource Name (ARN).
" + }, + "replicationStrategy":{ + "shape":"rs", + "documentation":" This property specifies if a keyspace is a single Region keyspace or a multi-Region keyspace. The available values are SINGLE_REGION or MULTI_REGION.
If the replicationStrategy of the keyspace is MULTI_REGION, a list of replication Regions is returned.
Represents the properties of a keyspace.
" @@ -828,6 +850,27 @@ }, "documentation":"The point-in-time recovery status of the specified table.
" }, + "RegionList":{ + "type":"list", + "member":{"shape":"region"}, + "max":6, + "min":2 + }, + "ReplicationSpecification":{ + "type":"structure", + "required":["replicationStrategy"], + "members":{ + "replicationStrategy":{ + "shape":"rs", + "documentation":" The replicationStrategy of a keyspace, the required value is SINGLE_REGION or MULTI_REGION.
The regionList can contain up to six Amazon Web Services Regions where the keyspace is replicated.
The replication specification of the keyspace includes:
regionList - up to six Amazon Web Services Regions where the keyspace is replicated.
replicationStrategy - the required value is SINGLE_REGION or MULTI_REGION.
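Putting the CreateKeyspace shapes above together, a multi-Region request body would look like the following sketch; the keyspace name, tag, and Region values are placeholders:

```python
# Illustrative CreateKeyspace request body assembled from the shapes in this
# service model (keyspaceName, tags, replicationSpecification). All values
# are placeholders.
create_keyspace_request = {
    "keyspaceName": "my_keyspace",
    "tags": [{"key": "team", "value": "storage"}],
    "replicationSpecification": {
        "replicationStrategy": "MULTI_REGION",
        # Must include the current Region; up to six Regions in total.
        "regionList": ["us-east-1", "eu-west-1"],
    },
}
```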
Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra-compatible database service. Amazon Keyspaces makes it easy to migrate, run, and scale Cassandra workloads in the Amazon Web Services Cloud. With just a few clicks on the Amazon Web Services Management Console or a few lines of code, you can create keyspaces and tables in Amazon Keyspaces, without deploying any infrastructure or installing software.
In addition to supporting Cassandra Query Language (CQL) requests via open-source Cassandra drivers, Amazon Keyspaces supports data definition language (DDL) operations to manage keyspaces and tables using the Amazon Web Services SDK and CLI, as well as infrastructure as code (IaC) services and tools such as CloudFormation and Terraform. This API reference describes the supported DDL operations in detail.
For the list of all supported CQL APIs, see Supported Cassandra APIs, operations, and data types in Amazon Keyspaces in the Amazon Keyspaces Developer Guide.
To learn how Amazon Keyspaces API actions are recorded with CloudTrail, see Amazon Keyspaces information in CloudTrail in the Amazon Keyspaces Developer Guide.
For more information about Amazon Web Services APIs, for example how to implement retry logic or how to sign Amazon Web Services API requests, see Amazon Web Services APIs in the General Reference.
" From 06dcd256325e06edad6b94e044a3be23a63fe603 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Mon, 5 Jun 2023 18:06:53 +0000 Subject: [PATCH 029/317] AWS Key Management Service Update: This release includes feature to import customer's asymmetric (RSA and ECC) and HMAC keys into KMS. It also includes feature to allow customers to specify number of days to schedule a KMS key deletion as a policy condition key. --- ...ature-AWSKeyManagementService-6198159.json | 6 ++++ .../codegen-resources/service-2.json | 36 +++++++++++-------- 2 files changed, 27 insertions(+), 15 deletions(-) create mode 100644 .changes/next-release/feature-AWSKeyManagementService-6198159.json diff --git a/.changes/next-release/feature-AWSKeyManagementService-6198159.json b/.changes/next-release/feature-AWSKeyManagementService-6198159.json new file mode 100644 index 000000000000..ade147301a7a --- /dev/null +++ b/.changes/next-release/feature-AWSKeyManagementService-6198159.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS Key Management Service", + "contributor": "", + "description": "This release includes feature to import customer's asymmetric (RSA and ECC) and HMAC keys into KMS. It also includes feature to allow customers to specify number of days to schedule a KMS key deletion as a policy condition key." +} diff --git a/services/kms/src/main/resources/codegen-resources/service-2.json b/services/kms/src/main/resources/codegen-resources/service-2.json index b0831901380a..c096e0a6e7c2 100644 --- a/services/kms/src/main/resources/codegen-resources/service-2.json +++ b/services/kms/src/main/resources/codegen-resources/service-2.json @@ -137,7 +137,7 @@ {"shape":"XksKeyAlreadyInUseException"}, {"shape":"XksKeyNotFoundException"} ], - "documentation":"Creates a unique customer managed KMS key in your Amazon Web Services account and Region. You can use a KMS key in cryptographic operations, such as encryption and signing. 
Some Amazon Web Services services let you use KMS keys that you create and manage to protect your service resources.
A KMS key is a logical representation of a cryptographic key. In addition to the key material used in cryptographic operations, a KMS key includes metadata, such as the key ID, key policy, creation date, description, and key state. For details, see Managing keys in the Key Management Service Developer Guide
Use the parameters of CreateKey to specify the type of KMS key, the source of its key material, its key policy, description, tags, and other properties.
KMS has replaced the term customer master key (CMK) with KMS key and KMS key. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term.
To create different types of KMS keys, use the following guidance:
By default, CreateKey creates a symmetric encryption KMS key with key material that KMS generates. This is the basic and most widely used type of KMS key, and provides the best performance.
To create a symmetric encryption KMS key, you don't need to specify any parameters. The default value for KeySpec, SYMMETRIC_DEFAULT, the default value for KeyUsage, ENCRYPT_DECRYPT, and the default value for Origin, AWS_KMS, create a symmetric encryption KMS key with KMS key material.
If you need a key for basic encryption and decryption or you are creating a KMS key to protect your resources in an Amazon Web Services service, create a symmetric encryption KMS key. The key material in a symmetric encryption key never leaves KMS unencrypted. You can use a symmetric encryption KMS key to encrypt and decrypt data up to 4,096 bytes, but they are typically used to generate data keys and data keys pairs. For details, see GenerateDataKey and GenerateDataKeyPair.
To create an asymmetric KMS key, use the KeySpec parameter to specify the type of key material in the KMS key. Then, use the KeyUsage parameter to determine whether the KMS key will be used to encrypt and decrypt or sign and verify. You can't change these properties after the KMS key is created.
Asymmetric KMS keys contain an RSA key pair, Elliptic Curve (ECC) key pair, or an SM2 key pair (China Regions only). The private key in an asymmetric KMS key never leaves KMS unencrypted. However, you can use the GetPublicKey operation to download the public key so it can be used outside of KMS. KMS keys with RSA or SM2 key pairs can be used to encrypt or decrypt data or sign and verify messages (but not both). KMS keys with ECC key pairs can be used only to sign and verify messages. For information about asymmetric KMS keys, see Asymmetric KMS keys in the Key Management Service Developer Guide.
To create an HMAC KMS key, set the KeySpec parameter to a key spec value for HMAC KMS keys. Then set the KeyUsage parameter to GENERATE_VERIFY_MAC. You must set the key usage even though GENERATE_VERIFY_MAC is the only valid key usage value for HMAC KMS keys. You can't change these properties after the KMS key is created.
HMAC KMS keys are symmetric keys that never leave KMS unencrypted. You can use HMAC keys to generate (GenerateMac) and verify (VerifyMac) HMAC codes for messages up to 4096 bytes.
HMAC KMS keys are not supported in all Amazon Web Services Regions. If you try to create an HMAC KMS key in an Amazon Web Services Region in which HMAC keys are not supported, the CreateKey operation returns an UnsupportedOperationException. For a list of Regions in which HMAC KMS keys are supported, see HMAC keys in KMS in the Key Management Service Developer Guide.
To create a multi-Region primary key in the local Amazon Web Services Region, use the MultiRegion parameter with a value of True. To create a multi-Region replica key, that is, a KMS key with the same key ID and key material as a primary key, but in a different Amazon Web Services Region, use the ReplicateKey operation. To change a replica key to a primary key, and its primary key to a replica key, use the UpdatePrimaryRegion operation.
You can create multi-Region KMS keys for all supported KMS key types: symmetric encryption KMS keys, HMAC KMS keys, asymmetric encryption KMS keys, and asymmetric signing KMS keys. You can also create multi-Region keys with imported key material. However, you can't create multi-Region keys in a custom key store.
This operation supports multi-Region keys, an KMS feature that lets you create multiple interoperable KMS keys in different Amazon Web Services Regions. Because these KMS keys have the same key ID, key material, and other metadata, you can use them interchangeably to encrypt data in one Amazon Web Services Region and decrypt it in a different Amazon Web Services Region without re-encrypting the data or making a cross-Region call. For more information about multi-Region keys, see Multi-Region keys in KMS in the Key Management Service Developer Guide.
To import your own key material into a KMS key, begin by creating a symmetric encryption KMS key with no key material. To do this, use the Origin parameter of CreateKey with a value of EXTERNAL. Next, use GetParametersForImport operation to get a public key and import token, and use the public key to encrypt your key material. Then, use ImportKeyMaterial with your import token to import the key material. For step-by-step instructions, see Importing Key Material in the Key Management Service Developer Guide .
This feature supports only symmetric encryption KMS keys, including multi-Region symmetric encryption KMS keys. You cannot import key material into any other type of KMS key.
To create a multi-Region primary key with imported key material, use the Origin parameter of CreateKey with a value of EXTERNAL and the MultiRegion parameter with a value of True. To create replicas of the multi-Region primary key, use the ReplicateKey operation. For instructions, see Importing key material into multi-Region keys. For more information about multi-Region keys, see Multi-Region keys in KMS in the Key Management Service Developer Guide.
A custom key store lets you protect your Amazon Web Services resources using keys in a backing key store that you own and manage. When you request a cryptographic operation with a KMS key in a custom key store, the operation is performed in the backing key store using its cryptographic keys.
KMS supports CloudHSM key stores backed by an CloudHSM cluster and external key stores backed by an external key manager outside of Amazon Web Services. When you create a KMS key in an CloudHSM key store, KMS generates an encryption key in the CloudHSM cluster and associates it with the KMS key. When you create a KMS key in an external key store, you specify an existing encryption key in the external key manager.
Some external key managers provide a simpler method for creating a KMS key in an external key store. For details, see your external key manager documentation.
Before you create a KMS key in a custom key store, the ConnectionState of the key store must be CONNECTED. To connect the custom key store, use the ConnectCustomKeyStore operation. To find the ConnectionState, use the DescribeCustomKeyStores operation.
To create a KMS key in a custom key store, use the CustomKeyStoreId. Use the default KeySpec value, SYMMETRIC_DEFAULT, and the default KeyUsage value, ENCRYPT_DECRYPT to create a symmetric encryption key. No other key type is supported in a custom key store.
To create a KMS key in an CloudHSM key store, use the Origin parameter with a value of AWS_CLOUDHSM. The CloudHSM cluster that is associated with the custom key store must have at least two active HSMs in different Availability Zones in the Amazon Web Services Region.
To create a KMS key in an external key store, use the Origin parameter with a value of EXTERNAL_KEY_STORE and an XksKeyId parameter that identifies an existing external key.
Some external key managers provide a simpler method for creating a KMS key in an external key store. For details, see your external key manager documentation.
Cross-account use: No. You cannot use this operation to create a KMS key in a different Amazon Web Services account.
Required permissions: kms:CreateKey (IAM policy). To use the Tags parameter, kms:TagResource (IAM policy). For examples and information about related permissions, see Allow a user to create KMS keys in the Key Management Service Developer Guide.
Related operations:
" + "documentation":"Creates a unique customer managed KMS key in your Amazon Web Services account and Region. You can use a KMS key in cryptographic operations, such as encryption and signing. Some Amazon Web Services services let you use KMS keys that you create and manage to protect your service resources.
A KMS key is a logical representation of a cryptographic key. In addition to the key material used in cryptographic operations, a KMS key includes metadata, such as the key ID, key policy, creation date, description, and key state. For details, see Managing keys in the Key Management Service Developer Guide
Use the parameters of CreateKey to specify the type of KMS key, the source of its key material, its key policy, description, tags, and other properties.
KMS has replaced the term customer master key (CMK) with KMS key and KMS key. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term.
To create different types of KMS keys, use the following guidance:
By default, CreateKey creates a symmetric encryption KMS key with key material that KMS generates. This is the basic and most widely used type of KMS key, and provides the best performance.
To create a symmetric encryption KMS key, you don't need to specify any parameters. The default value for KeySpec, SYMMETRIC_DEFAULT, the default value for KeyUsage, ENCRYPT_DECRYPT, and the default value for Origin, AWS_KMS, create a symmetric encryption KMS key with KMS key material.
If you need a key for basic encryption and decryption or you are creating a KMS key to protect your resources in an Amazon Web Services service, create a symmetric encryption KMS key. The key material in a symmetric encryption key never leaves KMS unencrypted. You can use a symmetric encryption KMS key to encrypt and decrypt data up to 4,096 bytes, but such keys are typically used to generate data keys and data key pairs. For details, see GenerateDataKey and GenerateDataKeyPair.
To create an asymmetric KMS key, use the KeySpec parameter to specify the type of key material in the KMS key. Then, use the KeyUsage parameter to determine whether the KMS key will be used to encrypt and decrypt or sign and verify. You can't change these properties after the KMS key is created.
Asymmetric KMS keys contain an RSA key pair, Elliptic Curve (ECC) key pair, or an SM2 key pair (China Regions only). The private key in an asymmetric KMS key never leaves KMS unencrypted. However, you can use the GetPublicKey operation to download the public key so it can be used outside of KMS. KMS keys with RSA or SM2 key pairs can be used to encrypt or decrypt data or sign and verify messages (but not both). KMS keys with ECC key pairs can be used only to sign and verify messages. For information about asymmetric KMS keys, see Asymmetric KMS keys in the Key Management Service Developer Guide.
To create an HMAC KMS key, set the KeySpec parameter to a key spec value for HMAC KMS keys. Then set the KeyUsage parameter to GENERATE_VERIFY_MAC. You must set the key usage even though GENERATE_VERIFY_MAC is the only valid key usage value for HMAC KMS keys. You can't change these properties after the KMS key is created.
HMAC KMS keys are symmetric keys that never leave KMS unencrypted. You can use HMAC keys to generate (GenerateMac) and verify (VerifyMac) HMAC codes for messages up to 4096 bytes.
To create a multi-Region primary key in the local Amazon Web Services Region, use the MultiRegion parameter with a value of True. To create a multi-Region replica key, that is, a KMS key with the same key ID and key material as a primary key, but in a different Amazon Web Services Region, use the ReplicateKey operation. To change a replica key to a primary key, and its primary key to a replica key, use the UpdatePrimaryRegion operation.
You can create multi-Region KMS keys for all supported KMS key types: symmetric encryption KMS keys, HMAC KMS keys, asymmetric encryption KMS keys, and asymmetric signing KMS keys. You can also create multi-Region keys with imported key material. However, you can't create multi-Region keys in a custom key store.
This operation supports multi-Region keys, a KMS feature that lets you create multiple interoperable KMS keys in different Amazon Web Services Regions. Because these KMS keys have the same key ID, key material, and other metadata, you can use them interchangeably to encrypt data in one Amazon Web Services Region and decrypt it in a different Amazon Web Services Region without re-encrypting the data or making a cross-Region call. For more information about multi-Region keys, see Multi-Region keys in KMS in the Key Management Service Developer Guide.
To import your own key material into a KMS key, begin by creating a KMS key with no key material. To do this, use the Origin parameter of CreateKey with a value of EXTERNAL. Next, use the GetParametersForImport operation to get a public key and import token. Use the wrapping public key to encrypt your key material. Then, use ImportKeyMaterial with your import token to import the key material. For step-by-step instructions, see Importing Key Material in the Key Management Service Developer Guide.
You can import key material into KMS keys of all supported KMS key types: symmetric encryption KMS keys, HMAC KMS keys, asymmetric encryption KMS keys, and asymmetric signing KMS keys. You can also create multi-Region keys with imported key material. However, you can't import key material into a KMS key in a custom key store.
To create a multi-Region primary key with imported key material, use the Origin parameter of CreateKey with a value of EXTERNAL and the MultiRegion parameter with a value of True. To create replicas of the multi-Region primary key, use the ReplicateKey operation. For instructions, see Importing key material into multi-Region keys. For more information about multi-Region keys, see Multi-Region keys in KMS in the Key Management Service Developer Guide.
A custom key store lets you protect your Amazon Web Services resources using keys in a backing key store that you own and manage. When you request a cryptographic operation with a KMS key in a custom key store, the operation is performed in the backing key store using its cryptographic keys.
KMS supports CloudHSM key stores backed by a CloudHSM cluster and external key stores backed by an external key manager outside of Amazon Web Services. When you create a KMS key in a CloudHSM key store, KMS generates an encryption key in the CloudHSM cluster and associates it with the KMS key. When you create a KMS key in an external key store, you specify an existing encryption key in the external key manager.
Some external key managers provide a simpler method for creating a KMS key in an external key store. For details, see your external key manager documentation.
Before you create a KMS key in a custom key store, the ConnectionState of the key store must be CONNECTED. To connect the custom key store, use the ConnectCustomKeyStore operation. To find the ConnectionState, use the DescribeCustomKeyStores operation.
To create a KMS key in a custom key store, use the CustomKeyStoreId. Use the default KeySpec value, SYMMETRIC_DEFAULT, and the default KeyUsage value, ENCRYPT_DECRYPT, to create a symmetric encryption key. No other key type is supported in a custom key store.
To create a KMS key in a CloudHSM key store, use the Origin parameter with a value of AWS_CLOUDHSM. The CloudHSM cluster that is associated with the custom key store must have at least two active HSMs in different Availability Zones in the Amazon Web Services Region.
To create a KMS key in an external key store, use the Origin parameter with a value of EXTERNAL_KEY_STORE and an XksKeyId parameter that identifies an existing external key.
Some external key managers provide a simpler method for creating a KMS key in an external key store. For details, see your external key manager documentation.
Cross-account use: No. You cannot use this operation to create a KMS key in a different Amazon Web Services account.
Required permissions: kms:CreateKey (IAM policy). To use the Tags parameter, kms:TagResource (IAM policy). For examples and information about related permissions, see Allow a user to create KMS keys in the Key Management Service Developer Guide.
Related operations:
" }, "Decrypt":{ "name":"Decrypt", @@ -207,7 +207,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"Deletes key material that you previously imported. This operation makes the specified KMS key unusable. For more information about importing key material into KMS, see Importing Key Material in the Key Management Service Developer Guide.
When the specified KMS key is in the PendingDeletion state, this operation does not change the KMS key's state. Otherwise, it changes the KMS key's state to PendingImport.
After you delete key material, you can use ImportKeyMaterial to reimport the same key material into the KMS key.
The KMS key that you use for this operation must be in a compatible key state. For details, see Key states of KMS keys in the Key Management Service Developer Guide.
Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
Required permissions: kms:DeleteImportedKeyMaterial (key policy)
Related operations:
" + "documentation":"Deletes key material that was previously imported. This operation makes the specified KMS key temporarily unusable. To restore the usability of the KMS key, reimport the same key material. For more information about importing key material into KMS, see Importing Key Material in the Key Management Service Developer Guide.
When the specified KMS key is in the PendingDeletion state, this operation does not change the KMS key's state. Otherwise, it changes the KMS key's state to PendingImport.
The KMS key that you use for this operation must be in a compatible key state. For details, see Key states of KMS keys in the Key Management Service Developer Guide.
Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
Required permissions: kms:DeleteImportedKeyMaterial (key policy)
Related operations:
" }, "DescribeCustomKeyStores":{ "name":"DescribeCustomKeyStores", @@ -513,7 +513,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"Returns the items you need to import key material into a symmetric encryption KMS key. For more information about importing key material into KMS, see Importing key material in the Key Management Service Developer Guide.
This operation returns a public key and an import token. Use the public key to encrypt the symmetric key material. Store the import token to send with a subsequent ImportKeyMaterial request.
You must specify the key ID of the symmetric encryption KMS key into which you will import key material. The KMS key Origin must be EXTERNAL. You must also specify the wrapping algorithm and type of wrapping key (public key) that you will use to encrypt the key material. You cannot perform this operation on an asymmetric KMS key, an HMAC KMS key, or on any KMS key in a different Amazon Web Services account.
To import key material, you must use the public key and import token from the same response. These items are valid for 24 hours. The expiration date and time appear in the GetParametersForImport response. You cannot use an expired token in an ImportKeyMaterial request. If your key and token expire, send another GetParametersForImport request.
The KMS key that you use for this operation must be in a compatible key state. For details, see Key states of KMS keys in the Key Management Service Developer Guide.
Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
Required permissions: kms:GetParametersForImport (key policy)
Related operations:
" + "documentation":"Returns the public key and an import token you need to import or reimport key material for a KMS key.
By default, KMS keys are created with key material that KMS generates. This operation supports Importing key material, an advanced feature that lets you generate and import the cryptographic key material for a KMS key. For more information about importing key material into KMS, see Importing key material in the Key Management Service Developer Guide.
Before calling GetParametersForImport, use the CreateKey operation with an Origin value of EXTERNAL to create a KMS key with no key material. You can import key material for a symmetric encryption KMS key, HMAC KMS key, asymmetric encryption KMS key, or asymmetric signing KMS key. You can also import key material into a multi-Region key of any supported type. However, you can't import key material into a KMS key in a custom key store. You can also use GetParametersForImport to get a public key and import token to reimport the original key material into a KMS key whose key material expired or was deleted.
GetParametersForImport returns the items that you need to import your key material.
The public key (or \"wrapping key\") of an RSA key pair that KMS generates.
You will use this public key to encrypt (\"wrap\") your key material while it's in transit to KMS.
An import token that ensures that KMS can decrypt your key material and associate it with the correct KMS key.
The public key and its import token are permanently linked and must be used together. Each public key and import token set is valid for 24 hours. The expiration date and time appear in the ParametersValidTo field in the GetParametersForImport response. You cannot use an expired public key or import token in an ImportKeyMaterial request. If your key and token expire, send another GetParametersForImport request.
GetParametersForImport requires the following information:
The key ID of the KMS key for which you are importing the key material.
The key spec of the public key (\"wrapping key\") that you will use to encrypt your key material during import.
The wrapping algorithm that you will use with the public key to encrypt your key material.
You can use the same or a different public key spec and wrapping algorithm each time you import or reimport the same key material.
The KMS key that you use for this operation must be in a compatible key state. For details, see Key states of KMS keys in the Key Management Service Developer Guide.
Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
Required permissions: kms:GetParametersForImport (key policy)
Related operations:
" }, "GetPublicKey":{ "name":"GetPublicKey", @@ -557,7 +557,7 @@ {"shape":"ExpiredImportTokenException"}, {"shape":"InvalidImportTokenException"} ], - "documentation":"Imports key material into an existing symmetric encryption KMS key that was created without key material. After you successfully import key material into a KMS key, you can reimport the same key material into that KMS key, but you cannot import different key material.
You cannot perform this operation on an asymmetric KMS key, an HMAC KMS key, or on any KMS key in a different Amazon Web Services account. For more information about creating KMS keys with no key material and then importing key material, see Importing Key Material in the Key Management Service Developer Guide.
Before using this operation, call GetParametersForImport. Its response includes a public key and an import token. Use the public key to encrypt the key material. Then, submit the import token from the same GetParametersForImport response.
When calling this operation, you must specify the following values:
The key ID or key ARN of a KMS key with no key material. Its Origin must be EXTERNAL.
To create a KMS key with no key material, call CreateKey and set the value of its Origin parameter to EXTERNAL. To get the Origin of a KMS key, call DescribeKey.)
The encrypted key material. To get the public key to encrypt the key material, call GetParametersForImport.
The import token that GetParametersForImport returned. You must use a public key and token from the same GetParametersForImport response.
Whether the key material expires (ExpirationModel) and, if so, when (ValidTo). If you set an expiration date, on the specified date, KMS deletes the key material from the KMS key, making the KMS key unusable. To use the KMS key in cryptographic operations again, you must reimport the same key material. The only way to change the expiration model or expiration date is by reimporting the same key material and specifying a new expiration date.
When this operation is successful, the key state of the KMS key changes from PendingImport to Enabled, and you can use the KMS key.
If this operation fails, use the exception to help determine the problem. If the error is related to the key material, the import token, or wrapping key, use GetParametersForImport to get a new public key and import token for the KMS key and repeat the import procedure. For help, see How To Import Key Material in the Key Management Service Developer Guide.
The KMS key that you use for this operation must be in a compatible key state. For details, see Key states of KMS keys in the Key Management Service Developer Guide.
Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
Required permissions: kms:ImportKeyMaterial (key policy)
Related operations:
" + "documentation":"Imports or reimports key material into an existing KMS key that was created without key material. ImportKeyMaterial also sets the expiration model and expiration date of the imported key material.
By default, KMS keys are created with key material that KMS generates. This operation supports Importing key material, an advanced feature that lets you generate and import the cryptographic key material for a KMS key. For more information about importing key material into KMS, see Importing key material in the Key Management Service Developer Guide.
After you successfully import key material into a KMS key, you can reimport the same key material into that KMS key, but you cannot import different key material. You might reimport key material to replace key material that expired or key material that you deleted. You might also reimport key material to change the expiration model or expiration date of the key material. Before reimporting key material, if necessary, call DeleteImportedKeyMaterial to delete the current imported key material.
Each time you import key material into KMS, you can determine whether (ExpirationModel) and when (ValidTo) the key material expires. To change the expiration of your key material, you must import it again, either by calling ImportKeyMaterial or using the import features of the KMS console.
Before calling ImportKeyMaterial:
Create or identify a KMS key with no key material. The KMS key must have an Origin value of EXTERNAL, which indicates that the KMS key is designed for imported key material.
To create a new KMS key for imported key material, call the CreateKey operation with an Origin value of EXTERNAL. You can create a symmetric encryption KMS key, HMAC KMS key, asymmetric encryption KMS key, or asymmetric signing KMS key. You can also import key material into a multi-Region key of any supported type. However, you can't import key material into a KMS key in a custom key store.
Use the DescribeKey operation to verify that the KeyState of the KMS key is PendingImport, which indicates that the KMS key has no key material.
If you are reimporting the same key material into an existing KMS key, you might need to call the DeleteImportedKeyMaterial operation to delete its existing key material.
Call the GetParametersForImport operation to get a public key and import token set for importing key material.
Use the public key in the GetParametersForImport response to encrypt your key material.
Then, in an ImportKeyMaterial request, you submit your encrypted key material and import token. When calling this operation, you must specify the following values:
The key ID or key ARN of the KMS key to associate with the imported key material. Its Origin must be EXTERNAL and its KeyState must be PendingImport. You cannot perform this operation on a KMS key in a custom key store, or on a KMS key in a different Amazon Web Services account. To get the Origin and KeyState of a KMS key, call DescribeKey.
The encrypted key material.
The import token that GetParametersForImport returned. You must use a public key and token from the same GetParametersForImport response.
Whether the key material expires (ExpirationModel) and, if so, when (ValidTo). For help with this choice, see Setting an expiration time in the Key Management Service Developer Guide.
If you set an expiration date, KMS deletes the key material from the KMS key on the specified date, making the KMS key unusable. To use the KMS key in cryptographic operations again, you must reimport the same key material. However, you can delete and reimport the key material at any time, including before the key material expires. Each time you reimport, you can eliminate or reset the expiration time.
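The required request values described above can be sketched as a client-side pre-flight check. This is an illustrative helper, not part of any AWS SDK; the field names (Origin, KeyState, ExpirationModel, ValidTo, EncryptedKeyMaterial, ImportToken) mirror the API shapes in this model, but the function itself is hypothetical:

```python
# Hypothetical pre-flight check for an ImportKeyMaterial request.
# It encodes only the rules stated in the documentation above.

def validate_import_request(key_metadata: dict, request: dict) -> list:
    """Return a list of problems; an empty list means the request looks valid."""
    problems = []

    # The target KMS key must be designed for imported key material...
    if key_metadata.get("Origin") != "EXTERNAL":
        problems.append("Origin must be EXTERNAL")
    # ...and must not hold key material yet.
    if key_metadata.get("KeyState") != "PendingImport":
        problems.append("KeyState must be PendingImport")

    # The encrypted key material and the import token must both be present
    # (and must come from the same GetParametersForImport response).
    for required in ("EncryptedKeyMaterial", "ImportToken"):
        if not request.get(required):
            problems.append(f"{required} is required")

    # ExpirationModel / ValidTo pairing rules from the documentation:
    model = request.get("ExpirationModel", "KEY_MATERIAL_EXPIRES")
    if model == "KEY_MATERIAL_EXPIRES" and "ValidTo" not in request:
        problems.append("ValidTo is required when key material expires")
    if model == "KEY_MATERIAL_DOES_NOT_EXPIRE" and "ValidTo" in request:
        problems.append("ValidTo must be omitted when key material does not expire")

    return problems
```

For example, a key whose state is still Enabled (that is, one that already holds key material) fails the check, as does a request that declares KEY_MATERIAL_EXPIRES without a ValidTo date.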
When this operation is successful, the key state of the KMS key changes from PendingImport to Enabled, and you can use the KMS key in cryptographic operations.
If this operation fails, use the exception to help determine the problem. If the error is related to the key material, the import token, or wrapping key, use GetParametersForImport to get a new public key and import token for the KMS key and repeat the import procedure. For help, see How To Import Key Material in the Key Management Service Developer Guide.
The KMS key that you use for this operation must be in a compatible key state. For details, see Key states of KMS keys in the Key Management Service Developer Guide.
Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
Required permissions: kms:ImportKeyMaterial (key policy)
Related operations:
" }, "ListAliases":{ "name":"ListAliases", @@ -773,7 +773,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"Schedules the deletion of a KMS key. By default, KMS applies a waiting period of 30 days, but you can specify a waiting period of 7-30 days. When this operation is successful, the key state of the KMS key changes to PendingDeletion and the key can't be used in any cryptographic operations. It remains in this state for the duration of the waiting period. Before the waiting period ends, you can use CancelKeyDeletion to cancel the deletion of the KMS key. After the waiting period ends, KMS deletes the KMS key, its key material, and all KMS data associated with it, including all aliases that refer to it.
Deleting a KMS key is a destructive and potentially dangerous operation. When a KMS key is deleted, all data that was encrypted under the KMS key is unrecoverable. (The only exception is a multi-Region replica key.) To prevent the use of a KMS key without deleting it, use DisableKey.
You can schedule the deletion of a multi-Region primary key and its replica keys at any time. However, KMS will not delete a multi-Region primary key with existing replica keys. If you schedule the deletion of a primary key with replicas, its key state changes to PendingReplicaDeletion and it cannot be replicated or used in cryptographic operations. This status can continue indefinitely. When the last of its replicas keys is deleted (not just scheduled), the key state of the primary key changes to PendingDeletion and its waiting period (PendingWindowInDays) begins. For details, see Deleting multi-Region keys in the Key Management Service Developer Guide.
When KMS deletes a KMS key from an CloudHSM key store, it makes a best effort to delete the associated key material from the associated CloudHSM cluster. However, you might need to manually delete the orphaned key material from the cluster and its backups. Deleting a KMS key from an external key store has no effect on the associated external key. However, for both types of custom key stores, deleting a KMS key is destructive and irreversible. You cannot decrypt ciphertext encrypted under the KMS key by using only its associated external key or CloudHSM key. Also, you cannot recreate a KMS key in an external key store by creating a new KMS key with the same key material.
For more information about scheduling a KMS key for deletion, see Deleting KMS keys in the Key Management Service Developer Guide.
The KMS key that you use for this operation must be in a compatible key state. For details, see Key states of KMS keys in the Key Management Service Developer Guide.
Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
Required permissions: kms:ScheduleKeyDeletion (key policy)
Related operations
" + "documentation":"Schedules the deletion of a KMS key. By default, KMS applies a waiting period of 30 days, but you can specify a waiting period of 7-30 days. When this operation is successful, the key state of the KMS key changes to PendingDeletion and the key can't be used in any cryptographic operations. It remains in this state for the duration of the waiting period. Before the waiting period ends, you can use CancelKeyDeletion to cancel the deletion of the KMS key. After the waiting period ends, KMS deletes the KMS key, its key material, and all KMS data associated with it, including all aliases that refer to it.
Deleting a KMS key is a destructive and potentially dangerous operation. When a KMS key is deleted, all data that was encrypted under the KMS key is unrecoverable. (The only exception is a multi-Region replica key, or an asymmetric or HMAC KMS key with imported key material.) To prevent the use of a KMS key without deleting it, use DisableKey.
You can schedule the deletion of a multi-Region primary key and its replica keys at any time. However, KMS will not delete a multi-Region primary key with existing replica keys. If you schedule the deletion of a primary key with replicas, its key state changes to PendingReplicaDeletion and it cannot be replicated or used in cryptographic operations. This status can continue indefinitely. When the last of its replica keys is deleted (not just scheduled), the key state of the primary key changes to PendingDeletion and its waiting period (PendingWindowInDays) begins. For details, see Deleting multi-Region keys in the Key Management Service Developer Guide.
When KMS deletes a KMS key from a CloudHSM key store, it makes a best effort to delete the associated key material from the associated CloudHSM cluster. However, you might need to manually delete the orphaned key material from the cluster and its backups. Deleting a KMS key from an external key store has no effect on the associated external key. However, for both types of custom key stores, deleting a KMS key is destructive and irreversible. You cannot decrypt ciphertext encrypted under the KMS key by using only its associated external key or CloudHSM key. Also, you cannot recreate a KMS key in an external key store by creating a new KMS key with the same key material.
For more information about scheduling a KMS key for deletion, see Deleting KMS keys in the Key Management Service Developer Guide.
The KMS key that you use for this operation must be in a compatible key state. For details, see Key states of KMS keys in the Key Management Service Developer Guide.
Cross-account use: No. You cannot perform this operation on a KMS key in a different Amazon Web Services account.
Required permissions: kms:ScheduleKeyDeletion (key policy)
Related operations
" }, "Sign":{ "name":"Sign", @@ -955,7 +955,9 @@ "enum":[ "RSAES_PKCS1_V1_5", "RSAES_OAEP_SHA_1", - "RSAES_OAEP_SHA_256" + "RSAES_OAEP_SHA_256", + "RSA_AES_KEY_WRAP_SHA_1", + "RSA_AES_KEY_WRAP_SHA_256" ] }, "AliasList":{ @@ -2078,15 +2080,15 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"The identifier of the symmetric encryption KMS key into which you will import key material. The Origin of the KMS key must be EXTERNAL.
Specify the key ID or key ARN of the KMS key.
For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
" + "documentation":"The identifier of the KMS key that will be associated with the imported key material. The Origin of the KMS key must be EXTERNAL.
All KMS key types are supported, including multi-Region keys. However, you cannot import key material into a KMS key in a custom key store.
Specify the key ID or key ARN of the KMS key.
For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
" }, "WrappingAlgorithm":{ "shape":"AlgorithmSpec", - "documentation":"The algorithm you will use to encrypt the key material before using the ImportKeyMaterial operation to import it. For more information, see Encrypt the key material in the Key Management Service Developer Guide.
The RSAES_PKCS1_V1_5 wrapping algorithm is deprecated. We recommend that you begin using a different wrapping algorithm immediately. KMS will end support for RSAES_PKCS1_V1_5 by October 1, 2023 pursuant to cryptographic key management guidance from the National Institute of Standards and Technology (NIST).
The algorithm you will use with the RSA public key (PublicKey) in the response to protect your key material during import. For more information, see Select a wrapping algorithm in the Key Management Service Developer Guide.
For RSA_AES wrapping algorithms, you encrypt your key material with an AES key that you generate, then encrypt your AES key with the RSA public key from KMS. For RSAES wrapping algorithms, you encrypt your key material directly with the RSA public key from KMS.
The wrapping algorithms that you can use depend on the type of key material that you are importing. To import an RSA private key, you must use an RSA_AES wrapping algorithm.
RSA_AES_KEY_WRAP_SHA_256 — Supported for wrapping RSA and ECC key material.
RSA_AES_KEY_WRAP_SHA_1 — Supported for wrapping RSA and ECC key material.
RSAES_OAEP_SHA_256 — Supported for all types of key material, except RSA key material (private key).
You cannot use the RSAES_OAEP_SHA_256 wrapping algorithm with the RSA_2048 wrapping key spec to wrap ECC_NIST_P521 key material.
RSAES_OAEP_SHA_1 — Supported for all types of key material, except RSA key material (private key).
You cannot use the RSAES_OAEP_SHA_1 wrapping algorithm with the RSA_2048 wrapping key spec to wrap ECC_NIST_P521 key material.
RSAES_PKCS1_V1_5 (Deprecated) — Supported only for symmetric encryption key material (and only in legacy mode).
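The compatibility rules in the list above can be expressed as a small lookup. This sketch encodes only the rules stated here; the key-material labels (SYMMETRIC, RSA_PRIVATE, ECC_PRIVATE) are hypothetical shorthand, not API enum values:

```python
# Sketch of the documented wrapping-algorithm compatibility rules.
# Material kinds are shorthand for the key material being imported:
#   "SYMMETRIC"   - symmetric encryption or HMAC key material
#   "RSA_PRIVATE" - an RSA private key
#   "ECC_PRIVATE" - an ECC private key

RSA_AES = {"RSA_AES_KEY_WRAP_SHA_256", "RSA_AES_KEY_WRAP_SHA_1"}
RSAES_OAEP = {"RSAES_OAEP_SHA_256", "RSAES_OAEP_SHA_1"}

def is_supported_wrapping(algorithm: str, material: str) -> bool:
    if algorithm in RSA_AES:
        # Per the list above: supported for wrapping RSA and ECC key material.
        return material in {"RSA_PRIVATE", "ECC_PRIVATE"}
    if algorithm in RSAES_OAEP:
        # Supported for all types of key material except RSA private keys.
        return material != "RSA_PRIVATE"
    if algorithm == "RSAES_PKCS1_V1_5":
        # Deprecated; symmetric encryption key material only.
        return material == "SYMMETRIC"
    return False
```

The check makes the headline rule concrete: to import an RSA private key you must pick one of the RSA_AES algorithms, because both RSAES_OAEP variants reject RSA key material.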
The type of wrapping key (public key) to return in the response. Only 2048-bit RSA public keys are supported.
" + "documentation":"The type of RSA public key to return in the response. You will use this wrapping key with the specified wrapping algorithm to protect your key material during import.
Use the longest RSA wrapping key that is practical.
You cannot use an RSA_2048 public key to directly wrap an ECC_NIST_P521 private key. Instead, use an RSA_AES wrapping algorithm or choose a longer RSA public key.
" } } }, @@ -2277,7 +2279,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"The identifier of the symmetric encryption KMS key that receives the imported key material. This must be the same KMS key specified in the KeyID parameter of the corresponding GetParametersForImport request. The Origin of the KMS key must be EXTERNAL. You cannot perform this operation on an asymmetric KMS key, an HMAC KMS key, a KMS key in a custom key store, or on a KMS key in a different Amazon Web Services account
Specify the key ID or key ARN of the KMS key.
For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
" + "documentation":"The identifier of the KMS key that will be associated with the imported key material. This must be the same KMS key specified in the KeyID parameter of the corresponding GetParametersForImport request. The Origin of the KMS key must be EXTERNAL and its KeyState must be PendingImport.
The KMS key can be a symmetric encryption KMS key, HMAC KMS key, asymmetric encryption KMS key, or asymmetric signing KMS key, including a multi-Region key of any supported type. You cannot perform this operation on a KMS key in a custom key store, or on a KMS key in a different Amazon Web Services account.
Specify the key ID or key ARN of the KMS key.
For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
To get the key ID and key ARN for a KMS key, use ListKeys or DescribeKey.
" }, "ImportToken":{ "shape":"CiphertextType", @@ -2285,7 +2287,7 @@ }, "EncryptedKeyMaterial":{ "shape":"CiphertextType", - "documentation":"The encrypted key material to import. The key material must be encrypted with the public wrapping key that GetParametersForImport returned, using the wrapping algorithm that you specified in the same GetParametersForImport request.
The encrypted key material to import. The key material must be encrypted under the public wrapping key that GetParametersForImport returned, using the wrapping algorithm that you specified in the same GetParametersForImport request.
Specifies whether the key material expires. The default is KEY_MATERIAL_EXPIRES.
When the value of ExpirationModel is KEY_MATERIAL_EXPIRES, you must specify a value for the ValidTo parameter. When value is KEY_MATERIAL_DOES_NOT_EXPIRE, you must omit the ValidTo parameter.
You cannot change the ExpirationModel or ValidTo values for the current import after the request completes. To change either value, you must delete (DeleteImportedKeyMaterial) and reimport the key material.
Specifies whether the key material expires. The default is KEY_MATERIAL_EXPIRES. For help with this choice, see Setting an expiration time in the Key Management Service Developer Guide.
When the value of ExpirationModel is KEY_MATERIAL_EXPIRES, you must specify a value for the ValidTo parameter. When the value is KEY_MATERIAL_DOES_NOT_EXPIRE, you must omit the ValidTo parameter.
You cannot change the ExpirationModel or ValidTo values for the current import after the request completes. To change either value, you must reimport the key material.
The waiting period, specified in number of days. After the waiting period ends, KMS deletes the KMS key.
If the KMS key is a multi-Region primary key with replica keys, the waiting period begins when the last of its replica keys is deleted. Otherwise, the waiting period begins immediately.
This value is optional. If you include a value, it must be between 7 and 30, inclusive. If you do not include a value, it defaults to 30.
" + "documentation":"The waiting period, specified in number of days. After the waiting period ends, KMS deletes the KMS key.
If the KMS key is a multi-Region primary key with replica keys, the waiting period begins when the last of its replica keys is deleted. Otherwise, the waiting period begins immediately.
This value is optional. If you include a value, it must be between 7 and 30, inclusive. If you do not include a value, it defaults to 30. You can use the kms:ScheduleKeyDeletionPendingWindowInDays condition key to further constrain the values that principals can specify in the PendingWindowInDays parameter.
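The optional-with-default behavior of PendingWindowInDays described above can be sketched as a small validator (a hypothetical helper, not an SDK API):

```python
from typing import Optional

def resolve_pending_window(days: Optional[int]) -> int:
    """Apply the documented rules for PendingWindowInDays: the value is
    optional, defaults to 30, and must be between 7 and 30 inclusive."""
    if days is None:
        return 30  # documented default waiting period
    if not 7 <= days <= 30:
        raise ValueError("PendingWindowInDays must be between 7 and 30, inclusive")
    return days
```

Note that a service-side kms:ScheduleKeyDeletionPendingWindowInDays condition key can narrow the allowed range further; this sketch checks only the hard API bounds.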
The cryptographic signature that was generated for the message.
When used with the supported RSA signing algorithms, the encoding of this value is defined by PKCS #1 in RFC 8017.
When used with the ECDSA_SHA_256, ECDSA_SHA_384, or ECDSA_SHA_512 signing algorithms, this value is a DER-encoded object as defined by ANS X9.62–2005 and RFC 3279 Section 2.2.3. This is the most commonly used signature format and is appropriate for most uses.
When you use the HTTP API or the Amazon Web Services CLI, the value is Base64-encoded. Otherwise, it is not Base64-encoded.
" + "documentation":"The cryptographic signature that was generated for the message.
When used with the supported RSA signing algorithms, the encoding of this value is defined by PKCS #1 in RFC 8017.
When used with the ECDSA_SHA_256, ECDSA_SHA_384, or ECDSA_SHA_512 signing algorithms, this value is a DER-encoded object as defined by ANSI X9.62–2005 and RFC 3279 Section 2.2.3. This is the most commonly used signature format and is appropriate for most uses.
When you use the HTTP API or the Amazon Web Services CLI, the value is Base64-encoded. Otherwise, it is not Base64-encoded.
" }, "SigningAlgorithm":{ "shape":"SigningAlgorithmSpec", @@ -3550,7 +3552,11 @@ }, "WrappingKeySpec":{ "type":"string", - "enum":["RSA_2048"] + "enum":[ + "RSA_2048", + "RSA_3072", + "RSA_4096" + ] }, "XksKeyAlreadyInUseException":{ "type":"structure", From 82fc9b07b668f534290db7f5430f43e085b130a8 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Mon, 5 Jun 2023 18:06:53 +0000 Subject: [PATCH 030/317] Amazon Fraud Detector Update: Added new variable types, new DateTime data type, and new rules engine functions for interacting and working with DateTime data types. --- .../feature-AmazonFraudDetector-2c13eaf.json | 6 ++++++ .../src/main/resources/codegen-resources/service-2.json | 9 +++++---- 2 files changed, 11 insertions(+), 4 deletions(-) create mode 100644 .changes/next-release/feature-AmazonFraudDetector-2c13eaf.json diff --git a/.changes/next-release/feature-AmazonFraudDetector-2c13eaf.json b/.changes/next-release/feature-AmazonFraudDetector-2c13eaf.json new file mode 100644 index 000000000000..24fe6116a448 --- /dev/null +++ b/.changes/next-release/feature-AmazonFraudDetector-2c13eaf.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Fraud Detector", + "contributor": "", + "description": "Added new variable types, new DateTime data type, and new rules engine functions for interacting and working with DateTime data types." +} diff --git a/services/frauddetector/src/main/resources/codegen-resources/service-2.json b/services/frauddetector/src/main/resources/codegen-resources/service-2.json index 77c0c5f77342..8f8a28fd9de8 100644 --- a/services/frauddetector/src/main/resources/codegen-resources/service-2.json +++ b/services/frauddetector/src/main/resources/codegen-resources/service-2.json @@ -308,7 +308,7 @@ {"shape":"AccessDeniedException"}, {"shape":"ValidationException"} ], - "documentation":"Deletes the specified event.
When you delete an event, Amazon Fraud Detector permanently deletes that event and the event data is no longer stored in Amazon Fraud Detector.
" + "documentation":"Deletes the specified event.
When you delete an event, Amazon Fraud Detector permanently deletes that event and the event data is no longer stored in Amazon Fraud Detector. If deleteAuditHistory is True, event data is available through search for up to 30 seconds after the delete operation is completed.
The data type.
" + "documentation":"The data type of the variable.
" }, "dataSource":{ "shape":"DataSource", @@ -2051,7 +2051,8 @@ "STRING", "INTEGER", "FLOAT", - "BOOLEAN" + "BOOLEAN", + "DATETIME" ] }, "DataValidationMetrics":{ @@ -2168,7 +2169,7 @@ }, "deleteAuditHistory":{ "shape":"DeleteAuditHistory", - "documentation":"Specifies whether or not to delete any predictions associated with the event.
" + "documentation":"Specifies whether or not to delete any predictions associated with the event. If set to True,
The ARN of the Key Management Service (KMS) customer managed key that's used to encrypt your function's environment variables. When Lambda SnapStart is activated, this key is also used to encrypt your function's snapshot. If you don't provide a customer managed key, Lambda uses a default service key.
" + "documentation":"The ARN of the Key Management Service (KMS) customer managed key that's used to encrypt your function's environment variables. When Lambda SnapStart is activated, Lambda also uses this key is to encrypt your function's snapshot. If you deploy your function using a container image, Lambda also uses this key to encrypt your function when it's deployed. Note that this is not the same key that's used to protect your container image in the Amazon Elastic Container Registry (Amazon ECR). If you don't provide a customer managed key, Lambda uses a default service key.
" }, "TracingConfig":{ "shape":"TracingConfig", @@ -2471,7 +2471,7 @@ }, "MaximumRecordAgeInSeconds":{ "shape":"MaximumRecordAgeInSeconds", - "documentation":"(Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records.
The minimum value that can be set is 60 seconds.
(Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records.
The minimum valid value for maximum record age is 60 seconds. Although values less than 60 and greater than -1 fall within the parameter's absolute range, they are not allowed.
The layer's compatible runtimes.
" + "documentation":"The layer's compatible runtimes.
The following list includes deprecated runtimes. For more information, see Runtime deprecation policy.
" }, "LicenseInfo":{ "shape":"LicenseInfo", @@ -3970,7 +3970,7 @@ }, "CompatibleRuntimes":{ "shape":"CompatibleRuntimes", - "documentation":"The layer's compatible runtimes.
" + "documentation":"The layer's compatible runtimes.
The following list includes deprecated runtimes. For more information, see Runtime deprecation policy.
" }, "LicenseInfo":{ "shape":"LicenseInfo", @@ -4289,7 +4289,7 @@ "members":{ "CompatibleRuntime":{ "shape":"Runtime", - "documentation":"A runtime identifier. For example, go1.x.
A runtime identifier. For example, go1.x.
The following list includes deprecated runtimes. For more information, see Runtime deprecation policy.
", "location":"querystring", "locationName":"CompatibleRuntime" }, @@ -4337,7 +4337,7 @@ "members":{ "CompatibleRuntime":{ "shape":"Runtime", - "documentation":"A runtime identifier. For example, go1.x.
A runtime identifier. For example, go1.x.
The following list includes deprecated runtimes. For more information, see Runtime deprecation policy.
", "location":"querystring", "locationName":"CompatibleRuntime" }, @@ -4744,7 +4744,7 @@ }, "CompatibleRuntimes":{ "shape":"CompatibleRuntimes", - "documentation":"A list of compatible function runtimes. Used for filtering with ListLayers and ListLayerVersions.
" + "documentation":"A list of compatible function runtimes. Used for filtering with ListLayers and ListLayerVersions.
The following list includes deprecated runtimes. For more information, see Runtime deprecation policy.
" }, "LicenseInfo":{ "shape":"LicenseInfo", @@ -4785,7 +4785,7 @@ }, "CompatibleRuntimes":{ "shape":"CompatibleRuntimes", - "documentation":"The layer's compatible runtimes.
" + "documentation":"The layer's compatible runtimes.
The following list includes deprecated runtimes. For more information, see Runtime deprecation policy.
" }, "LicenseInfo":{ "shape":"LicenseInfo", @@ -5209,7 +5209,8 @@ "provided.al2", "nodejs18.x", "python3.10", - "java17" + "java17", + "ruby3.2" ] }, "RuntimeVersionArn":{ @@ -5878,7 +5879,7 @@ }, "KMSKeyArn":{ "shape":"KMSKeyArn", - "documentation":"The ARN of the Key Management Service (KMS) customer managed key that's used to encrypt your function's environment variables. When Lambda SnapStart is activated, this key is also used to encrypt your function's snapshot. If you don't provide a customer managed key, Lambda uses a default service key.
" + "documentation":"The ARN of the Key Management Service (KMS) customer managed key that's used to encrypt your function's environment variables. When Lambda SnapStart is activated, Lambda also uses this key is to encrypt your function's snapshot. If you deploy your function using a container image, Lambda also uses this key to encrypt your function when it's deployed. Note that this is not the same key that's used to protect your container image in the Amazon Elastic Container Registry (Amazon ECR). If you don't provide a customer managed key, Lambda uses a default service key.
" }, "TracingConfig":{ "shape":"TracingConfig", From 3e416c455afc42175344f3cb92c65483e8c9f0d7 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Mon, 5 Jun 2023 18:07:49 +0000 Subject: [PATCH 032/317] AmazonMWAA Update: This release adds ROLLING_BACK and CREATING_SNAPSHOT environment statuses for Amazon MWAA environments. --- .../feature-AmazonMWAA-e2a609e.json | 6 + .../codegen-resources/endpoint-tests.json | 142 +++++++++--------- .../codegen-resources/service-2.json | 14 +- 3 files changed, 85 insertions(+), 77 deletions(-) create mode 100644 .changes/next-release/feature-AmazonMWAA-e2a609e.json diff --git a/.changes/next-release/feature-AmazonMWAA-e2a609e.json b/.changes/next-release/feature-AmazonMWAA-e2a609e.json new file mode 100644 index 000000000000..72d438501ff0 --- /dev/null +++ b/.changes/next-release/feature-AmazonMWAA-e2a609e.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AmazonMWAA", + "contributor": "", + "description": "This release adds ROLLING_BACK and CREATING_SNAPSHOT environment statuses for Amazon MWAA environments." 
+} diff --git a/services/mwaa/src/main/resources/codegen-resources/endpoint-tests.json b/services/mwaa/src/main/resources/codegen-resources/endpoint-tests.json index 2f79f36beac4..87bc8970e1fd 100644 --- a/services/mwaa/src/main/resources/codegen-resources/endpoint-tests.json +++ b/services/mwaa/src/main/resources/codegen-resources/endpoint-tests.json @@ -8,9 +8,9 @@ } }, "params": { + "Region": "ap-northeast-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-northeast-1" + "UseDualStack": false } }, { @@ -21,9 +21,9 @@ } }, "params": { + "Region": "ap-northeast-2", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-northeast-2" + "UseDualStack": false } }, { @@ -34,9 +34,9 @@ } }, "params": { + "Region": "ap-south-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-south-1" + "UseDualStack": false } }, { @@ -47,9 +47,9 @@ } }, "params": { + "Region": "ap-southeast-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-southeast-1" + "UseDualStack": false } }, { @@ -60,9 +60,9 @@ } }, "params": { + "Region": "ap-southeast-2", "UseFIPS": false, - "UseDualStack": false, - "Region": "ap-southeast-2" + "UseDualStack": false } }, { @@ -73,9 +73,9 @@ } }, "params": { + "Region": "ca-central-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "ca-central-1" + "UseDualStack": false } }, { @@ -86,9 +86,9 @@ } }, "params": { + "Region": "eu-central-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "eu-central-1" + "UseDualStack": false } }, { @@ -99,9 +99,9 @@ } }, "params": { + "Region": "eu-north-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "eu-north-1" + "UseDualStack": false } }, { @@ -112,9 +112,9 @@ } }, "params": { + "Region": "eu-west-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "eu-west-1" + "UseDualStack": false } }, { @@ -125,9 +125,9 @@ } }, "params": { + "Region": "eu-west-2", "UseFIPS": false, - "UseDualStack": false, - "Region": "eu-west-2" + "UseDualStack": false } }, { @@ 
-138,9 +138,9 @@ } }, "params": { + "Region": "eu-west-3", "UseFIPS": false, - "UseDualStack": false, - "Region": "eu-west-3" + "UseDualStack": false } }, { @@ -151,9 +151,9 @@ } }, "params": { + "Region": "sa-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "sa-east-1" + "UseDualStack": false } }, { @@ -164,9 +164,9 @@ } }, "params": { + "Region": "us-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "us-east-1" + "UseDualStack": false } }, { @@ -177,9 +177,9 @@ } }, "params": { + "Region": "us-east-2", "UseFIPS": false, - "UseDualStack": false, - "Region": "us-east-2" + "UseDualStack": false } }, { @@ -190,9 +190,9 @@ } }, "params": { + "Region": "us-west-2", "UseFIPS": false, - "UseDualStack": false, - "Region": "us-west-2" + "UseDualStack": false } }, { @@ -203,9 +203,9 @@ } }, "params": { + "Region": "us-east-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "us-east-1" + "UseDualStack": true } }, { @@ -216,9 +216,9 @@ } }, "params": { + "Region": "us-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "us-east-1" + "UseDualStack": false } }, { @@ -229,9 +229,9 @@ } }, "params": { + "Region": "us-east-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "us-east-1" + "UseDualStack": true } }, { @@ -242,9 +242,9 @@ } }, "params": { + "Region": "cn-north-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "cn-north-1" + "UseDualStack": true } }, { @@ -255,9 +255,9 @@ } }, "params": { + "Region": "cn-north-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "cn-north-1" + "UseDualStack": false } }, { @@ -268,9 +268,9 @@ } }, "params": { + "Region": "cn-north-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "cn-north-1" + "UseDualStack": true } }, { @@ -281,9 +281,9 @@ } }, "params": { + "Region": "cn-north-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "cn-north-1" + "UseDualStack": false } }, { @@ -294,9 +294,9 @@ } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": true, - 
"UseDualStack": true, - "Region": "us-gov-east-1" + "UseDualStack": true } }, { @@ -307,9 +307,9 @@ } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "us-gov-east-1" + "UseDualStack": false } }, { @@ -320,9 +320,9 @@ } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "us-gov-east-1" + "UseDualStack": true } }, { @@ -333,9 +333,9 @@ } }, "params": { + "Region": "us-gov-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "us-gov-east-1" + "UseDualStack": false } }, { @@ -344,9 +344,9 @@ "error": "FIPS and DualStack are enabled, but this partition does not support one or both" }, "params": { + "Region": "us-iso-east-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "us-iso-east-1" + "UseDualStack": true } }, { @@ -357,9 +357,9 @@ } }, "params": { + "Region": "us-iso-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "us-iso-east-1" + "UseDualStack": false } }, { @@ -368,9 +368,9 @@ "error": "DualStack is enabled but this partition does not support DualStack" }, "params": { + "Region": "us-iso-east-1", "UseFIPS": false, - "UseDualStack": true, - "Region": "us-iso-east-1" + "UseDualStack": true } }, { @@ -381,9 +381,9 @@ } }, "params": { + "Region": "us-iso-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "us-iso-east-1" + "UseDualStack": false } }, { @@ -392,9 +392,9 @@ "error": "FIPS and DualStack are enabled, but this partition does not support one or both" }, "params": { + "Region": "us-isob-east-1", "UseFIPS": true, - "UseDualStack": true, - "Region": "us-isob-east-1" + "UseDualStack": true } }, { @@ -405,9 +405,9 @@ } }, "params": { + "Region": "us-isob-east-1", "UseFIPS": true, - "UseDualStack": false, - "Region": "us-isob-east-1" + "UseDualStack": false } }, { @@ -416,9 +416,9 @@ "error": "DualStack is enabled but this partition does not support DualStack" }, "params": { + "Region": "us-isob-east-1", "UseFIPS": false, - 
"UseDualStack": true, - "Region": "us-isob-east-1" + "UseDualStack": true } }, { @@ -429,9 +429,9 @@ } }, "params": { + "Region": "us-isob-east-1", "UseFIPS": false, - "UseDualStack": false, - "Region": "us-isob-east-1" + "UseDualStack": false } }, { @@ -442,9 +442,9 @@ } }, "params": { + "Region": "us-east-1", "UseFIPS": false, "UseDualStack": false, - "Region": "us-east-1", "Endpoint": "https://example.com" } }, @@ -467,9 +467,9 @@ "error": "Invalid Configuration: FIPS and custom endpoint are not supported" }, "params": { + "Region": "us-east-1", "UseFIPS": true, "UseDualStack": false, - "Region": "us-east-1", "Endpoint": "https://example.com" } }, @@ -479,9 +479,9 @@ "error": "Invalid Configuration: Dualstack and custom endpoint are not supported" }, "params": { + "Region": "us-east-1", "UseFIPS": false, "UseDualStack": true, - "Region": "us-east-1", "Endpoint": "https://example.com" } }, diff --git a/services/mwaa/src/main/resources/codegen-resources/service-2.json b/services/mwaa/src/main/resources/codegen-resources/service-2.json index 40802ad51444..a58562a3f9b5 100644 --- a/services/mwaa/src/main/resources/codegen-resources/service-2.json +++ b/services/mwaa/src/main/resources/codegen-resources/service-2.json @@ -285,7 +285,7 @@ }, "AirflowVersion":{ "shape":"AirflowVersion", - "documentation":"The Apache Airflow version for your environment. If no value is specified, it defaults to the latest version. Valid values: 1.10.12, 2.0.2, 2.2.2, and 2.4.3. For more information, see Apache Airflow versions on Amazon Managed Workflows for Apache Airflow (MWAA).
The Apache Airflow version for your environment. If no value is specified, it defaults to the latest version. Valid values: 1.10.12, 2.0.2, 2.2.2, 2.4.3, and 2.5.1. For more information, see Apache Airflow versions on Amazon Managed Workflows for Apache Airflow (MWAA).
The Apache Airflow version on your environment. Valid values: 1.10.12, 2.0.2, 2.2.2, and 2.4.3.
The Apache Airflow version on your environment. Valid values: 1.10.12, 2.0.2, 2.2.2, 2.4.3, and 2.5.1.
The status of the Amazon MWAA environment. Valid values:
CREATING - Indicates the request to create the environment is in progress.
CREATE_FAILED - Indicates the request to create the environment failed, and the environment could not be created.
AVAILABLE - Indicates the request was successful and the environment is ready to use.
UPDATING - Indicates the request to update the environment is in progress.
DELETING - Indicates the request to delete the environment is in progress.
DELETED - Indicates the request to delete the environment is complete, and the environment has been deleted.
UNAVAILABLE - Indicates the request failed, but the environment was unable to rollback and is not in a stable state.
UPDATE_FAILED - Indicates the request to update the environment failed, and the environment has rolled back successfully and is ready to use.
We recommend reviewing our troubleshooting guide for a list of common errors and their solutions. For more information, see Amazon MWAA troubleshooting.
" + "documentation":"The status of the Amazon MWAA environment. Valid values:
CREATING - Indicates the request to create the environment is in progress.
CREATING_SNAPSHOT - Indicates the request to update environment details, or upgrade the environment version, is in progress and Amazon MWAA is creating a storage volume snapshot of the Amazon RDS database cluster associated with the environment. A database snapshot is a backup created at a specific point in time. Amazon MWAA uses snapshots to recover environment metadata if the process to update or upgrade an environment fails.
CREATE_FAILED - Indicates the request to create the environment failed, and the environment could not be created.
AVAILABLE - Indicates the request was successful and the environment is ready to use.
UPDATING - Indicates the request to update the environment is in progress.
ROLLING_BACK - Indicates the request to update environment details, or upgrade the environment version, failed and Amazon MWAA is restoring the environment using the latest storage volume snapshot.
DELETING - Indicates the request to delete the environment is in progress.
DELETED - Indicates the request to delete the environment is complete, and the environment has been deleted.
UNAVAILABLE - Indicates the request failed, but the environment was unable to rollback and is not in a stable state.
UPDATE_FAILED - Indicates the request to update the environment failed, and the environment has rolled back successfully and is ready to use.
We recommend reviewing our troubleshooting guide for a list of common errors and their solutions. For more information, see Amazon MWAA troubleshooting.
" }, "Tags":{ "shape":"TagMap", @@ -599,7 +599,9 @@ "DELETING", "DELETED", "UNAVAILABLE", - "UPDATE_FAILED" + "UPDATE_FAILED", + "ROLLING_BACK", + "CREATING_SNAPSHOT" ] }, "ErrorCode":{"type":"string"}, @@ -1139,7 +1141,7 @@ }, "AirflowVersion":{ "shape":"AirflowVersion", - "documentation":"The Apache Airflow version for your environment. If no value is specified, defaults to the latest version. Valid values: 1.10.12, 2.0.2, 2.2.2, and 2.4.3.
The Apache Airflow version for your environment. To upgrade your environment, specify a newer version of Apache Airflow supported by Amazon MWAA.
Before you upgrade an environment, make sure your requirements, DAGs, plugins, and other resources used in your workflows are compatible with the new Apache Airflow version. For more information about updating your resources, see Upgrading an Amazon MWAA environment.
Valid values: 1.10.12, 2.0.2, 2.2.2, 2.4.3, and 2.5.1.
This section contains the Amazon Managed Workflows for Apache Airflow (MWAA) API reference documentation. For more information, see What Is Amazon MWAA?.
Endpoints
api.airflow.{region}.amazonaws.com - This endpoint is used for environment management.
env.airflow.{region}.amazonaws.com - This endpoint is used to operate the Airflow environment.
ops.airflow.{region}.amazonaws.com - This endpoint is used to push environment metrics that track environment health.
Regions
For a list of regions that Amazon MWAA supports, see Region availability in the Amazon MWAA User Guide.
" + "documentation":"This section contains the Amazon Managed Workflows for Apache Airflow (MWAA) API reference documentation. For more information, see What is Amazon MWAA?.
Endpoints
api.airflow.{region}.amazonaws.com - This endpoint is used for environment management.
env.airflow.{region}.amazonaws.com - This endpoint is used to operate the Airflow environment.
ops.airflow.{region}.amazonaws.com - This endpoint is used to push environment metrics that track environment health.
Regions
For a list of regions that Amazon MWAA supports, see Region availability in the Amazon MWAA User Guide.
" } From 1db009e54d22d2b1cf774ef997495431fe0a8fe6 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Mon, 5 Jun 2023 18:08:00 +0000 Subject: [PATCH 033/317] AWS CloudFormation Update: AWS CloudFormation StackSets provides customers with three new APIs to activate, deactivate, and describe AWS Organizations trusted access which is needed to get started with service-managed StackSets. --- .../feature-AWSCloudFormation-db2d2f2.json | 6 + .../codegen-resources/service-2.json | 229 +++++++++++++----- 2 files changed, 178 insertions(+), 57 deletions(-) create mode 100644 .changes/next-release/feature-AWSCloudFormation-db2d2f2.json diff --git a/.changes/next-release/feature-AWSCloudFormation-db2d2f2.json b/.changes/next-release/feature-AWSCloudFormation-db2d2f2.json new file mode 100644 index 000000000000..7d54baadeaf2 --- /dev/null +++ b/.changes/next-release/feature-AWSCloudFormation-db2d2f2.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS CloudFormation", + "contributor": "", + "description": "AWS CloudFormation StackSets provides customers with three new APIs to activate, deactivate, and describe AWS Organizations trusted access which is needed to get started with service-managed StackSets." 
+} diff --git a/services/cloudformation/src/main/resources/codegen-resources/service-2.json b/services/cloudformation/src/main/resources/codegen-resources/service-2.json index 72298da019ca..3a2aff947ad6 100644 --- a/services/cloudformation/src/main/resources/codegen-resources/service-2.json +++ b/services/cloudformation/src/main/resources/codegen-resources/service-2.json @@ -11,6 +11,23 @@ "xmlNamespace":"http://cloudformation.amazonaws.com/doc/2010-05-15/" }, "operations":{ + "ActivateOrganizationsAccess":{ + "name":"ActivateOrganizationsAccess", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ActivateOrganizationsAccessInput"}, + "output":{ + "shape":"ActivateOrganizationsAccessOutput", + "resultWrapper":"ActivateOrganizationsAccessResult" + }, + "errors":[ + {"shape":"InvalidOperationException"}, + {"shape":"OperationNotFoundException"} + ], + "documentation":"Activate trusted access with Organizations. With trusted access between StackSets and Organizations activated, the management account has permissions to create and manage StackSets for your organization.
" + }, "ActivateType":{ "name":"ActivateType", "http":{ @@ -26,7 +43,7 @@ {"shape":"CFNRegistryException"}, {"shape":"TypeNotFoundException"} ], - "documentation":"Activates a public third-party extension, making it available for use in stack templates. For more information, see Using public extensions in the CloudFormation User Guide.
Once you have activated a public third-party extension in your account and region, use SetTypeConfiguration to specify configuration properties for the extension. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
", + "documentation":"Activates a public third-party extension, making it available for use in stack templates. For more information, see Using public extensions in the CloudFormation User Guide.
Once you have activated a public third-party extension in your account and Region, use SetTypeConfiguration to specify configuration properties for the extension. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
", "idempotent":true }, "BatchDescribeTypeConfigurations":{ @@ -44,7 +61,7 @@ {"shape":"TypeConfigurationNotFoundException"}, {"shape":"CFNRegistryException"} ], - "documentation":"Returns configuration data for the specified CloudFormation extensions, from the CloudFormation registry for the account and region.
For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
" + "documentation":"Returns configuration data for the specified CloudFormation extensions, from the CloudFormation registry for the account and Region.
For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
" }, "CancelUpdateStack":{ "name":"CancelUpdateStack", @@ -150,6 +167,23 @@ ], "documentation":"Creates a stack set.
" }, + "DeactivateOrganizationsAccess":{ + "name":"DeactivateOrganizationsAccess", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeactivateOrganizationsAccessInput"}, + "output":{ + "shape":"DeactivateOrganizationsAccessOutput", + "resultWrapper":"DeactivateOrganizationsAccessResult" + }, + "errors":[ + {"shape":"InvalidOperationException"}, + {"shape":"OperationNotFoundException"} + ], + "documentation":"Deactivates trusted access with Organizations. If trusted access is deactivated, the management account does not have permissions to create and manage service-managed StackSets for your organization.
" + }, "DeactivateType":{ "name":"DeactivateType", "http":{ @@ -165,7 +199,7 @@ {"shape":"CFNRegistryException"}, {"shape":"TypeNotFoundException"} ], - "documentation":"Deactivates a public extension that was previously activated in this account and region.
Once deactivated, an extension can't be used in any CloudFormation operation. This includes stack update operations where the stack template includes the extension, even if no updates are being made to the extension. In addition, deactivated extensions aren't automatically updated if a new version of the extension is released.
", + "documentation":"Deactivates a public extension that was previously activated in this account and Region.
Once deactivated, an extension can't be used in any CloudFormation operation. This includes stack update operations where the stack template includes the extension, even if no updates are being made to the extension. In addition, deactivated extensions aren't automatically updated if a new version of the extension is released.
", "idempotent":true }, "DeleteChangeSet":{ @@ -296,6 +330,23 @@ ], "documentation":"Returns hook-related information for the change set and a list of changes that CloudFormation makes when you run the change set.
" }, + "DescribeOrganizationsAccess":{ + "name":"DescribeOrganizationsAccess", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeOrganizationsAccessInput"}, + "output":{ + "shape":"DescribeOrganizationsAccessOutput", + "resultWrapper":"DescribeOrganizationsAccessResult" + }, + "errors":[ + {"shape":"InvalidOperationException"}, + {"shape":"OperationNotFoundException"} + ], + "documentation":"Retrieves information about the account's OrganizationAccess status. This API can be called either by the management account or the delegated administrator by using the CallAs parameter. This API can also be called without the CallAs parameter by the management account.
Returns information about a stack drift detection operation. A stack drift detection operation detects whether a stack's actual configuration differs, or has drifted, from it's expected configuration, as defined in the stack template and any values specified as template parameters. A stack is considered to have drifted if one or more of its resources have drifted. For more information about stack and resource drift, see Detecting Unregulated Configuration Changes to Stacks and Resources.
Use DetectStackDrift to initiate a stack drift detection operation. DetectStackDrift returns a StackDriftDetectionId you can use to monitor the progress of the operation using DescribeStackDriftDetectionStatus. Once the drift detection operation has completed, use DescribeStackResourceDrifts to return drift information about the stack and its resources.
Returns information about a stack drift detection operation. A stack drift detection operation detects whether a stack's actual configuration differs, or has drifted, from its expected configuration, as defined in the stack template and any values specified as template parameters. A stack is considered to have drifted if one or more of its resources have drifted. For more information about stack and resource drift, see Detecting Unregulated Configuration Changes to Stacks and Resources.
Use DetectStackDrift to initiate a stack drift detection operation. DetectStackDrift returns a StackDriftDetectionId you can use to monitor the progress of the operation using DescribeStackDriftDetectionStatus. Once the drift detection operation has completed, use DescribeStackResourceDrifts to return drift information about the stack and its resources.
Returns the stack instance that's associated with the specified stack set, Amazon Web Services account, and Region.
For a list of stack instances that are associated with a specific stack set, use ListStackInstances.
" + "documentation":"Returns the stack instance that's associated with the specified StackSet, Amazon Web Services account, and Amazon Web Services Region.
For a list of stack instances that are associated with a specific StackSet, use ListStackInstances.
" }, "DescribeStackResource":{ "name":"DescribeStackResource", @@ -409,7 +460,7 @@ "errors":[ {"shape":"StackSetNotFoundException"} ], - "documentation":"Returns the description of the specified stack set.
" + "documentation":"Returns the description of the specified StackSet.
" }, "DescribeStackSetOperation":{ "name":"DescribeStackSetOperation", @@ -426,7 +477,7 @@ {"shape":"StackSetNotFoundException"}, {"shape":"OperationNotFoundException"} ], - "documentation":"Returns the description of the specified stack set operation.
" + "documentation":"Returns the description of the specified StackSet operation.
" }, "DescribeStacks":{ "name":"DescribeStacks", @@ -487,7 +538,7 @@ "shape":"DetectStackDriftOutput", "resultWrapper":"DetectStackDriftResult" }, - "documentation":"Detects whether a stack's actual configuration differs, or has drifted, from it's expected configuration, as defined in the stack template and any values specified as template parameters. For each resource in the stack that supports drift detection, CloudFormation compares the actual configuration of the resource with its expected template configuration. Only resource properties explicitly defined in the stack template are checked for drift. A stack is considered to have drifted if one or more of its resources differ from their expected template configurations. For more information, see Detecting Unregulated Configuration Changes to Stacks and Resources.
Use DetectStackDrift to detect drift on all supported resources for a given stack, or DetectStackResourceDrift to detect drift on individual resources.
For a list of stack resources that currently support drift detection, see Resources that Support Drift Detection.
DetectStackDrift can take up to several minutes, depending on the number of resources contained within the stack. Use DescribeStackDriftDetectionStatus to monitor the progress of a detect stack drift operation. Once the drift detection operation has completed, use DescribeStackResourceDrifts to return drift information about the stack and its resources.
When detecting drift on a stack, CloudFormation doesn't detect drift on any nested stacks belonging to that stack. Perform DetectStackDrift directly on the nested stack itself.
Detects whether a stack's actual configuration differs, or has drifted, from its expected configuration, as defined in the stack template and any values specified as template parameters. For each resource in the stack that supports drift detection, CloudFormation compares the actual configuration of the resource with its expected template configuration. Only resource properties explicitly defined in the stack template are checked for drift. A stack is considered to have drifted if one or more of its resources differ from their expected template configurations. For more information, see Detecting Unregulated Configuration Changes to Stacks and Resources.
Use DetectStackDrift to detect drift on all supported resources for a given stack, or DetectStackResourceDrift to detect drift on individual resources.
For a list of stack resources that currently support drift detection, see Resources that Support Drift Detection.
DetectStackDrift can take up to several minutes, depending on the number of resources contained within the stack. Use DescribeStackDriftDetectionStatus to monitor the progress of a detect stack drift operation. Once the drift detection operation has completed, use DescribeStackResourceDrifts to return drift information about the stack and its resources.
When detecting drift on a stack, CloudFormation doesn't detect drift on any nested stacks belonging to that stack. Perform DetectStackDrift directly on the nested stack itself.
Returns information about whether a resource's actual configuration differs, or has drifted, from it's expected configuration, as defined in the stack template and any values specified as template parameters. This information includes actual and expected property values for resources in which CloudFormation detects drift. Only resource properties explicitly defined in the stack template are checked for drift. For more information about stack and resource drift, see Detecting Unregulated Configuration Changes to Stacks and Resources.
Use DetectStackResourceDrift to detect drift on individual resources, or DetectStackDrift to detect drift on all resources in a given stack that support drift detection.
Resources that don't currently support drift detection can't be checked. For a list of resources that support drift detection, see Resources that Support Drift Detection.
" + "documentation":"Returns information about whether a resource's actual configuration differs, or has drifted, from its expected configuration, as defined in the stack template and any values specified as template parameters. This information includes actual and expected property values for resources in which CloudFormation detects drift. Only resource properties explicitly defined in the stack template are checked for drift. For more information about stack and resource drift, see Detecting Unregulated Configuration Changes to Stacks and Resources.
Use DetectStackResourceDrift to detect drift on individual resources, or DetectStackDrift to detect drift on all resources in a given stack that support drift detection.
Resources that don't currently support drift detection can't be checked. For a list of resources that support drift detection, see Resources that Support Drift Detection.
" }, "DetectStackSetDrift":{ "name":"DetectStackSetDrift", @@ -617,7 +668,7 @@ {"shape":"StackNotFoundException"}, {"shape":"StaleRequestException"} ], - "documentation":"Import existing stacks into a new stack sets. Use the stack import operation to import up to 10 stacks into a new stack set in the same account as the source stack or in a different administrator account and Region, by specifying the stack ID of the stack you intend to import.
ImportStacksToStackSet is only supported by self-managed permissions.
Import existing stacks into a new stack sets. Use the stack import operation to import up to 10 stacks into a new stack set in the same account as the source stack or in a different administrator account and Region, by specifying the stack ID of the stack you intend to import.
" }, "ListChangeSets":{ "name":"ListChangeSets", @@ -812,7 +863,7 @@ {"shape":"CFNRegistryException"}, {"shape":"TypeNotFoundException"} ], - "documentation":"Publishes the specified extension to the CloudFormation registry as a public extension in this region. Public extensions are available for use by all CloudFormation users. For more information about publishing extensions, see Publishing extensions to make them available for public use in the CloudFormation CLI User Guide.
To publish an extension, you must be registered as a publisher with CloudFormation. For more information, see RegisterPublisher.
", + "documentation":"Publishes the specified extension to the CloudFormation registry as a public extension in this Region. Public extensions are available for use by all CloudFormation users. For more information about publishing extensions, see Publishing extensions to make them available for public use in the CloudFormation CLI User Guide.
To publish an extension, you must be registered as a publisher with CloudFormation. For more information, see RegisterPublisher.
", "idempotent":true }, "RecordHandlerProgress":{ @@ -864,7 +915,7 @@ "errors":[ {"shape":"CFNRegistryException"} ], - "documentation":"Registers an extension with the CloudFormation service. Registering an extension makes it available for use in CloudFormation templates in your Amazon Web Services account, and includes:
Validating the extension schema.
Determining which handlers, if any, have been specified for the extension.
Making the extension available for use in your account.
For more information about how to develop extensions and ready them for registration, see Creating Resource Providers in the CloudFormation CLI User Guide.
You can have a maximum of 50 resource extension versions registered at a time. This maximum is per account and per region. Use DeregisterType to deregister specific extension versions if necessary.
Once you have initiated a registration request using RegisterType , you can use DescribeTypeRegistration to monitor the progress of the registration request.
Once you have registered a private extension in your account and region, use SetTypeConfiguration to specify configuration properties for the extension. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
", + "documentation":"Registers an extension with the CloudFormation service. Registering an extension makes it available for use in CloudFormation templates in your Amazon Web Services account, and includes:
Validating the extension schema.
Determining which handlers, if any, have been specified for the extension.
Making the extension available for use in your account.
For more information about how to develop extensions and ready them for registration, see Creating Resource Providers in the CloudFormation CLI User Guide.
You can have a maximum of 50 resource extension versions registered at a time. This maximum is per account and per Region. Use DeregisterType to deregister specific extension versions if necessary.
Once you have initiated a registration request using RegisterType , you can use DescribeTypeRegistration to monitor the progress of the registration request.
Once you have registered a private extension in your account and Region, use SetTypeConfiguration to specify configuration properties for the extension. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
", "idempotent":true }, "RollbackStack":{ @@ -907,7 +958,7 @@ {"shape":"CFNRegistryException"}, {"shape":"TypeNotFoundException"} ], - "documentation":"Specifies the configuration data for a registered CloudFormation extension, in the given account and region.
To view the current configuration data for an extension, refer to the ConfigurationSchema element of DescribeType. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
It's strongly recommended that you use dynamic references to restrict sensitive configuration definitions, such as third-party credentials. For more details on dynamic references, see Using dynamic references to specify template values in the CloudFormation User Guide.
Specifies the configuration data for a registered CloudFormation extension, in the given account and Region.
To view the current configuration data for an extension, refer to the ConfigurationSchema element of DescribeType. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
It's strongly recommended that you use dynamic references to restrict sensitive configuration definitions, such as third-party credentials. For more details on dynamic references, see Using dynamic references to specify template values in the CloudFormation User Guide.
Tests a registered extension to make sure it meets all necessary requirements for being published in the CloudFormation registry.
For resource types, this includes passing all contracts tests defined for the type.
For modules, this includes determining if the module's model meets all necessary requirements.
For more information, see Testing your public extension prior to publishing in the CloudFormation CLI User Guide.
If you don't specify a version, CloudFormation uses the default version of the extension in your account and region for testing.
To perform testing, CloudFormation assumes the execution role specified when the type was registered. For more information, see RegisterType.
Once you've initiated testing on an extension using TestType, you can pass the returned TypeVersionArn into DescribeType to monitor the current test status and test status description for the extension.
An extension must have a test status of PASSED before it can be published. For more information, see Publishing extensions to make them available for public use in the CloudFormation CLI User Guide.
Tests a registered extension to make sure it meets all necessary requirements for being published in the CloudFormation registry.
For resource types, this includes passing all contracts tests defined for the type.
For modules, this includes determining if the module's model meets all necessary requirements.
For more information, see Testing your public extension prior to publishing in the CloudFormation CLI User Guide.
If you don't specify a version, CloudFormation uses the default version of the extension in your account and Region for testing.
To perform testing, CloudFormation assumes the execution role specified when the type was registered. For more information, see RegisterType.
Once you've initiated testing on an extension using TestType, you can pass the returned TypeVersionArn into DescribeType to monitor the current test status and test status description for the extension.
An extension must have a test status of PASSED before it can be published. For more information, see Publishing extensions to make them available for public use in the CloudFormation CLI User Guide.
An alias to assign to the public extension, in this account and region. If you specify an alias for the extension, CloudFormation treats the alias as the extension type name within this account and region. You must use the alias to refer to the extension in your templates, API calls, and CloudFormation console.
An extension alias must be unique within a given account and region. You can activate the same public resource multiple times in the same account and region, using different type name aliases.
" + "documentation":"An alias to assign to the public extension, in this account and Region. If you specify an alias for the extension, CloudFormation treats the alias as the extension type name within this account and Region. You must use the alias to refer to the extension in your templates, API calls, and CloudFormation console.
An extension alias must be unique within a given account and Region. You can activate the same public resource multiple times in the same account and Region, using different type name aliases.
" }, "AutoUpdate":{ "shape":"AutoUpdate", - "documentation":"Whether to automatically update the extension in this account and region when a new minor version is published by the extension publisher. Major versions released by the publisher must be manually updated.
The default is true.
Whether to automatically update the extension in this account and Region when a new minor version is published by the extension publisher. Major versions released by the publisher must be manually updated.
The default is true.
Contains logging configuration information for an extension.
" }, - "LoggingConfig":{"shape":"LoggingConfig"}, "ExecutionRoleArn":{ "shape":"RoleArn", "documentation":"The name of the IAM execution role to use to activate the extension.
" @@ -1171,7 +1235,7 @@ "members":{ "Arn":{ "shape":"PrivateTypeArn", - "documentation":"The Amazon Resource Name (ARN) of the activated extension, in this account and region.
" + "documentation":"The Amazon Resource Name (ARN) of the activated extension, in this account and Region.
" } } }, @@ -1220,7 +1284,10 @@ "shape":"ErrorMessage", "documentation":"The error message.
" }, - "TypeConfigurationIdentifier":{"shape":"TypeConfigurationIdentifier"} + "TypeConfigurationIdentifier":{ + "shape":"TypeConfigurationIdentifier", + "documentation":"Identifying information for the configuration of a CloudFormation extension.
" + } }, "documentation":"Detailed information concerning an error generated during the setting of configuration data for a CloudFormation extension.
" }, @@ -1291,7 +1358,7 @@ "members":{ "StackName":{ "shape":"StackName", - "documentation":"The name or the unique stack ID that's associated with the stack.
" + "documentation":"If you don't pass a parameter to StackName, the API returns a response that describes all resources in the account.
The IAM policy below can be added to IAM policies when you want to limit resource-level permissions and avoid returning a response when no parameter is sent in the request:
{ \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Deny\", \"Action\": \"cloudformation:DescribeStacks\", \"NotResource\": \"arn:aws:cloudformation:*:*:stack/*/*\" }] }
The name or the unique stack ID that's associated with the stack.
" }, "ClientRequestToken":{ "shape":"ClientRequestToken", @@ -1921,12 +1988,22 @@ "exception":true }, "CreationTime":{"type":"timestamp"}, + "DeactivateOrganizationsAccessInput":{ + "type":"structure", + "members":{ + } + }, + "DeactivateOrganizationsAccessOutput":{ + "type":"structure", + "members":{ + } + }, "DeactivateTypeInput":{ "type":"structure", "members":{ "TypeName":{ "shape":"TypeName", - "documentation":"The type name of the extension, in this account and region. If you specified a type name alias when enabling the extension, use the type name alias.
Conditional: You must specify either Arn, or TypeName and Type.
The type name of the extension, in this account and Region. If you specified a type name alias when enabling the extension, use the type name alias.
Conditional: You must specify either Arn, or TypeName and Type.
The Amazon Resource Name (ARN) for the extension, in this account and region.
Conditional: You must specify either Arn, or TypeName and Type.
The Amazon Resource Name (ARN) for the extension, in this account and Region.
Conditional: You must specify either Arn, or TypeName and Type.
The output for the DescribeChangeSet action.
" }, + "DescribeOrganizationsAccessInput":{ + "type":"structure", + "members":{ + "CallAs":{ + "shape":"CallAs", + "documentation":"[Service-managed permissions] Specifies whether you are acting as an account administrator in the organization's management account or as a delegated administrator in a member account.
By default, SELF is specified.
If you are signed in to the management account, specify SELF.
If you are signed in to a delegated administrator account, specify DELEGATED_ADMIN.
Your Amazon Web Services account must be registered as a delegated administrator in the management account. For more information, see Register a delegated administrator in the CloudFormation User Guide.
Presents the status of the OrganizationAccess.
The name or the unique stack ID that's associated with the stack, which aren't always interchangeable:
Running stacks: You can specify either the stack's name or its unique stack ID.
Deleted stacks: You must specify the unique stack ID.
Default: There is no default value.
" + "documentation":"If you don't pass a parameter to StackName, the API returns a response that describes all resources in the account. This requires ListStacks and DescribeStacks permissions.
The IAM policy below can be added to IAM policies when you want to limit resource-level permissions and avoid returning a response when no parameter is sent in the request:
{ \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Deny\", \"Action\": \"cloudformation:DescribeStacks\", \"NotResource\": \"arn:aws:cloudformation:*:*:stack/*/*\" }] }
The name or the unique stack ID that's associated with the stack, which aren't always interchangeable:
Running stacks: You can specify either the stack's name or its unique stack ID.
Deleted stacks: You must specify the unique stack ID.
Default: There is no default value.
" }, "NextToken":{ "shape":"NextToken", @@ -2719,7 +2814,7 @@ }, "ConfigurationSchema":{ "shape":"ConfigurationSchema", - "documentation":"A JSON string that represent the current configuration data for the extension in this account and region.
To set the configuration data for an extension, use SetTypeConfiguration. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
" + "documentation":"A JSON string that represent the current configuration data for the extension in this account and Region.
To set the configuration data for an extension, use SetTypeConfiguration. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
" }, "PublisherId":{ "shape":"PublisherId", @@ -2727,11 +2822,11 @@ }, "OriginalTypeName":{ "shape":"TypeName", - "documentation":"For public extensions that have been activated for this account and region, the type name of the public extension.
If you specified a TypeNameAlias when enabling the extension in this account and region, CloudFormation treats that alias as the extension's type name within the account and region, not the type name of the public extension. For more information, see Specifying aliases to refer to extensions in the CloudFormation User Guide.
For public extensions that have been activated for this account and Region, the type name of the public extension.
If you specified a TypeNameAlias when enabling the extension in this account and Region, CloudFormation treats that alias as the extension's type name within the account and Region, not the type name of the public extension. For more information, see Specifying aliases to refer to extensions in the CloudFormation User Guide.
For public extensions that have been activated for this account and region, the Amazon Resource Name (ARN) of the public extension.
" + "documentation":"For public extensions that have been activated for this account and Region, the Amazon Resource Name (ARN) of the public extension.
" }, "PublicVersionNumber":{ "shape":"PublicVersionNumber", @@ -2743,11 +2838,11 @@ }, "IsActivated":{ "shape":"IsActivated", - "documentation":"Whether the extension is activated in the account and region.
This only applies to public third-party extensions. For all other extensions, CloudFormation returns null.
Whether the extension is activated in the account and Region.
This only applies to public third-party extensions. For all other extensions, CloudFormation returns null.
Whether CloudFormation automatically updates the extension in this account and region when a new minor version is published by the extension publisher. Major versions released by the publisher must be manually updated. For more information, see Activating public extensions for use in your account in the CloudFormation User Guide.
" + "documentation":"Whether CloudFormation automatically updates the extension in this account and Region when a new minor version is published by the extension publisher. Major versions released by the publisher must be manually updated. For more information, see Activating public extensions for use in your account in the CloudFormation User Guide.
" } } }, @@ -2846,7 +2941,10 @@ "shape":"StackSetNameOrId", "documentation":"The name of the stack set on which to perform the drift detection operation.
" }, - "OperationPreferences":{"shape":"StackSetOperationPreferences"}, + "OperationPreferences":{ + "shape":"StackSetOperationPreferences", + "documentation":"The user-specified preferences for how CloudFormation performs a stack set operation.
For more information about maximum concurrent accounts and failure tolerance, see Stack set operation options.
" + }, "OperationId":{ "shape":"ClientRequestToken", "documentation":"The ID of the stack set operation.
", @@ -3245,7 +3343,10 @@ "shape":"OrganizationalUnitIdList", "documentation":"The list of OU ID's to which the stacks being imported has to be mapped as deployment target.
" }, - "OperationPreferences":{"shape":"StackSetOperationPreferences"}, + "OperationPreferences":{ + "shape":"StackSetOperationPreferences", + "documentation":"The user-specified preferences for how CloudFormation performs a stack set operation.
For more information about maximum concurrent accounts and failure tolerance, see Stack set operation options.
" + }, "OperationId":{ "shape":"ClientRequestToken", "documentation":"A unique, user defined, identifier for the stack set operation.
", @@ -3736,7 +3837,7 @@ "members":{ "Visibility":{ "shape":"Visibility", - "documentation":"The scope at which the extensions are visible and usable in CloudFormation operations.
Valid values include:
PRIVATE: Extensions that are visible and usable within this account and region. This includes:
Private extensions you have registered in this account and region.
Public extensions that you have activated in this account and region.
PUBLIC: Extensions that are publicly visible and available to be activated within any Amazon Web Services account. This includes extensions from Amazon Web Services, in addition to third-party publishers.
The default is PRIVATE.
The scope at which the extensions are visible and usable in CloudFormation operations.
Valid values include:
PRIVATE: Extensions that are visible and usable within this account and Region. This includes:
Private extensions you have registered in this account and Region.
Public extensions that you have activated in this account and Region.
PUBLIC: Extensions that are publicly visible and available to be activated within any Amazon Web Services account. This includes extensions from Amazon Web Services, in addition to third-party publishers.
The default is PRIVATE.
A concatenated list of the logical IDs of the module or modules containing the resource. Modules are listed starting with the inner-most nested module, and separated by /.
In the following example, the resource was created from a module, moduleA, that's nested inside a parent module, moduleB.
moduleA/moduleB
For more information, see Referencing resources in a module in the CloudFormation User Guide.
" + "documentation":"A concatenated list of the logical IDs of the module or modules containing the resource. Modules are listed starting with the inner-most nested module, and separated by /.
In the following example, the resource was created from a module, moduleA, that's nested inside a parent module, moduleB.
moduleA/moduleB
For more information, see Referencing resources in a module in the CloudFormation User Guide.
" } }, - "documentation":"Contains information about the module from which the resource was created, if the resource was created from a module included in the stack template.
For more information about modules, see Using modules to encapsulate and reuse resource configurations in the CloudFormation User Guide.
" + "documentation":"Contains information about the module from which the resource was created, if the resource was created from a module included in the stack template.
For more information about modules, see Using modules to encapsulate and reuse resource configurations in the CloudFormation User Guide.
" }, "MonitoringTimeInMinutes":{ "type":"integer", @@ -3981,6 +4082,14 @@ "type":"string", "max":4096 }, + "OrganizationStatus":{ + "type":"string", + "enum":[ + "ENABLED", + "DISABLED", + "DISABLED_PERMANENTLY" + ] + }, "OrganizationalUnitId":{ "type":"string", "pattern":"^(ou-[a-z0-9]{4,32}-[a-z0-9]{8,32}|r-[a-z0-9]{4,32})$" @@ -4334,7 +4443,7 @@ }, "ExecutionRoleArn":{ "shape":"RoleArn", - "documentation":"The Amazon Resource Name (ARN) of the IAM role for CloudFormation to assume when invoking the extension.
For CloudFormation to assume the specified execution role, the role must contain a trust relationship with the CloudFormation service principle (resources.cloudformation.amazonaws.com). For more information about adding trust relationships, see Modifying a role trust policy in the Identity and Access Management User Guide.
If your extension calls Amazon Web Services APIs in any of its handlers, you must create an IAM execution role that includes the necessary permissions to call those Amazon Web Services APIs, and provision that execution role in your account. When CloudFormation needs to invoke the resource type handler, CloudFormation assumes this execution role to create a temporary session token, which it then passes to the resource type handler, thereby supplying your resource type with the appropriate credentials.
" + "documentation":"The Amazon Resource Name (ARN) of the IAM role for CloudFormation to assume when invoking the extension.
For CloudFormation to assume the specified execution role, the role must contain a trust relationship with the CloudFormation service principal (resources.cloudformation.amazonaws.com). For more information about adding trust relationships, see Modifying a role trust policy in the Identity and Access Management User Guide.
If your extension calls Amazon Web Services APIs in any of its handlers, you must create an IAM execution role that includes the necessary permissions to call those Amazon Web Services APIs, and provision that execution role in your account. When CloudFormation needs to invoke the resource type handler, CloudFormation assumes this execution role to create a temporary session token, which it then passes to the resource type handler, thereby supplying your resource type with the appropriate credentials.
" }, "ClientRequestToken":{ "shape":"RequestToken", @@ -4396,11 +4505,11 @@ "members":{ "TypeNameAlias":{ "shape":"TypeName", - "documentation":"An alias assigned to the public extension, in this account and region. If you specify an alias for the extension, CloudFormation treats the alias as the extension type name within this account and region. You must use the alias to refer to the extension in your templates, API calls, and CloudFormation console.
" + "documentation":"An alias assigned to the public extension, in this account and Region. If you specify an alias for the extension, CloudFormation treats the alias as the extension type name within this account and Region. You must use the alias to refer to the extension in your templates, API calls, and CloudFormation console.
" }, "OriginalTypeName":{ "shape":"TypeName", - "documentation":"The type name of the public extension.
If you specified a TypeNameAlias when enabling the extension in this account and region, CloudFormation treats that alias as the extension's type name within the account and region, not the type name of the public extension. For more information, see Specifying aliases to refer to extensions in the CloudFormation User Guide.
The type name of the public extension.
If you specified a TypeNameAlias when enabling the extension in this account and Region, CloudFormation treats that alias as the extension's type name within the account and Region, not the type name of the public extension. For more information, see Specifying aliases to refer to extensions in the CloudFormation User Guide.
The Amazon Resource Name (ARN) for the extension, in this account and region.
For public extensions, this will be the ARN assigned when you activate the type in this account and region. For private extensions, this will be the ARN assigned when you register the type in this account and region.
Do not include the extension versions suffix at the end of the ARN. You can set the configuration for an extension, but not for a specific extension version.
" + "documentation":"The Amazon Resource Name (ARN) for the extension, in this account and Region.
For public extensions, this will be the ARN assigned when you activate the type in this account and Region. For private extensions, this will be the ARN assigned when you register the type in this account and Region.
Do not include the extension versions suffix at the end of the ARN. You can set the configuration for an extension, but not for a specific extension version.
" }, "Configuration":{ "shape":"TypeConfiguration", - "documentation":"The configuration data for the extension, in this account and region.
The configuration data must be formatted as JSON, and validate against the schema returned in the ConfigurationSchema response element of API_DescribeType. For more information, see Defining account-level configuration data for an extension in the CloudFormation CLI User Guide.
The configuration data for the extension, in this account and Region.
The configuration data must be formatted as JSON, and validate against the schema returned in the ConfigurationSchema response element of DescribeType. For more information, see Defining account-level configuration data for an extension in the CloudFormation CLI User Guide.
The Amazon Resource Name (ARN) for the configuration data, in this account and region.
Conditional: You must specify ConfigurationArn, or Type and TypeName.
The Amazon Resource Name (ARN) for the configuration data, in this account and Region.
Conditional: You must specify ConfigurationArn, or Type and TypeName.
Information about whether a stack's actual configuration differs, or has drifted, from it's expected configuration, as defined in the stack template and any values specified as template parameters. For more information, see Detecting Unregulated Configuration Changes to Stacks and Resources.
" + "documentation":"Information about whether a stack's actual configuration differs, or has drifted, from its expected configuration, as defined in the stack template and any values specified as template parameters. For more information, see Detecting Unregulated Configuration Changes to Stacks and Resources.
" } }, "documentation":"The Stack data type.
" @@ -5664,7 +5773,10 @@ "shape":"ManagedExecution", "documentation":"Describes whether StackSets performs non-conflicting operations concurrently and queues conflicting operations.
" }, - "Regions":{"shape":"RegionList"} + "Regions":{ + "shape":"RegionList", + "documentation":"Returns a list of all Amazon Web Services Regions the given StackSet has stack instances deployed in. The Amazon Web Services Regions list output is in no particular order.
" + } }, "documentation":"A structure that contains information about a stack set. A stack set enables you to provision stacks into Amazon Web Services accounts and across Regions by using a single CloudFormation template. In the stack set, you specify the template to use, in addition to any parameters and capabilities that the template requires.
" }, @@ -5957,7 +6069,10 @@ "shape":"StackSetOperationStatusDetails", "documentation":"Detailed information about the stack set operation.
" }, - "OperationPreferences":{"shape":"StackSetOperationPreferences"} + "OperationPreferences":{ + "shape":"StackSetOperationPreferences", + "documentation":"The user-specified preferences for how CloudFormation performs a stack set operation.
For more information about maximum concurrent accounts and failure tolerance, see Stack set operation options.
" + } }, "documentation":"The structures that contain summary information about the specified operation.
" }, @@ -6101,7 +6216,7 @@ }, "DriftInformation":{ "shape":"StackDriftInformationSummary", - "documentation":"Summarizes information about whether a stack's actual configuration differs, or has drifted, from it's expected configuration, as defined in the stack template and any values specified as template parameters. For more information, see Detecting Unregulated Configuration Changes to Stacks and Resources.
" + "documentation":"Summarizes information about whether a stack's actual configuration differs, or has drifted, from its expected configuration, as defined in the stack template and any values specified as template parameters. For more information, see Detecting Unregulated Configuration Changes to Stacks and Resources.
" } }, "documentation":"The StackSummary Data Type
" @@ -6258,7 +6373,7 @@ }, "VersionId":{ "shape":"TypeVersionId", - "documentation":"The version of the extension to test.
You can specify the version id with either Arn, or with TypeName and Type.
If you don't specify a version, CloudFormation uses the default version of the extension in this account and region for testing.
" + "documentation":"The version of the extension to test.
You can specify the version id with either Arn, or with TypeName and Type.
If you don't specify a version, CloudFormation uses the default version of the extension in this account and Region for testing.
" }, "LogDeliveryBucket":{ "shape":"S3Bucket", @@ -6342,7 +6457,7 @@ "members":{ "Arn":{ "shape":"TypeConfigurationArn", - "documentation":"The Amazon Resource Name (ARN) for the configuration data, in this account and region.
" + "documentation":"The Amazon Resource Name (ARN) for the configuration data, in this account and Region.
" }, "Alias":{ "shape":"TypeConfigurationAlias", @@ -6350,7 +6465,7 @@ }, "Configuration":{ "shape":"TypeConfiguration", - "documentation":"A JSON string specifying the configuration data for the extension, in this account and region.
If a configuration hasn't been set for a specified extension, CloudFormation returns {}.
A JSON string specifying the configuration data for the extension, in this account and Region.
If a configuration hasn't been set for a specified extension, CloudFormation returns {}.
The Amazon Resource Name (ARN) for the extension, in this account and region.
For public extensions, this will be the ARN assigned when you activate the type in this account and region. For private extensions, this will be the ARN assigned when you register the type in this account and region.
" + "documentation":"The Amazon Resource Name (ARN) for the extension, in this account and Region.
For public extensions, this will be the ARN assigned when you activate the type in this account and Region. For private extensions, this will be the ARN assigned when you register the type in this account and Region.
" }, "TypeName":{ "shape":"TypeName", @@ -6369,7 +6484,7 @@ "documentation":"Whether this configuration data is the default configuration for the extension.
" } }, - "documentation":"Detailed information concerning the specification of a CloudFormation extension in a given account and region.
For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
" + "documentation":"Detailed information concerning the specification of a CloudFormation extension in a given account and Region.
For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
" }, "TypeConfigurationDetailsList":{ "type":"list", @@ -6380,7 +6495,7 @@ "members":{ "TypeArn":{ "shape":"TypeArn", - "documentation":"The Amazon Resource Name (ARN) for the extension, in this account and region.
For public extensions, this will be the ARN assigned when you activate the type in this account and region. For private extensions, this will be the ARN assigned when you register the type in this account and region.
" + "documentation":"The Amazon Resource Name (ARN) for the extension, in this account and Region.
For public extensions, this will be the ARN assigned when you activate the type in this account and Region. For private extensions, this will be the ARN assigned when you register the type in this account and Region.
" }, "TypeConfigurationAlias":{ "shape":"TypeConfigurationAlias", @@ -6388,7 +6503,7 @@ }, "TypeConfigurationArn":{ "shape":"TypeConfigurationArn", - "documentation":"The Amazon Resource Name (ARN) for the configuration, in this account and region.
" + "documentation":"The Amazon Resource Name (ARN) for the configuration, in this account and Region.
" }, "Type":{ "shape":"ThirdPartyType", @@ -6423,7 +6538,7 @@ "members":{ "Category":{ "shape":"Category", - "documentation":"The category of extensions to return.
REGISTERED: Private extensions that have been registered for this account and region.
ACTIVATED: Public extensions that have been activated for this account and region.
THIRD_PARTY: Extensions available for use from publishers other than Amazon. This includes:
Private extensions registered in the account.
Public extensions from publishers other than Amazon, whether activated or not.
AWS_TYPES: Extensions available for use from Amazon.
The category of extensions to return.
REGISTERED: Private extensions that have been registered for this account and Region.
ACTIVATED: Public extensions that have been activated for this account and Region.
THIRD_PARTY: Extensions available for use from publishers other than Amazon. This includes:
Private extensions registered in the account.
Public extensions from publishers other than Amazon, whether activated or not.
AWS_TYPES: Extensions available for use from Amazon.
The name of the extension.
If you specified a TypeNameAlias when you activate this extension in your account and region, CloudFormation considers that alias as the type name.
The name of the extension.
If you specified a TypeNameAlias when you activate this extension in your account and Region, CloudFormation considers that alias as the type name.
For public extensions that have been activated for this account and region, the type name of the public extension.
If you specified a TypeNameAlias when enabling the extension in this account and region, CloudFormation treats that alias as the extension's type name within the account and region, not the type name of the public extension. For more information, see Specifying aliases to refer to extensions in the CloudFormation User Guide.
For public extensions that have been activated for this account and Region, the type name of the public extension.
If you specified a TypeNameAlias when enabling the extension in this account and Region, CloudFormation treats that alias as the extension's type name within the account and Region, not the type name of the public extension. For more information, see Specifying aliases to refer to extensions in the CloudFormation User Guide.
For public extensions that have been activated for this account and region, the version of the public extension to be used for CloudFormation operations in this account and Region.
How you specified AutoUpdate when enabling the extension affects whether CloudFormation automatically updates the extension in this account and region when a new version is released. For more information, see Setting CloudFormation to automatically use new versions of extensions in the CloudFormation User Guide.
For public extensions that have been activated for this account and Region, the version of the public extension to be used for CloudFormation operations in this account and Region.
How you specified AutoUpdate when enabling the extension affects whether CloudFormation automatically updates the extension in this account and Region when a new version is released. For more information, see Setting CloudFormation to automatically use new versions of extensions in the CloudFormation User Guide.
For public extensions that have been activated for this account and region, the latest version of the public extension that is available. For any extensions other than activated third-arty extensions, CloudFormation returns null.
How you specified AutoUpdate when enabling the extension affects whether CloudFormation automatically updates the extension in this account and region when a new version is released. For more information, see Setting CloudFormation to automatically use new versions of extensions in the CloudFormation User Guide.
For public extensions that have been activated for this account and Region, the latest version of the public extension that is available. For any extensions other than activated third-arty extensions, CloudFormation returns null.
How you specified AutoUpdate when enabling the extension affects whether CloudFormation automatically updates the extension in this account and Region when a new version is released. For more information, see Setting CloudFormation to automatically use new versions of extensions in the CloudFormation User Guide.
Whether the extension is activated for this account and region.
This applies only to third-party public extensions. Extensions published by Amazon are activated by default.
" + "documentation":"Whether the extension is activated for this account and Region.
This applies only to third-party public extensions. Extensions published by Amazon are activated by default.
" } }, "documentation":"Contains summary information about the specified CloudFormation extension.
" @@ -6586,7 +6701,7 @@ }, "PublicVersionNumber":{ "shape":"PublicVersionNumber", - "documentation":"For public extensions that have been activated for this account and region, the version of the public extension to be used for CloudFormation operations in this account and region. For any extensions other than activated third-arty extensions, CloudFormation returns null.
How you specified AutoUpdate when enabling the extension affects whether CloudFormation automatically updates the extension in this account and region when a new version is released. For more information, see Setting CloudFormation to automatically use new versions of extensions in the CloudFormation User Guide.
For public extensions that have been activated for this account and Region, the version of the public extension to be used for CloudFormation operations in this account and Region. For any extensions other than activated third-arty extensions, CloudFormation returns null.
How you specified AutoUpdate when enabling the extension affects whether CloudFormation automatically updates the extension in this account and Region when a new version is released. For more information, see Setting CloudFormation to automatically use new versions of extensions in the CloudFormation User Guide.
Contains summary information about a specific version of a CloudFormation extension.
" From ce85c02924694417c789820a34779bd4584901f8 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Mon, 5 Jun 2023 18:08:03 +0000 Subject: [PATCH 034/317] Amazon Elastic Compute Cloud Update: Making InstanceTagAttribute as the required parameter for the DeregisterInstanceEventNotificationAttributes and RegisterInstanceEventNotificationAttributes APIs. --- .../feature-AmazonElasticComputeCloud-4b238ff.json | 6 ++++++ .../ec2/src/main/resources/codegen-resources/service-2.json | 2 ++ 2 files changed, 8 insertions(+) create mode 100644 .changes/next-release/feature-AmazonElasticComputeCloud-4b238ff.json diff --git a/.changes/next-release/feature-AmazonElasticComputeCloud-4b238ff.json b/.changes/next-release/feature-AmazonElasticComputeCloud-4b238ff.json new file mode 100644 index 000000000000..bc83a1e0ec71 --- /dev/null +++ b/.changes/next-release/feature-AmazonElasticComputeCloud-4b238ff.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Elastic Compute Cloud", + "contributor": "", + "description": "Making InstanceTagAttribute as the required parameter for the DeregisterInstanceEventNotificationAttributes and RegisterInstanceEventNotificationAttributes APIs." 
+} diff --git a/services/ec2/src/main/resources/codegen-resources/service-2.json b/services/ec2/src/main/resources/codegen-resources/service-2.json index 1cf555fe2b38..83eb0238bc50 100644 --- a/services/ec2/src/main/resources/codegen-resources/service-2.json +++ b/services/ec2/src/main/resources/codegen-resources/service-2.json @@ -17816,6 +17816,7 @@ }, "DeregisterInstanceEventNotificationAttributesRequest":{ "type":"structure", + "required":["InstanceTagAttribute"], "members":{ "DryRun":{ "shape":"Boolean", @@ -45649,6 +45650,7 @@ }, "RegisterInstanceEventNotificationAttributesRequest":{ "type":"structure", + "required":["InstanceTagAttribute"], "members":{ "DryRun":{ "shape":"Boolean", From 8bea8bb85fbda95d002f169b63efd7f2d789a9cc Mon Sep 17 00:00:00 2001 From: AWS <> Date: Mon, 5 Jun 2023 18:09:07 +0000 Subject: [PATCH 035/317] Updated endpoints.json and partitions.json. --- .../feature-AWSSDKforJavav2-0443982.json | 6 ++++++ .../awssdk/regions/internal/region/endpoints.json | 14 +++++++++++++- 2 files changed, 19 insertions(+), 1 deletion(-) create mode 100644 .changes/next-release/feature-AWSSDKforJavav2-0443982.json diff --git a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json new file mode 100644 index 000000000000..e5b5ee3ca5e3 --- /dev/null +++ b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated endpoint and partition metadata." 
+} diff --git a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json index 9aaed5dd9014..79dd4bea5e52 100644 --- a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json +++ b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json @@ -23195,6 +23195,13 @@ }, "workspaces" : { "endpoints" : { + "fips-us-gov-east-1" : { + "credentialScope" : { + "region" : "us-gov-east-1" + }, + "deprecated" : true, + "hostname" : "workspaces-fips.us-gov-east-1.amazonaws.com" + }, "fips-us-gov-west-1" : { "credentialScope" : { "region" : "us-gov-west-1" @@ -23202,7 +23209,12 @@ "deprecated" : true, "hostname" : "workspaces-fips.us-gov-west-1.amazonaws.com" }, - "us-gov-east-1" : { }, + "us-gov-east-1" : { + "variants" : [ { + "hostname" : "workspaces-fips.us-gov-east-1.amazonaws.com", + "tags" : [ "fips" ] + } ] + }, "us-gov-west-1" : { "variants" : [ { "hostname" : "workspaces-fips.us-gov-west-1.amazonaws.com", From e9b491a907e4370e8ced51a6066f036f126f1d06 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Mon, 5 Jun 2023 18:10:12 +0000 Subject: [PATCH 036/317] Release 2.20.79. Updated CHANGELOG.md, README.md and all pom.xml. 
--- .changes/2.20.79.json | 66 +++++++++++++++++++ .../bugfix-AWSSDKforJavav2-b1d7d7f.json | 6 -- .../feature-AWSCloudFormation-db2d2f2.json | 6 -- ...ature-AWSKeyManagementService-6198159.json | 6 -- .../feature-AWSLambda-c6da278.json | 6 -- .../feature-AWSSDKforJavav2-0443982.json | 6 -- ...ure-AmazonElasticComputeCloud-4b238ff.json | 6 -- .../feature-AmazonFraudDetector-2c13eaf.json | 6 -- .../feature-AmazonKeyspaces-42b7b17.json | 6 -- .../feature-AmazonMWAA-e2a609e.json | 6 -- ...rEnvironmentManagementservice-396ae95.json | 6 -- CHANGELOG.md | 40 +++++++++++ README.md | 8 +-- archetypes/archetype-app-quickstart/pom.xml | 2 +- archetypes/archetype-lambda/pom.xml | 2 +- archetypes/archetype-tools/pom.xml | 2 +- archetypes/pom.xml | 2 +- aws-sdk-java/pom.xml | 2 +- bom-internal/pom.xml | 2 +- bom/pom.xml | 2 +- bundle/pom.xml | 2 +- codegen-lite-maven-plugin/pom.xml | 2 +- codegen-lite/pom.xml | 2 +- codegen-maven-plugin/pom.xml | 2 +- codegen/pom.xml | 2 +- core/annotations/pom.xml | 2 +- core/arns/pom.xml | 2 +- core/auth-crt/pom.xml | 2 +- core/auth/pom.xml | 2 +- core/aws-core/pom.xml | 2 +- core/crt-core/pom.xml | 2 +- core/endpoints-spi/pom.xml | 2 +- core/imds/pom.xml | 2 +- core/json-utils/pom.xml | 2 +- core/metrics-spi/pom.xml | 2 +- core/pom.xml | 2 +- core/profiles/pom.xml | 2 +- core/protocols/aws-cbor-protocol/pom.xml | 2 +- core/protocols/aws-json-protocol/pom.xml | 2 +- core/protocols/aws-query-protocol/pom.xml | 2 +- core/protocols/aws-xml-protocol/pom.xml | 2 +- core/protocols/pom.xml | 2 +- core/protocols/protocol-core/pom.xml | 2 +- core/regions/pom.xml | 2 +- core/sdk-core/pom.xml | 2 +- http-client-spi/pom.xml | 2 +- http-clients/apache-client/pom.xml | 2 +- http-clients/aws-crt-client/pom.xml | 2 +- http-clients/netty-nio-client/pom.xml | 2 +- http-clients/pom.xml | 2 +- http-clients/url-connection-client/pom.xml | 2 +- .../cloudwatch-metric-publisher/pom.xml | 2 +- metric-publishers/pom.xml | 2 +- pom.xml | 2 +- release-scripts/pom.xml | 2 
+- services-custom/dynamodb-enhanced/pom.xml | 2 +- services-custom/pom.xml | 2 +- services-custom/s3-transfer-manager/pom.xml | 2 +- services/accessanalyzer/pom.xml | 2 +- services/account/pom.xml | 2 +- services/acm/pom.xml | 2 +- services/acmpca/pom.xml | 2 +- services/alexaforbusiness/pom.xml | 2 +- services/amp/pom.xml | 2 +- services/amplify/pom.xml | 2 +- services/amplifybackend/pom.xml | 2 +- services/amplifyuibuilder/pom.xml | 2 +- services/apigateway/pom.xml | 2 +- services/apigatewaymanagementapi/pom.xml | 2 +- services/apigatewayv2/pom.xml | 2 +- services/appconfig/pom.xml | 2 +- services/appconfigdata/pom.xml | 2 +- services/appflow/pom.xml | 2 +- services/appintegrations/pom.xml | 2 +- services/applicationautoscaling/pom.xml | 2 +- services/applicationcostprofiler/pom.xml | 2 +- services/applicationdiscovery/pom.xml | 2 +- services/applicationinsights/pom.xml | 2 +- services/appmesh/pom.xml | 2 +- services/apprunner/pom.xml | 2 +- services/appstream/pom.xml | 2 +- services/appsync/pom.xml | 2 +- services/arczonalshift/pom.xml | 2 +- services/athena/pom.xml | 2 +- services/auditmanager/pom.xml | 2 +- services/autoscaling/pom.xml | 2 +- services/autoscalingplans/pom.xml | 2 +- services/backup/pom.xml | 2 +- services/backupgateway/pom.xml | 2 +- services/backupstorage/pom.xml | 2 +- services/batch/pom.xml | 2 +- services/billingconductor/pom.xml | 2 +- services/braket/pom.xml | 2 +- services/budgets/pom.xml | 2 +- services/chime/pom.xml | 2 +- services/chimesdkidentity/pom.xml | 2 +- services/chimesdkmediapipelines/pom.xml | 2 +- services/chimesdkmeetings/pom.xml | 2 +- services/chimesdkmessaging/pom.xml | 2 +- services/chimesdkvoice/pom.xml | 2 +- services/cleanrooms/pom.xml | 2 +- services/cloud9/pom.xml | 2 +- services/cloudcontrol/pom.xml | 2 +- services/clouddirectory/pom.xml | 2 +- services/cloudformation/pom.xml | 2 +- services/cloudfront/pom.xml | 2 +- services/cloudhsm/pom.xml | 2 +- services/cloudhsmv2/pom.xml | 2 +- 
services/cloudsearch/pom.xml | 2 +- services/cloudsearchdomain/pom.xml | 2 +- services/cloudtrail/pom.xml | 2 +- services/cloudtraildata/pom.xml | 2 +- services/cloudwatch/pom.xml | 2 +- services/cloudwatchevents/pom.xml | 2 +- services/cloudwatchlogs/pom.xml | 2 +- services/codeartifact/pom.xml | 2 +- services/codebuild/pom.xml | 2 +- services/codecatalyst/pom.xml | 2 +- services/codecommit/pom.xml | 2 +- services/codedeploy/pom.xml | 2 +- services/codeguruprofiler/pom.xml | 2 +- services/codegurureviewer/pom.xml | 2 +- services/codepipeline/pom.xml | 2 +- services/codestar/pom.xml | 2 +- services/codestarconnections/pom.xml | 2 +- services/codestarnotifications/pom.xml | 2 +- services/cognitoidentity/pom.xml | 2 +- services/cognitoidentityprovider/pom.xml | 2 +- services/cognitosync/pom.xml | 2 +- services/comprehend/pom.xml | 2 +- services/comprehendmedical/pom.xml | 2 +- services/computeoptimizer/pom.xml | 2 +- services/config/pom.xml | 2 +- services/connect/pom.xml | 2 +- services/connectcampaigns/pom.xml | 2 +- services/connectcases/pom.xml | 2 +- services/connectcontactlens/pom.xml | 2 +- services/connectparticipant/pom.xml | 2 +- services/controltower/pom.xml | 2 +- services/costandusagereport/pom.xml | 2 +- services/costexplorer/pom.xml | 2 +- services/customerprofiles/pom.xml | 2 +- services/databasemigration/pom.xml | 2 +- services/databrew/pom.xml | 2 +- services/dataexchange/pom.xml | 2 +- services/datapipeline/pom.xml | 2 +- services/datasync/pom.xml | 2 +- services/dax/pom.xml | 2 +- services/detective/pom.xml | 2 +- services/devicefarm/pom.xml | 2 +- services/devopsguru/pom.xml | 2 +- services/directconnect/pom.xml | 2 +- services/directory/pom.xml | 2 +- services/dlm/pom.xml | 2 +- services/docdb/pom.xml | 2 +- services/docdbelastic/pom.xml | 2 +- services/drs/pom.xml | 2 +- services/dynamodb/pom.xml | 2 +- services/ebs/pom.xml | 2 +- services/ec2/pom.xml | 2 +- services/ec2instanceconnect/pom.xml | 2 +- services/ecr/pom.xml | 2 +- 
services/ecrpublic/pom.xml | 2 +- services/ecs/pom.xml | 2 +- services/efs/pom.xml | 2 +- services/eks/pom.xml | 2 +- services/elasticache/pom.xml | 2 +- services/elasticbeanstalk/pom.xml | 2 +- services/elasticinference/pom.xml | 2 +- services/elasticloadbalancing/pom.xml | 2 +- services/elasticloadbalancingv2/pom.xml | 2 +- services/elasticsearch/pom.xml | 2 +- services/elastictranscoder/pom.xml | 2 +- services/emr/pom.xml | 2 +- services/emrcontainers/pom.xml | 2 +- services/emrserverless/pom.xml | 2 +- services/eventbridge/pom.xml | 2 +- services/evidently/pom.xml | 2 +- services/finspace/pom.xml | 2 +- services/finspacedata/pom.xml | 2 +- services/firehose/pom.xml | 2 +- services/fis/pom.xml | 2 +- services/fms/pom.xml | 2 +- services/forecast/pom.xml | 2 +- services/forecastquery/pom.xml | 2 +- services/frauddetector/pom.xml | 2 +- services/fsx/pom.xml | 2 +- services/gamelift/pom.xml | 2 +- services/gamesparks/pom.xml | 2 +- services/glacier/pom.xml | 2 +- services/globalaccelerator/pom.xml | 2 +- services/glue/pom.xml | 2 +- services/grafana/pom.xml | 2 +- services/greengrass/pom.xml | 2 +- services/greengrassv2/pom.xml | 2 +- services/groundstation/pom.xml | 2 +- services/guardduty/pom.xml | 2 +- services/health/pom.xml | 2 +- services/healthlake/pom.xml | 2 +- services/honeycode/pom.xml | 2 +- services/iam/pom.xml | 2 +- services/identitystore/pom.xml | 2 +- services/imagebuilder/pom.xml | 2 +- services/inspector/pom.xml | 2 +- services/inspector2/pom.xml | 2 +- services/internetmonitor/pom.xml | 2 +- services/iot/pom.xml | 2 +- services/iot1clickdevices/pom.xml | 2 +- services/iot1clickprojects/pom.xml | 2 +- services/iotanalytics/pom.xml | 2 +- services/iotdataplane/pom.xml | 2 +- services/iotdeviceadvisor/pom.xml | 2 +- services/iotevents/pom.xml | 2 +- services/ioteventsdata/pom.xml | 2 +- services/iotfleethub/pom.xml | 2 +- services/iotfleetwise/pom.xml | 2 +- services/iotjobsdataplane/pom.xml | 2 +- services/iotroborunner/pom.xml | 2 +- 
services/iotsecuretunneling/pom.xml | 2 +- services/iotsitewise/pom.xml | 2 +- services/iotthingsgraph/pom.xml | 2 +- services/iottwinmaker/pom.xml | 2 +- services/iotwireless/pom.xml | 2 +- services/ivs/pom.xml | 2 +- services/ivschat/pom.xml | 2 +- services/ivsrealtime/pom.xml | 2 +- services/kafka/pom.xml | 2 +- services/kafkaconnect/pom.xml | 2 +- services/kendra/pom.xml | 2 +- services/kendraranking/pom.xml | 2 +- services/keyspaces/pom.xml | 2 +- services/kinesis/pom.xml | 2 +- services/kinesisanalytics/pom.xml | 2 +- services/kinesisanalyticsv2/pom.xml | 2 +- services/kinesisvideo/pom.xml | 2 +- services/kinesisvideoarchivedmedia/pom.xml | 2 +- services/kinesisvideomedia/pom.xml | 2 +- services/kinesisvideosignaling/pom.xml | 2 +- services/kinesisvideowebrtcstorage/pom.xml | 2 +- services/kms/pom.xml | 2 +- services/lakeformation/pom.xml | 2 +- services/lambda/pom.xml | 2 +- services/lexmodelbuilding/pom.xml | 2 +- services/lexmodelsv2/pom.xml | 2 +- services/lexruntime/pom.xml | 2 +- services/lexruntimev2/pom.xml | 2 +- services/licensemanager/pom.xml | 2 +- .../licensemanagerlinuxsubscriptions/pom.xml | 2 +- .../licensemanagerusersubscriptions/pom.xml | 2 +- services/lightsail/pom.xml | 2 +- services/location/pom.xml | 2 +- services/lookoutequipment/pom.xml | 2 +- services/lookoutmetrics/pom.xml | 2 +- services/lookoutvision/pom.xml | 2 +- services/m2/pom.xml | 2 +- services/machinelearning/pom.xml | 2 +- services/macie/pom.xml | 2 +- services/macie2/pom.xml | 2 +- services/managedblockchain/pom.xml | 2 +- services/marketplacecatalog/pom.xml | 2 +- services/marketplacecommerceanalytics/pom.xml | 2 +- services/marketplaceentitlement/pom.xml | 2 +- services/marketplacemetering/pom.xml | 2 +- services/mediaconnect/pom.xml | 2 +- services/mediaconvert/pom.xml | 2 +- services/medialive/pom.xml | 2 +- services/mediapackage/pom.xml | 2 +- services/mediapackagev2/pom.xml | 2 +- services/mediapackagevod/pom.xml | 2 +- services/mediastore/pom.xml | 2 +- 
services/mediastoredata/pom.xml | 2 +- services/mediatailor/pom.xml | 2 +- services/memorydb/pom.xml | 2 +- services/mgn/pom.xml | 2 +- services/migrationhub/pom.xml | 2 +- services/migrationhubconfig/pom.xml | 2 +- services/migrationhuborchestrator/pom.xml | 2 +- services/migrationhubrefactorspaces/pom.xml | 2 +- services/migrationhubstrategy/pom.xml | 2 +- services/mobile/pom.xml | 2 +- services/mq/pom.xml | 2 +- services/mturk/pom.xml | 2 +- services/mwaa/pom.xml | 2 +- services/neptune/pom.xml | 2 +- services/networkfirewall/pom.xml | 2 +- services/networkmanager/pom.xml | 2 +- services/nimble/pom.xml | 2 +- services/oam/pom.xml | 2 +- services/omics/pom.xml | 2 +- services/opensearch/pom.xml | 2 +- services/opensearchserverless/pom.xml | 2 +- services/opsworks/pom.xml | 2 +- services/opsworkscm/pom.xml | 2 +- services/organizations/pom.xml | 2 +- services/osis/pom.xml | 2 +- services/outposts/pom.xml | 2 +- services/panorama/pom.xml | 2 +- services/personalize/pom.xml | 2 +- services/personalizeevents/pom.xml | 2 +- services/personalizeruntime/pom.xml | 2 +- services/pi/pom.xml | 2 +- services/pinpoint/pom.xml | 2 +- services/pinpointemail/pom.xml | 2 +- services/pinpointsmsvoice/pom.xml | 2 +- services/pinpointsmsvoicev2/pom.xml | 2 +- services/pipes/pom.xml | 2 +- services/polly/pom.xml | 2 +- services/pom.xml | 2 +- services/pricing/pom.xml | 2 +- services/privatenetworks/pom.xml | 2 +- services/proton/pom.xml | 2 +- services/qldb/pom.xml | 2 +- services/qldbsession/pom.xml | 2 +- services/quicksight/pom.xml | 2 +- services/ram/pom.xml | 2 +- services/rbin/pom.xml | 2 +- services/rds/pom.xml | 2 +- services/rdsdata/pom.xml | 2 +- services/redshift/pom.xml | 2 +- services/redshiftdata/pom.xml | 2 +- services/redshiftserverless/pom.xml | 2 +- services/rekognition/pom.xml | 2 +- services/resiliencehub/pom.xml | 2 +- services/resourceexplorer2/pom.xml | 2 +- services/resourcegroups/pom.xml | 2 +- services/resourcegroupstaggingapi/pom.xml | 2 +- 
services/robomaker/pom.xml | 2 +- services/rolesanywhere/pom.xml | 2 +- services/route53/pom.xml | 2 +- services/route53domains/pom.xml | 2 +- services/route53recoverycluster/pom.xml | 2 +- services/route53recoverycontrolconfig/pom.xml | 2 +- services/route53recoveryreadiness/pom.xml | 2 +- services/route53resolver/pom.xml | 2 +- services/rum/pom.xml | 2 +- services/s3/pom.xml | 2 +- services/s3control/pom.xml | 2 +- services/s3outposts/pom.xml | 2 +- services/sagemaker/pom.xml | 2 +- services/sagemakera2iruntime/pom.xml | 2 +- services/sagemakeredge/pom.xml | 2 +- services/sagemakerfeaturestoreruntime/pom.xml | 2 +- services/sagemakergeospatial/pom.xml | 2 +- services/sagemakermetrics/pom.xml | 2 +- services/sagemakerruntime/pom.xml | 2 +- services/savingsplans/pom.xml | 2 +- services/scheduler/pom.xml | 2 +- services/schemas/pom.xml | 2 +- services/secretsmanager/pom.xml | 2 +- services/securityhub/pom.xml | 2 +- services/securitylake/pom.xml | 2 +- .../serverlessapplicationrepository/pom.xml | 2 +- services/servicecatalog/pom.xml | 2 +- services/servicecatalogappregistry/pom.xml | 2 +- services/servicediscovery/pom.xml | 2 +- services/servicequotas/pom.xml | 2 +- services/ses/pom.xml | 2 +- services/sesv2/pom.xml | 2 +- services/sfn/pom.xml | 2 +- services/shield/pom.xml | 2 +- services/signer/pom.xml | 2 +- services/simspaceweaver/pom.xml | 2 +- services/sms/pom.xml | 2 +- services/snowball/pom.xml | 2 +- services/snowdevicemanagement/pom.xml | 2 +- services/sns/pom.xml | 2 +- services/sqs/pom.xml | 2 +- services/ssm/pom.xml | 2 +- services/ssmcontacts/pom.xml | 2 +- services/ssmincidents/pom.xml | 2 +- services/ssmsap/pom.xml | 2 +- services/sso/pom.xml | 2 +- services/ssoadmin/pom.xml | 2 +- services/ssooidc/pom.xml | 2 +- services/storagegateway/pom.xml | 2 +- services/sts/pom.xml | 2 +- services/support/pom.xml | 2 +- services/supportapp/pom.xml | 2 +- services/swf/pom.xml | 2 +- services/synthetics/pom.xml | 2 +- services/textract/pom.xml | 2 +- 
services/timestreamquery/pom.xml | 2 +- services/timestreamwrite/pom.xml | 2 +- services/tnb/pom.xml | 2 +- services/transcribe/pom.xml | 2 +- services/transcribestreaming/pom.xml | 2 +- services/transfer/pom.xml | 2 +- services/translate/pom.xml | 2 +- services/voiceid/pom.xml | 2 +- services/vpclattice/pom.xml | 2 +- services/waf/pom.xml | 2 +- services/wafv2/pom.xml | 2 +- services/wellarchitected/pom.xml | 2 +- services/wisdom/pom.xml | 2 +- services/workdocs/pom.xml | 2 +- services/worklink/pom.xml | 2 +- services/workmail/pom.xml | 2 +- services/workmailmessageflow/pom.xml | 2 +- services/workspaces/pom.xml | 2 +- services/workspacesweb/pom.xml | 2 +- services/xray/pom.xml | 2 +- test/auth-tests/pom.xml | 2 +- test/codegen-generated-classes-test/pom.xml | 2 +- test/http-client-tests/pom.xml | 2 +- test/module-path-tests/pom.xml | 2 +- test/protocol-tests-core/pom.xml | 2 +- test/protocol-tests/pom.xml | 2 +- test/region-testing/pom.xml | 2 +- test/ruleset-testing-core/pom.xml | 2 +- test/s3-benchmarks/pom.xml | 2 +- test/sdk-benchmarks/pom.xml | 2 +- test/sdk-native-image-test/pom.xml | 2 +- test/service-test-utils/pom.xml | 2 +- test/stability-tests/pom.xml | 2 +- test/test-utils/pom.xml | 2 +- test/tests-coverage-reporting/pom.xml | 2 +- third-party/pom.xml | 2 +- third-party/third-party-jackson-core/pom.xml | 2 +- .../pom.xml | 2 +- utils/pom.xml | 2 +- 420 files changed, 517 insertions(+), 471 deletions(-) create mode 100644 .changes/2.20.79.json delete mode 100644 .changes/next-release/bugfix-AWSSDKforJavav2-b1d7d7f.json delete mode 100644 .changes/next-release/feature-AWSCloudFormation-db2d2f2.json delete mode 100644 .changes/next-release/feature-AWSKeyManagementService-6198159.json delete mode 100644 .changes/next-release/feature-AWSLambda-c6da278.json delete mode 100644 .changes/next-release/feature-AWSSDKforJavav2-0443982.json delete mode 100644 .changes/next-release/feature-AmazonElasticComputeCloud-4b238ff.json delete mode 100644 
.changes/next-release/feature-AmazonFraudDetector-2c13eaf.json delete mode 100644 .changes/next-release/feature-AmazonKeyspaces-42b7b17.json delete mode 100644 .changes/next-release/feature-AmazonMWAA-e2a609e.json delete mode 100644 .changes/next-release/feature-FinSpaceUserEnvironmentManagementservice-396ae95.json diff --git a/.changes/2.20.79.json b/.changes/2.20.79.json new file mode 100644 index 000000000000..b6b2fe779124 --- /dev/null +++ b/.changes/2.20.79.json @@ -0,0 +1,66 @@ +{ + "version": "2.20.79", + "date": "2023-06-05", + "entries": [ + { + "type": "bugfix", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Upgrading AWS CRT dependency to v0.21.17. This version contains minor fixes and updates" + }, + { + "type": "feature", + "category": "AWS CloudFormation", + "contributor": "", + "description": "AWS CloudFormation StackSets provides customers with three new APIs to activate, deactivate, and describe AWS Organizations trusted access which is needed to get started with service-managed StackSets." + }, + { + "type": "feature", + "category": "AWS Key Management Service", + "contributor": "", + "description": "This release includes feature to import customer's asymmetric (RSA and ECC) and HMAC keys into KMS. It also includes feature to allow customers to specify number of days to schedule a KMS key deletion as a policy condition key." + }, + { + "type": "feature", + "category": "AWS Lambda", + "contributor": "", + "description": "Add Ruby 3.2 (ruby3.2) Runtime support to AWS Lambda." + }, + { + "type": "feature", + "category": "Amazon Elastic Compute Cloud", + "contributor": "", + "description": "Making InstanceTagAttribute as the required parameter for the DeregisterInstanceEventNotificationAttributes and RegisterInstanceEventNotificationAttributes APIs." 
+ }, + { + "type": "feature", + "category": "Amazon Fraud Detector", + "contributor": "", + "description": "Added new variable types, new DateTime data type, and new rules engine functions for interacting and working with DateTime data types." + }, + { + "type": "feature", + "category": "Amazon Keyspaces", + "contributor": "", + "description": "This release adds support for MRR GA launch, and includes multiregion support in create-keyspace, get-keyspace, and list-keyspace." + }, + { + "type": "feature", + "category": "AmazonMWAA", + "contributor": "", + "description": "This release adds ROLLING_BACK and CREATING_SNAPSHOT environment statuses for Amazon MWAA environments." + }, + { + "type": "feature", + "category": "FinSpace User Environment Management service", + "contributor": "", + "description": "Releasing new Managed kdb Insights APIs" + }, + { + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated endpoint and partition metadata." + } + ] +} \ No newline at end of file diff --git a/.changes/next-release/bugfix-AWSSDKforJavav2-b1d7d7f.json b/.changes/next-release/bugfix-AWSSDKforJavav2-b1d7d7f.json deleted file mode 100644 index 25c06af23b9b..000000000000 --- a/.changes/next-release/bugfix-AWSSDKforJavav2-b1d7d7f.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "bugfix", - "category": "AWS SDK for Java v2", - "contributor": "", - "description": "Upgrading AWS CRT dependency to v0.21.17. 
This version contains minor fixes and updates" -} diff --git a/.changes/next-release/feature-AWSCloudFormation-db2d2f2.json b/.changes/next-release/feature-AWSCloudFormation-db2d2f2.json deleted file mode 100644 index 7d54baadeaf2..000000000000 --- a/.changes/next-release/feature-AWSCloudFormation-db2d2f2.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS CloudFormation", - "contributor": "", - "description": "AWS CloudFormation StackSets provides customers with three new APIs to activate, deactivate, and describe AWS Organizations trusted access which is needed to get started with service-managed StackSets." -} diff --git a/.changes/next-release/feature-AWSKeyManagementService-6198159.json b/.changes/next-release/feature-AWSKeyManagementService-6198159.json deleted file mode 100644 index ade147301a7a..000000000000 --- a/.changes/next-release/feature-AWSKeyManagementService-6198159.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Key Management Service", - "contributor": "", - "description": "This release includes feature to import customer's asymmetric (RSA and ECC) and HMAC keys into KMS. It also includes feature to allow customers to specify number of days to schedule a KMS key deletion as a policy condition key." -} diff --git a/.changes/next-release/feature-AWSLambda-c6da278.json b/.changes/next-release/feature-AWSLambda-c6da278.json deleted file mode 100644 index a3e26d8d3af6..000000000000 --- a/.changes/next-release/feature-AWSLambda-c6da278.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Lambda", - "contributor": "", - "description": "Add Ruby 3.2 (ruby3.2) Runtime support to AWS Lambda." 
-} diff --git a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json deleted file mode 100644 index e5b5ee3ca5e3..000000000000 --- a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS SDK for Java v2", - "contributor": "", - "description": "Updated endpoint and partition metadata." -} diff --git a/.changes/next-release/feature-AmazonElasticComputeCloud-4b238ff.json b/.changes/next-release/feature-AmazonElasticComputeCloud-4b238ff.json deleted file mode 100644 index bc83a1e0ec71..000000000000 --- a/.changes/next-release/feature-AmazonElasticComputeCloud-4b238ff.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Elastic Compute Cloud", - "contributor": "", - "description": "Making InstanceTagAttribute as the required parameter for the DeregisterInstanceEventNotificationAttributes and RegisterInstanceEventNotificationAttributes APIs." -} diff --git a/.changes/next-release/feature-AmazonFraudDetector-2c13eaf.json b/.changes/next-release/feature-AmazonFraudDetector-2c13eaf.json deleted file mode 100644 index 24fe6116a448..000000000000 --- a/.changes/next-release/feature-AmazonFraudDetector-2c13eaf.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Fraud Detector", - "contributor": "", - "description": "Added new variable types, new DateTime data type, and new rules engine functions for interacting and working with DateTime data types." 
-} diff --git a/.changes/next-release/feature-AmazonKeyspaces-42b7b17.json b/.changes/next-release/feature-AmazonKeyspaces-42b7b17.json deleted file mode 100644 index 020ae30b9417..000000000000 --- a/.changes/next-release/feature-AmazonKeyspaces-42b7b17.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Keyspaces", - "contributor": "", - "description": "This release adds support for MRR GA launch, and includes multiregion support in create-keyspace, get-keyspace, and list-keyspace." -} diff --git a/.changes/next-release/feature-AmazonMWAA-e2a609e.json b/.changes/next-release/feature-AmazonMWAA-e2a609e.json deleted file mode 100644 index 72d438501ff0..000000000000 --- a/.changes/next-release/feature-AmazonMWAA-e2a609e.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AmazonMWAA", - "contributor": "", - "description": "This release adds ROLLING_BACK and CREATING_SNAPSHOT environment statuses for Amazon MWAA environments." -} diff --git a/.changes/next-release/feature-FinSpaceUserEnvironmentManagementservice-396ae95.json b/.changes/next-release/feature-FinSpaceUserEnvironmentManagementservice-396ae95.json deleted file mode 100644 index 100b6f091143..000000000000 --- a/.changes/next-release/feature-FinSpaceUserEnvironmentManagementservice-396ae95.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "FinSpace User Environment Management service", - "contributor": "", - "description": "Releasing new Managed kdb Insights APIs" -} diff --git a/CHANGELOG.md b/CHANGELOG.md index 3796ff70d14c..dfa731af8765 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,43 @@ +# __2.20.79__ __2023-06-05__ +## __AWS CloudFormation__ + - ### Features + - AWS CloudFormation StackSets provides customers with three new APIs to activate, deactivate, and describe AWS Organizations trusted access which is needed to get started with service-managed StackSets. 
+ +## __AWS Key Management Service__ + - ### Features + - This release includes feature to import customer's asymmetric (RSA and ECC) and HMAC keys into KMS. It also includes feature to allow customers to specify number of days to schedule a KMS key deletion as a policy condition key. + +## __AWS Lambda__ + - ### Features + - Add Ruby 3.2 (ruby3.2) Runtime support to AWS Lambda. + +## __AWS SDK for Java v2__ + - ### Features + - Updated endpoint and partition metadata. + + - ### Bugfixes + - Upgrading AWS CRT dependency to v0.21.17. This version contains minor fixes and updates + +## __Amazon Elastic Compute Cloud__ + - ### Features + - Making InstanceTagAttribute as the required parameter for the DeregisterInstanceEventNotificationAttributes and RegisterInstanceEventNotificationAttributes APIs. + +## __Amazon Fraud Detector__ + - ### Features + - Added new variable types, new DateTime data type, and new rules engine functions for interacting and working with DateTime data types. + +## __Amazon Keyspaces__ + - ### Features + - This release adds support for MRR GA launch, and includes multiregion support in create-keyspace, get-keyspace, and list-keyspace. + +## __AmazonMWAA__ + - ### Features + - This release adds ROLLING_BACK and CREATING_SNAPSHOT environment statuses for Amazon MWAA environments. + +## __FinSpace User Environment Management service__ + - ### Features + - Releasing new Managed kdb Insights APIs + # __2.20.78__ __2023-06-02__ ## __AWS CloudTrail__ - ### Features diff --git a/README.md b/README.md index 09d45fc66de5..01d6eac05883 100644 --- a/README.md +++ b/README.md @@ -52,7 +52,7 @@ To automatically manage module versions (currently all modules have the same verCreates a custom slot type
To create a custom slot type, specify a name for the slot type and a set of enumeration values, the values that a slot of this type can assume.
" }, + "CreateTestSetDiscrepancyReport":{ + "name":"CreateTestSetDiscrepancyReport", + "http":{ + "method":"POST", + "requestUri":"/testsets/{testSetId}/testsetdiscrepancy", + "responseCode":202 + }, + "input":{"shape":"CreateTestSetDiscrepancyReportRequest"}, + "output":{"shape":"CreateTestSetDiscrepancyReportResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Create a report that describes the differences between the bot and the test set.
" + }, "CreateUploadUrl":{ "name":"CreateUploadUrl", "http":{ @@ -514,6 +533,25 @@ ], "documentation":"Deletes a slot type from a bot locale.
If a slot is using the slot type, Amazon Lex throws a ResourceInUseException exception. To avoid the exception, set the skipResourceInUseCheck parameter to true.
The action to delete the selected test set.
", + "idempotent":true + }, "DeleteUtterances":{ "name":"DeleteUtterances", "http":{ @@ -741,6 +779,96 @@ ], "documentation":"Gets metadata information about a slot type.
" }, + "DescribeTestExecution":{ + "name":"DescribeTestExecution", + "http":{ + "method":"GET", + "requestUri":"/testexecutions/{testExecutionId}", + "responseCode":200 + }, + "input":{"shape":"DescribeTestExecutionRequest"}, + "output":{"shape":"DescribeTestExecutionResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Gets metadata information about the test execution.
" + }, + "DescribeTestSet":{ + "name":"DescribeTestSet", + "http":{ + "method":"GET", + "requestUri":"/testsets/{testSetId}", + "responseCode":200 + }, + "input":{"shape":"DescribeTestSetRequest"}, + "output":{"shape":"DescribeTestSetResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Gets metadata information about the test set.
" + }, + "DescribeTestSetDiscrepancyReport":{ + "name":"DescribeTestSetDiscrepancyReport", + "http":{ + "method":"GET", + "requestUri":"/testsetdiscrepancy/{testSetDiscrepancyReportId}", + "responseCode":200 + }, + "input":{"shape":"DescribeTestSetDiscrepancyReportRequest"}, + "output":{"shape":"DescribeTestSetDiscrepancyReportResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Gets metadata information about the test set discrepancy report.
" + }, + "DescribeTestSetGeneration":{ + "name":"DescribeTestSetGeneration", + "http":{ + "method":"GET", + "requestUri":"/testsetgenerations/{testSetGenerationId}", + "responseCode":200 + }, + "input":{"shape":"DescribeTestSetGenerationRequest"}, + "output":{"shape":"DescribeTestSetGenerationResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Gets metadata information about the test set generation.
" + }, + "GetTestExecutionArtifactsUrl":{ + "name":"GetTestExecutionArtifactsUrl", + "http":{ + "method":"GET", + "requestUri":"/testexecutions/{testExecutionId}/artifacturl", + "responseCode":200 + }, + "input":{"shape":"GetTestExecutionArtifactsUrlRequest"}, + "output":{"shape":"GetTestExecutionArtifactsUrlResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"} + ], + "documentation":"The pre-signed Amazon S3 URL to download the test execution result artifacts.
" + }, "ListAggregatedUtterances":{ "name":"ListAggregatedUtterances", "http":{ @@ -1013,6 +1141,76 @@ ], "documentation":"Gets a list of tags associated with a resource. Only bots, bot aliases, and bot channels can have tags associated with them.
" }, + "ListTestExecutionResultItems":{ + "name":"ListTestExecutionResultItems", + "http":{ + "method":"POST", + "requestUri":"/testexecutions/{testExecutionId}/results", + "responseCode":200 + }, + "input":{"shape":"ListTestExecutionResultItemsRequest"}, + "output":{"shape":"ListTestExecutionResultItemsResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Gets a list of test execution result items.
" + }, + "ListTestExecutions":{ + "name":"ListTestExecutions", + "http":{ + "method":"POST", + "requestUri":"/testexecutions", + "responseCode":200 + }, + "input":{"shape":"ListTestExecutionsRequest"}, + "output":{"shape":"ListTestExecutionsResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"InternalServerException"} + ], + "documentation":"The list of test set executions.
" + }, + "ListTestSetRecords":{ + "name":"ListTestSetRecords", + "http":{ + "method":"POST", + "requestUri":"/testsets/{testSetId}/records", + "responseCode":200 + }, + "input":{"shape":"ListTestSetRecordsRequest"}, + "output":{"shape":"ListTestSetRecordsResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"} + ], + "documentation":"The list of test set records.
" + }, + "ListTestSets":{ + "name":"ListTestSets", + "http":{ + "method":"POST", + "requestUri":"/testsets", + "responseCode":200 + }, + "input":{"shape":"ListTestSetsRequest"}, + "output":{"shape":"ListTestSetsResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"InternalServerException"} + ], + "documentation":"The list of the test sets
" + }, "SearchAssociatedTranscripts":{ "name":"SearchAssociatedTranscripts", "http":{ @@ -1071,6 +1269,45 @@ ], "documentation":"Starts importing a bot, bot locale, or custom vocabulary from a zip archive that you uploaded to an S3 bucket.
" }, + "StartTestExecution":{ + "name":"StartTestExecution", + "http":{ + "method":"POST", + "requestUri":"/testsets/{testSetId}/testexecutions", + "responseCode":202 + }, + "input":{"shape":"StartTestExecutionRequest"}, + "output":{"shape":"StartTestExecutionResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"InternalServerException"} + ], + "documentation":"The action to start test set execution.
" + }, + "StartTestSetGeneration":{ + "name":"StartTestSetGeneration", + "http":{ + "method":"PUT", + "requestUri":"/testsetgenerations", + "responseCode":202 + }, + "input":{"shape":"StartTestSetGenerationRequest"}, + "output":{"shape":"StartTestSetGenerationResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"} + ], + "documentation":"The action to start the generation of test set.
", + "idempotent":true + }, "StopBotRecommendation":{ "name":"StopBotRecommendation", "http":{ @@ -1298,9 +1535,52 @@ {"shape":"InternalServerException"} ], "documentation":"Updates the configuration of an existing slot type.
" + }, + "UpdateTestSet":{ + "name":"UpdateTestSet", + "http":{ + "method":"PUT", + "requestUri":"/testsets/{testSetId}", + "responseCode":200 + }, + "input":{"shape":"UpdateTestSetRequest"}, + "output":{"shape":"UpdateTestSetResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ValidationException"}, + {"shape":"PreconditionFailedException"}, + {"shape":"ConflictException"}, + {"shape":"InternalServerException"} + ], + "documentation":"The action to update the test set.
", + "idempotent":true } }, "shapes":{ + "ActiveContext":{ + "type":"structure", + "required":["name"], + "members":{ + "name":{ + "shape":"ActiveContextName", + "documentation":"The name of active context.
" + } + }, + "documentation":"The active context used in the test execution.
" + }, + "ActiveContextList":{ + "type":"list", + "member":{"shape":"ActiveContext"}, + "max":20, + "min":0 + }, + "ActiveContextName":{ + "type":"string", + "max":100, + "min":1, + "pattern":"^([A-Za-z]_?)+$" + }, "AdvancedRecognitionSetting":{ "type":"structure", "members":{ @@ -1311,6 +1591,41 @@ }, "documentation":"Provides settings that enable advanced recognition settings for slot values.
" }, + "AgentTurnResult":{ + "type":"structure", + "required":["expectedAgentPrompt"], + "members":{ + "expectedAgentPrompt":{ + "shape":"TestSetAgentPrompt", + "documentation":"The expected agent prompt for the agent turn in a test set execution.
" + }, + "actualAgentPrompt":{ + "shape":"TestSetAgentPrompt", + "documentation":"The actual agent prompt for the agent turn in a test set execution.
" + }, + "errorDetails":{"shape":"ExecutionErrorDetails"}, + "actualElicitedSlot":{ + "shape":"TestResultSlotName", + "documentation":"The actual elicited slot for the agent turn in a test set execution.
" + }, + "actualIntent":{ + "shape":"Name", + "documentation":"The actual intent for the agent turn in a test set execution.
" + } + }, + "documentation":"The information about the agent turn in a test set execution.
" + }, + "AgentTurnSpecification":{ + "type":"structure", + "required":["agentPrompt"], + "members":{ + "agentPrompt":{ + "shape":"TestSetAgentPrompt", + "documentation":"The agent prompt for the agent turn in a test set.
" + } + }, + "documentation":"The specification of an agent turn.
" + }, "AggregatedUtterancesFilter":{ "type":"structure", "required":[ @@ -1507,6 +1822,12 @@ }, "documentation":"Specifies the audio and DTMF input specification.
" }, + "AudioFileS3Location":{ + "type":"string", + "max":1024, + "min":1, + "pattern":"^s3://([a-z0-9\\\\.-]+)/(.+)$" + }, "AudioLogDestination":{ "type":"structure", "required":["s3Bucket"], @@ -1838,6 +2159,29 @@ "type":"list", "member":{"shape":"BotAliasSummary"} }, + "BotAliasTestExecutionTarget":{ + "type":"structure", + "required":[ + "botId", + "botAliasId", + "localeId" + ], + "members":{ + "botId":{ + "shape":"Id", + "documentation":"The bot Id of the bot alias used in the test set execution.
" + }, + "botAliasId":{ + "shape":"BotAliasId", + "documentation":"The bot alias Id of the bot alias used in the test set execution.
" + }, + "localeId":{ + "shape":"LocaleId", + "documentation":"The locale Id of the bot alias used in the test set execution.
" + } + }, + "documentation":"The target Amazon S3 location for the test set execution using a bot alias.
" + }, "BotExportSpecification":{ "type":"structure", "required":[ @@ -2738,35 +3082,214 @@ "max":20, "min":1 }, - "ConversationLogSettings":{ + "ConversationLevelIntentClassificationResultItem":{ + "type":"structure", + "required":[ + "intentName", + "matchResult" + ], + "members":{ + "intentName":{ + "shape":"Name", + "documentation":"The intent name used in the evaluation of intent level success or failure.
" + }, + "matchResult":{ + "shape":"TestResultMatchStatus", + "documentation":"The number of times the specific intent is used in the evaluation of intent level success or failure.
" + } + }, + "documentation":"The item listing the evaluation of intent level success or failure.
" + }, + "ConversationLevelIntentClassificationResults":{ + "type":"list", + "member":{"shape":"ConversationLevelIntentClassificationResultItem"} + }, + "ConversationLevelResultDetail":{ "type":"structure", + "required":["endToEndResult"], "members":{ - "textLogSettings":{ - "shape":"TextLogSettingsList", - "documentation":"The Amazon CloudWatch Logs settings for logging text and metadata.
" + "endToEndResult":{ + "shape":"TestResultMatchStatus", + "documentation":"The success or failure of the streaming of the conversation.
" }, - "audioLogSettings":{ - "shape":"AudioLogSettingsList", - "documentation":"The Amazon S3 settings for logging audio to an S3 bucket.
" + "speechTranscriptionResult":{ + "shape":"TestResultMatchStatus", + "documentation":"The speech transcription success or failure details of the conversation.
" } }, - "documentation":"Configures conversation logging that saves audio, text, and metadata for the conversations with your users.
" + "documentation":"The conversation level details of the conversation used in the test set.
" }, - "Count":{"type":"integer"}, - "CreateBotAliasRequest":{ + "ConversationLevelSlotResolutionResultItem":{ "type":"structure", "required":[ - "botAliasName", - "botId" + "intentName", + "slotName", + "matchResult" ], "members":{ - "botAliasName":{ + "intentName":{ "shape":"Name", - "documentation":"The alias to create. The name must be unique for the bot.
" + "documentation":"The intents used in the slots list for the slot resolution details.
" }, - "description":{ - "shape":"Description", - "documentation":"A description of the alias. Use this description to help identify the alias.
" + "slotName":{ + "shape":"TestResultSlotName", + "documentation":"The slot name in the slots list for the slot resolution details.
" + }, + "matchResult":{ + "shape":"TestResultMatchStatus", + "documentation":"The number of matching slots used in the slots listings for the slot resolution evaluation.
" + } + }, + "documentation":"The slots used for the slot resolution in the conversation.
" + }, + "ConversationLevelSlotResolutionResults":{ + "type":"list", + "member":{"shape":"ConversationLevelSlotResolutionResultItem"} + }, + "ConversationLevelTestResultItem":{ + "type":"structure", + "required":[ + "conversationId", + "endToEndResult", + "intentClassificationResults", + "slotResolutionResults" + ], + "members":{ + "conversationId":{ + "shape":"TestSetConversationId", + "documentation":"The conversation Id of the test result evaluation item.
" + }, + "endToEndResult":{ + "shape":"TestResultMatchStatus", + "documentation":"The end-to-end success or failure of the test result evaluation item.
" + }, + "speechTranscriptionResult":{ + "shape":"TestResultMatchStatus", + "documentation":"The speech transcription success or failure of the test result evaluation item.
" + }, + "intentClassificationResults":{ + "shape":"ConversationLevelIntentClassificationResults", + "documentation":"The intent classification of the test result evaluation item.
" + }, + "slotResolutionResults":{ + "shape":"ConversationLevelSlotResolutionResults", + "documentation":"The slot success or failure of the test result evaluation item.
" + } + }, + "documentation":"The test result evaluation item at the conversation level.
" + }, + "ConversationLevelTestResultItemList":{ + "type":"list", + "member":{"shape":"ConversationLevelTestResultItem"} + }, + "ConversationLevelTestResults":{ + "type":"structure", + "required":["items"], + "members":{ + "items":{ + "shape":"ConversationLevelTestResultItemList", + "documentation":"The item list in the test set results data at the conversation level.
" + } + }, + "documentation":"The test set results data at the conversation level.
" + }, + "ConversationLevelTestResultsFilterBy":{ + "type":"structure", + "members":{ + "endToEndResult":{ + "shape":"TestResultMatchStatus", + "documentation":"The selection of matched or mismatched end-to-end status to filter test set results data at the conversation level.
" + } + }, + "documentation":"The selection to filter the test set results data at the conversation level.
" + }, + "ConversationLogSettings":{ + "type":"structure", + "members":{ + "textLogSettings":{ + "shape":"TextLogSettingsList", + "documentation":"The Amazon CloudWatch Logs settings for logging text and metadata.
" + }, + "audioLogSettings":{ + "shape":"AudioLogSettingsList", + "documentation":"The Amazon S3 settings for logging audio to an S3 bucket.
" + } + }, + "documentation":"Configures conversation logging that saves audio, text, and metadata for the conversations with your users.
" + }, + "ConversationLogsDataSource":{ + "type":"structure", + "required":[ + "botId", + "botAliasId", + "localeId", + "filter" + ], + "members":{ + "botId":{ + "shape":"Id", + "documentation":"The bot Id from the conversation logs.
" + }, + "botAliasId":{ + "shape":"BotAliasId", + "documentation":"The bot alias Id from the conversation logs.
" + }, + "localeId":{ + "shape":"LocaleId", + "documentation":"The locale Id of the conversation log.
" + }, + "filter":{ + "shape":"ConversationLogsDataSourceFilterBy", + "documentation":"The filter for the data source of the conversation log.
" + } + }, + "documentation":"The data source that uses conversation logs.
" + }, + "ConversationLogsDataSourceFilterBy":{ + "type":"structure", + "required":[ + "startTime", + "endTime", + "inputMode" + ], + "members":{ + "startTime":{ + "shape":"Timestamp", + "documentation":"The start time for the conversation log.
" + }, + "endTime":{ + "shape":"Timestamp", + "documentation":"The end time for the conversation log.
" + }, + "inputMode":{ + "shape":"ConversationLogsInputModeFilter", + "documentation":"The selection to filter by input mode for the conversation logs.
" + } + }, + "documentation":"The selected data source to filter the conversation log.
" + }, + "ConversationLogsInputModeFilter":{ + "type":"string", + "enum":[ + "Speech", + "Text" + ] + }, + "Count":{"type":"integer"}, + "CreateBotAliasRequest":{ + "type":"structure", + "required":[ + "botAliasName", + "botId" + ], + "members":{ + "botAliasName":{ + "shape":"Name", + "documentation":"The alias to create. The name must be unique for the bot.
" + }, + "description":{ + "shape":"Description", + "documentation":"A description of the alias. Use this description to help identify the alias.
" }, "botVersion":{ "shape":"NumericalBotVersion", @@ -3184,7 +3707,7 @@ }, "botVersion":{ "shape":"DraftBotVersion", - "documentation":"The identifier of the version of the bot associated with this intent.
", + "documentation":"The version of the bot associated with this intent.
", "location":"uri", "locationName":"botVersion" }, @@ -3257,7 +3780,7 @@ }, "botVersion":{ "shape":"DraftBotVersion", - "documentation":"The identifier of the version of the bot associated with the intent.
" + "documentation":"The version of the bot associated with the intent.
" }, "localeId":{ "shape":"LocaleId", @@ -3331,7 +3854,7 @@ }, "principal":{ "shape":"PrincipalList", - "documentation":"An IAM principal, such as an IAM users, IAM roles, or AWS services that is allowed or denied access to a resource. For more information, see AWS JSON policy elements: Principal.
" + "documentation":"An IAM principal, such as an IAM user, IAM role, or Amazon Web Services services that is allowed or denied access to a resource. For more information, see Amazon Web Services JSON policy elements: Principal.
" }, "action":{ "shape":"OperationList", @@ -3419,7 +3942,7 @@ }, "multipleValuesSetting":{ "shape":"MultipleValuesSetting", - "documentation":"Indicates whether the slot returns multiple values in one response. Multi-value slots are only available in the en-US locale. If you set this value to true in any other locale, Amazon Lex throws a ValidationException.
If the multipleValuesSetting is not set, the default value is false.
Indicates whether the slot returns multiple values in one response. Multi-value slots are only available in the en-US locale. If you set this value to true in any other locale, Amazon Lex throws a ValidationException.
If the multipleValuesSetting is not set, the default value is false.
The name for the slot. A slot type name must be unique within the account.
" + "documentation":"The name for the slot. A slot type name must be unique within the intent.
" }, "description":{ "shape":"Description", @@ -3507,7 +4030,7 @@ }, "valueSelectionSetting":{ "shape":"SlotValueSelectionSetting", - "documentation":"Determines the strategy that Amazon Lex uses to select a value from the list of possible values. The field can be set to one of the following values:
OriginalValue - Returns the value entered by the user, if the user value is similar to the slot value.
TopResolution - If there is a resolution list for the slot, return the first value in the resolution list. If there is no resolution list, return null.
If you don't specify the valueSelectionSetting parameter, the default is OriginalValue.
Determines the strategy that Amazon Lex uses to select a value from the list of possible values. The field can be set to one of the following values:
ORIGINAL_VALUE - Returns the value entered by the user, if the user value is similar to the slot value.
TOP_RESOLUTION - If there is a resolution list for the slot, return the first value in the resolution list. If there is no resolution list, return null.
If you don't specify the valueSelectionSetting parameter, the default is ORIGINAL_VALUE.
The test set Id for the test set discrepancy report.
", + "location":"uri", + "locationName":"testSetId" + }, + "target":{ + "shape":"TestSetDiscrepancyReportResourceTarget", + "documentation":"The target bot for the test set discrepancy report.
" + } + } + }, + "CreateTestSetDiscrepancyReportResponse":{ + "type":"structure", + "members":{ + "testSetDiscrepancyReportId":{ + "shape":"Id", + "documentation":"The unique identifier of the test set discrepancy report to describe.
" + }, + "creationDateTime":{ + "shape":"Timestamp", + "documentation":"The creation date and time for the test set discrepancy report.
" + }, + "testSetId":{ + "shape":"Id", + "documentation":"The test set Id for the test set discrepancy report.
" + }, + "target":{ + "shape":"TestSetDiscrepancyReportResourceTarget", + "documentation":"The target bot for the test set discrepancy report.
" + } + } + }, "CreateUploadUrlRequest":{ "type":"structure", "members":{ @@ -3763,7 +4326,7 @@ "members":{ "childDirected":{ "shape":"ChildDirected", - "documentation":"For each Amazon Lex bot created with the Amazon Lex Model Building Service, you must specify whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to the Children's Online Privacy Protection Act (COPPA) by specifying true or false in the childDirected field. By specifying true in the childDirected field, you confirm that your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. By specifying false in the childDirected field, you confirm that your use of Amazon Lex is not related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. You may not specify a default value for the childDirected field that does not accurately reflect whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. If your use of Amazon Lex relates to a website, program, or other application that is directed in whole or in part, to children under age 13, you must obtain any required verifiable parental consent under COPPA. For information regarding the use of Amazon Lex in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13, see the Amazon Lex FAQ.
For each Amazon Lex bot created with the Amazon Lex Model Building Service, you must specify whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to the Children's Online Privacy Protection Act (COPPA) by specifying true or false in the childDirected field. By specifying true in the childDirected field, you confirm that your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. By specifying false in the childDirected field, you confirm that your use of Amazon Lex is not related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. You may not specify a default value for the childDirected field that does not accurately reflect whether your use of Amazon Lex is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. If your use of Amazon Lex relates to a website, program, or other application that is directed in whole or in part, to children under age 13, you must obtain any required verifiable parental consent under COPPA. For information regarding the use of Amazon Lex in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13, see the Amazon Lex FAQ.
By default, data stored by Amazon Lex is encrypted. The DataPrivacy structure provides settings that determine how Amazon Lex handles special cases of securing the data for your bot.
The test set Id of the test set to be deleted.
", + "location":"uri", + "locationName":"testSetId" + } + } + }, "DeleteUtterancesRequest":{ "type":"structure", "required":["botId"], @@ -4379,7 +4954,7 @@ }, "botVersion":{ "shape":"BotVersion", - "documentation":"The identifier of the version of the bot associated with the locale.
", + "documentation":"The version of the bot associated with the locale.
", "location":"uri", "locationName":"botVersion" }, @@ -4400,7 +4975,7 @@ }, "botVersion":{ "shape":"BotVersion", - "documentation":"The identifier of the version of the bot associated with the locale.
" + "documentation":"The version of the bot associated with the locale.
" }, "localeId":{ "shape":"LocaleId", @@ -4958,7 +5533,7 @@ }, "initialResponseSetting":{ "shape":"InitialResponseSetting", - "documentation":"" + "documentation":"Configuration setting for a response sent to the user before Amazon Lex starts eliciting slots.
" } } }, @@ -5183,6 +5758,234 @@ } } }, + "DescribeTestExecutionRequest":{ + "type":"structure", + "required":["testExecutionId"], + "members":{ + "testExecutionId":{ + "shape":"Id", + "documentation":"The execution Id of the test set execution.
", + "location":"uri", + "locationName":"testExecutionId" + } + } + }, + "DescribeTestExecutionResponse":{ + "type":"structure", + "members":{ + "testExecutionId":{ + "shape":"Id", + "documentation":"The execution Id for the test set execution.
" + }, + "creationDateTime":{ + "shape":"Timestamp", + "documentation":"The execution creation date and time for the test set execution.
" + }, + "lastUpdatedDateTime":{ + "shape":"Timestamp", + "documentation":"The date and time of the last update for the execution.
" + }, + "testExecutionStatus":{ + "shape":"TestExecutionStatus", + "documentation":"The test execution status for the test execution.
" + }, + "testSetId":{ + "shape":"Id", + "documentation":"The test set Id for the test set execution.
" + }, + "testSetName":{ + "shape":"Name", + "documentation":"The test set name of the test set execution.
" + }, + "target":{ + "shape":"TestExecutionTarget", + "documentation":"The target bot for the test set execution details.
" + }, + "apiMode":{ + "shape":"TestExecutionApiMode", + "documentation":"Indicates whether we use streaming or non-streaming APIs are used for the test set execution. For streaming, StartConversation Amazon Lex Runtime API is used. Whereas for non-streaming, RecognizeUtterance and RecognizeText Amazon Lex Runtime API is used.
Indicates whether the test set is audio or text.
" + }, + "failureReasons":{ + "shape":"FailureReasons", + "documentation":"Reasons for the failure of the test set execution.
" + } + } + }, + "DescribeTestSetDiscrepancyReportRequest":{ + "type":"structure", + "required":["testSetDiscrepancyReportId"], + "members":{ + "testSetDiscrepancyReportId":{ + "shape":"Id", + "documentation":"The unique identifier of the test set discrepancy report.
", + "location":"uri", + "locationName":"testSetDiscrepancyReportId" + } + } + }, + "DescribeTestSetDiscrepancyReportResponse":{ + "type":"structure", + "members":{ + "testSetDiscrepancyReportId":{ + "shape":"Id", + "documentation":"The unique identifier of the test set discrepancy report to describe.
" + }, + "testSetId":{ + "shape":"Id", + "documentation":"The test set Id for the test set discrepancy report.
" + }, + "creationDateTime":{ + "shape":"Timestamp", + "documentation":"The time and date of creation for the test set discrepancy report.
" + }, + "target":{ + "shape":"TestSetDiscrepancyReportResourceTarget", + "documentation":"The target bot location for the test set discrepancy report.
" + }, + "testSetDiscrepancyReportStatus":{ + "shape":"TestSetDiscrepancyReportStatus", + "documentation":"The status for the test set discrepancy report.
" + }, + "lastUpdatedDataTime":{ + "shape":"Timestamp", + "documentation":"The date and time of the last update for the test set discrepancy report.
" + }, + "testSetDiscrepancyTopErrors":{ + "shape":"TestSetDiscrepancyErrors", + "documentation":"The top 200 error results from the test set discrepancy report.
" + }, + "testSetDiscrepancyRawOutputUrl":{ + "shape":"PresignedS3Url", + "documentation":"Pre-signed Amazon S3 URL to download the test set discrepancy report.
" + }, + "failureReasons":{ + "shape":"FailureReasons", + "documentation":"The failure report for the test set discrepancy report generation action.
" + } + } + }, + "DescribeTestSetGenerationRequest":{ + "type":"structure", + "required":["testSetGenerationId"], + "members":{ + "testSetGenerationId":{ + "shape":"Id", + "documentation":"The unique identifier of the test set generation.
", + "location":"uri", + "locationName":"testSetGenerationId" + } + } + }, + "DescribeTestSetGenerationResponse":{ + "type":"structure", + "members":{ + "testSetGenerationId":{ + "shape":"Id", + "documentation":"The unique identifier of the test set generation.
" + }, + "testSetGenerationStatus":{ + "shape":"TestSetGenerationStatus", + "documentation":"The status for the test set generation.
" + }, + "failureReasons":{ + "shape":"FailureReasons", + "documentation":"The reasons the test set generation failed.
" + }, + "testSetId":{ + "shape":"Id", + "documentation":"The unique identifier for the test set created for the generated test set.
" + }, + "testSetName":{ + "shape":"Name", + "documentation":"The test set name for the generated test set.
" + }, + "description":{ + "shape":"Description", + "documentation":"The test set description for the test set generation.
" + }, + "storageLocation":{ + "shape":"TestSetStorageLocation", + "documentation":"The Amazon S3 storage location for the test set generation.
" + }, + "generationDataSource":{ + "shape":"TestSetGenerationDataSource", + "documentation":"The data source of the test set used for the test set generation.
" + }, + "roleArn":{ + "shape":"RoleArn", + "documentation":"The roleARN of the test set used for the test set generation.
" + }, + "creationDateTime":{ + "shape":"Timestamp", + "documentation":"The creation date and time for the test set generation.
" + }, + "lastUpdatedDateTime":{ + "shape":"Timestamp", + "documentation":"The date and time of the last update for the test set generation.
" + } + } + }, + "DescribeTestSetRequest":{ + "type":"structure", + "required":["testSetId"], + "members":{ + "testSetId":{ + "shape":"Id", + "documentation":"The test set Id for the test set request.
", + "location":"uri", + "locationName":"testSetId" + } + } + }, + "DescribeTestSetResponse":{ + "type":"structure", + "members":{ + "testSetId":{ + "shape":"Id", + "documentation":"The test set Id for the test set response.
" + }, + "testSetName":{ + "shape":"Name", + "documentation":"The test set name of the test set.
" + }, + "description":{ + "shape":"Description", + "documentation":"The description of the test set.
" + }, + "modality":{ + "shape":"TestSetModality", + "documentation":"Indicates whether the test set is audio or text data.
" + }, + "status":{ + "shape":"TestSetStatus", + "documentation":"The status of the test set.
" + }, + "roleArn":{ + "shape":"RoleArn", + "documentation":"The roleARN used for any operation in the test set to access resources in the Amazon Web Services account.
" + }, + "numTurns":{ + "shape":"Count", + "documentation":"The total number of agent and user turn in the test set.
" + }, + "storageLocation":{ + "shape":"TestSetStorageLocation", + "documentation":"The Amazon S3 storage location for the test set data.
" + }, + "creationDateTime":{ + "shape":"Timestamp", + "documentation":"The creation date and time for the test set data.
" + }, + "lastUpdatedDateTime":{ + "shape":"Timestamp", + "documentation":"The date and time for the last update of the test set data.
" + } + } + }, "Description":{ "type":"string", "max":200, @@ -5205,7 +6008,7 @@ "documentation":"When true the next message for the intent is not used.
" } }, - "documentation":"Defines the action that the bot executes at runtime when the conversation reaches this step.
" + "documentation":"Defines the action that the bot executes at runtime when the conversation reaches this step.
" }, "DialogActionType":{ "type":"string", @@ -5246,7 +6049,7 @@ "documentation":"Contains the responses and actions that Amazon Lex takes after the Lambda function is complete.
" } }, - "documentation":"Settings that specify the dialog code hook that is called by Amazon Lex at a step of the conversation.
" + "documentation":"Settings that specify the dialog code hook that is called by Amazon Lex at a step of the conversation.
" }, "DialogCodeHookSettings":{ "type":"structure", @@ -5328,17 +6131,35 @@ }, "ErrorMessage":{"type":"string"}, "ExceptionMessage":{"type":"string"}, - "ExportFilter":{ + "ExecutionErrorDetails":{ "type":"structure", "required":[ - "name", - "values", - "operator" + "errorCode", + "errorMessage" ], "members":{ - "name":{ - "shape":"ExportFilterName", - "documentation":"The name of the field to use for filtering.
" + "errorCode":{ + "shape":"NonEmptyString", + "documentation":"The error code for the error.
" + }, + "errorMessage":{ + "shape":"NonEmptyString", + "documentation":"The message describing the error.
" + } + }, + "documentation":"Details about an error in an execution of a test set.
" + }, + "ExportFilter":{ + "type":"structure", + "required":[ + "name", + "values", + "operator" + ], + "members":{ + "name":{ + "shape":"ExportFilterName", + "documentation":"The name of the field to use for filtering.
" }, "values":{ "shape":"FilterValues", @@ -5382,6 +6203,10 @@ "customVocabularyExportSpecification":{ "shape":"CustomVocabularyExportSpecification", "documentation":"The parameters required to export a custom vocabulary.
" + }, + "testSetExportSpecification":{ + "shape":"TestSetExportSpecification", + "documentation":"Specifications for the test set that is exported as a resource.
" } }, "documentation":"Provides information about the bot or bot locale that you want to export. You can specify the botExportSpecification or the botLocaleExportSpecification, but not both.
One to 5 message groups that contain start messages. Amazon Lex chooses one of the messages to play to the user.
" + "documentation":"1 - 5 message groups that contain start messages. Amazon Lex chooses one of the messages to play to the user.
" }, "allowInterrupt":{ "shape":"BoxedBoolean", @@ -5579,7 +6404,7 @@ }, "messageGroups":{ "shape":"MessageGroupsList", - "documentation":"One to 5 message groups that contain update messages. Amazon Lex chooses one of the messages to play to the user.
" + "documentation":"1 - 5 message groups that contain update messages. Amazon Lex chooses one of the messages to play to the user.
" }, "allowInterrupt":{ "shape":"BoxedBoolean", @@ -5611,6 +6436,31 @@ }, "documentation":"Provides information for updating the user on the progress of fulfilling an intent.
" }, + "GetTestExecutionArtifactsUrlRequest":{ + "type":"structure", + "required":["testExecutionId"], + "members":{ + "testExecutionId":{ + "shape":"Id", + "documentation":"The unique identifier of the completed test execution.
", + "location":"uri", + "locationName":"testExecutionId" + } + } + }, + "GetTestExecutionArtifactsUrlResponse":{ + "type":"structure", + "members":{ + "testExecutionId":{ + "shape":"Id", + "documentation":"The unique identifier of the completed test execution.
" + }, + "downloadArtifactsUrl":{ + "shape":"PresignedS3Url", + "documentation":"The pre-signed Amazon S3 URL to download the completed test execution.
" + } + } + }, "GrammarSlotTypeSetting":{ "type":"structure", "members":{ @@ -5630,15 +6480,15 @@ "members":{ "s3BucketName":{ "shape":"S3BucketName", - "documentation":"The name of the S3 bucket that contains the grammar source.
" + "documentation":"The name of the Amazon S3 bucket that contains the grammar source.
" }, "s3ObjectKey":{ "shape":"S3ObjectPath", - "documentation":"The path to the grammar in the S3 bucket.
" + "documentation":"The path to the grammar in the Amazon S3 bucket.
" }, "kmsKeyArn":{ "shape":"KmsKeyArn", - "documentation":"The Amazon KMS key required to decrypt the contents of the grammar, if any.
" + "documentation":"The KMS key required to decrypt the contents of the grammar, if any.
" } }, "documentation":"Describes the Amazon S3 bucket name and location for the grammar that is the source for the slot type.
" @@ -5677,7 +6527,8 @@ "type":"string", "enum":[ "LexJson", - "TSV" + "TSV", + "CSV" ] }, "ImportExportFilePassword":{ @@ -5737,7 +6588,11 @@ "shape":"BotLocaleImportSpecification", "documentation":"Parameters for importing a bot locale.
" }, - "customVocabularyImportSpecification":{"shape":"CustomVocabularyImportSpecification"} + "customVocabularyImportSpecification":{"shape":"CustomVocabularyImportSpecification"}, + "testSetImportResourceSpecification":{ + "shape":"TestSetImportResourceSpecification", + "documentation":"Specifications for the test set that is imported.
" + } }, "documentation":"Provides information about the bot or bot locale that you want to import. You can specify the botImportSpecification or the botLocaleImportSpecification, but not both.
The name of the context.
" } }, - "documentation":"The name of a context that must be active for an intent to be selected by Amazon Lex.
" + "documentation":"A context that must be active for an intent to be selected by Amazon Lex.
" }, "InputContextsList":{ "type":"list", @@ -5858,6 +6714,84 @@ "max":5, "min":0 }, + "InputSessionStateSpecification":{ + "type":"structure", + "members":{ + "sessionAttributes":{ + "shape":"StringMap", + "documentation":"Session attributes for the session state.
" + }, + "activeContexts":{ + "shape":"ActiveContextList", + "documentation":"Active contexts for the session state.
" + }, + "runtimeHints":{ + "shape":"RuntimeHints", + "documentation":"Runtime hints for the session state.
" + } + }, + "documentation":"Specifications for the current state of the dialog between the user and the bot in the test set.
" + }, + "IntentClassificationTestResultItem":{ + "type":"structure", + "required":[ + "intentName", + "multiTurnConversation", + "resultCounts" + ], + "members":{ + "intentName":{ + "shape":"Name", + "documentation":"The name of the intent.
" + }, + "multiTurnConversation":{ + "shape":"Boolean", + "documentation":"Indicates whether the conversation involves multiple turns or not.
" + }, + "resultCounts":{ + "shape":"IntentClassificationTestResultItemCounts", + "documentation":"The result of the intent classification test.
" + } + }, + "documentation":"Information for an intent that is classified by the test workbench.
" + }, + "IntentClassificationTestResultItemCounts":{ + "type":"structure", + "required":[ + "totalResultCount", + "intentMatchResultCounts" + ], + "members":{ + "totalResultCount":{ + "shape":"Count", + "documentation":"The total number of results in the intent classification test.
" + }, + "speechTranscriptionResultCounts":{ + "shape":"TestResultMatchStatusCountMap", + "documentation":"The number of matched, mismatched, and execution error results for speech transcription for the intent.
" + }, + "intentMatchResultCounts":{ + "shape":"TestResultMatchStatusCountMap", + "documentation":"The number of matched and mismatched results for intent recognition for the intent.
" + } + }, + "documentation":"The number of items in the intent classification test.
" + }, + "IntentClassificationTestResultItemList":{ + "type":"list", + "member":{"shape":"IntentClassificationTestResultItem"} + }, + "IntentClassificationTestResults":{ + "type":"structure", + "required":["items"], + "members":{ + "items":{ + "shape":"IntentClassificationTestResultItemList", + "documentation":"A list of the results for the intent classification test.
" + } + }, + "documentation":"Information for the results of the intent classification test.
" + }, "IntentClosingSetting":{ "type":"structure", "members":{ @@ -5970,6 +6904,44 @@ "max":1, "min":1 }, + "IntentLevelSlotResolutionTestResultItem":{ + "type":"structure", + "required":[ + "intentName", + "multiTurnConversation", + "slotResolutionResults" + ], + "members":{ + "intentName":{ + "shape":"Name", + "documentation":"The name of the intent that was recognized.
" + }, + "multiTurnConversation":{ + "shape":"Boolean", + "documentation":"Indicates whether the conversation involves multiple turns or not.
" + }, + "slotResolutionResults":{ + "shape":"SlotResolutionTestResultItems", + "documentation":"The results for the slot resolution in the test execution result.
" + } + }, + "documentation":"Information about intent-level slot resolution in a test result.
" + }, + "IntentLevelSlotResolutionTestResultItemList":{ + "type":"list", + "member":{"shape":"IntentLevelSlotResolutionTestResultItem"} + }, + "IntentLevelSlotResolutionTestResults":{ + "type":"structure", + "required":["items"], + "members":{ + "items":{ + "shape":"IntentLevelSlotResolutionTestResultItemList", + "documentation":"Indicates the items for the slot level resolution for the intents.
" + } + }, + "documentation":"Indicates the success or failure of slots at the intent level.
" + }, "IntentOverride":{ "type":"structure", "members":{ @@ -5979,7 +6951,7 @@ }, "slots":{ "shape":"SlotValueOverrideMap", - "documentation":"A map of all of the slot value overrides for the intent. The name of the slot maps to the value of the slot. Slots that are not included in the map aren't overridden.,
" + "documentation":"A map of all of the slot value overrides for the intent. The name of the slot maps to the value of the slot. Slots that are not included in the map aren't overridden.
" } }, "documentation":"Override settings to configure the intent state.
" @@ -6079,18 +7051,18 @@ "members":{ "kendraIndex":{ "shape":"KendraIndexArn", - "documentation":"The Amazon Resource Name (ARN) of the Amazon Kendra index that you want the AMAZON.KendraSearchIntent intent to search. The index must be in the same account and Region as the Amazon Lex bot.
" + "documentation":"The Amazon Resource Name (ARN) of the Amazon Kendra index that you want the AMAZON.KendraSearchIntent intent to search. The index must be in the same account and Region as the Amazon Lex bot.
Determines whether the AMAZON.KendraSearchIntent intent uses a custom query string to query the Amazon Kendra index.
" + "documentation":"Determines whether the AMAZON.KendraSearchIntent intent uses a custom query string to query the Amazon Kendra index.
A query filter that Amazon Lex sends to Amazon Kendra to filter the response from a query. The filter is in the format defined by Amazon Kendra. For more information, see Filtering queries.
" } }, - "documentation":"Provides configuration information for the AMAZON.KendraSearchIntent intent. When you use this intent, Amazon Lex searches the specified Amazon Kendra index and returns documents from the index that match the user's utterance.
" + "documentation":"Provides configuration information for the AMAZON.KendraSearchIntent intent. When you use this intent, Amazon Lex searches the specified Amazon Kendra index and returns documents from the index that match the user's utterance.
The unique identifier of the test execution to list the result items.
", + "location":"uri", + "locationName":"testExecutionId" + }, + "resultFilterBy":{ + "shape":"TestExecutionResultFilterBy", + "documentation":"The filter for the list of results from the test set execution.
" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"The maximum number of test execution result items to return in each page. If there are fewer results than the max page size, only the actual number of results are returned.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"If the response from the ListTestExecutionResultItems operation contains more results than specified in the maxResults parameter, a token is returned in the response. Use that token in the nextToken parameter to return the next page of results.
The list of results from the test execution.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"A token that indicates whether there are more results to return in a response to the ListTestExecutionResultItems operation. If the nextToken field is present, you send the contents as the nextToken parameter of a ListTestExecutionResultItems operation request to get the next page of results.
The sort order of the test set executions.
" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"The maximum number of test executions to return in each page. If there are fewer results than the max page size, only the actual number of results are returned.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"If the response from the ListTestExecutions operation contains more results than specified in the maxResults parameter, a token is returned in the response. Use that token in the nextToken parameter to return the next page of results.
" + } + } + }, + "ListTestExecutionsResponse":{ + "type":"structure", + "members":{ + "testExecutions":{ + "shape":"TestExecutionSummaryList", + "documentation":"The list of test executions.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"A token that indicates whether there are more results to return in a response to the ListTestExecutions operation. If the nextToken field is present, you send the contents as the nextToken parameter of a ListTestExecutions operation request to get the next page of results.
" + } + } + }, + "ListTestSetRecordsRequest":{ + "type":"structure", + "required":["testSetId"], + "members":{ + "testSetId":{ + "shape":"Id", + "documentation":"The identifier of the test set to list its test set records.
", + "location":"uri", + "locationName":"testSetId" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"The maximum number of test set records to return in each page. If there are fewer records than the max page size, only the actual number of records are returned.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"If the response from the ListTestSetRecords operation contains more results than specified in the maxResults parameter, a token is returned in the response. Use that token in the nextToken parameter to return the next page of results.
" + } + } + }, + "ListTestSetRecordsResponse":{ + "type":"structure", + "members":{ + "testSetRecords":{ + "shape":"TestSetTurnRecordList", + "documentation":"The list of records from the test set.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"A token that indicates whether there are more records to return in a response to the ListTestSetRecords operation. If the nextToken field is present, you send the contents as the nextToken parameter of a ListTestSetRecords operation request to get the next page of records.
" + } + } + }, + "ListTestSetsRequest":{ + "type":"structure", + "members":{ + "sortBy":{ + "shape":"TestSetSortBy", + "documentation":"The sort order for the list of test sets.
" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"The maximum number of test sets to return in each page. If there are fewer results than the max page size, only the actual number of results are returned.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"If the response from the ListTestSets operation contains more results than specified in the maxResults parameter, a token is returned in the response. Use that token in the nextToken parameter to return the next page of results.
" + } + } + }, + "ListTestSetsResponse":{ + "type":"structure", + "members":{ + "testSets":{ + "shape":"TestSetSummaryList", + "documentation":"The list of test sets.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"A token that indicates whether there are more results to return in a response to the ListTestSets operation. If the nextToken field is present, you send the contents as the nextToken parameter of a ListTestSets operation request to get the next page of results.
" + } + } + }, "LocaleId":{"type":"string"}, "LocaleName":{"type":"string"}, "LogPrefix":{ @@ -7077,7 +8182,7 @@ "documentation":"A message that defines a response card that the client application can show to the user.
" } }, - "documentation":"The object that provides message text and it's type.
" + "documentation":"The object that provides message text and its type.
" }, "MessageGroup":{ "type":"structure", @@ -7128,7 +8233,7 @@ "type":"string", "max":100, "min":1, - "pattern":"^([0-9a-zA-Z][_-]?)+$" + "pattern":"^([0-9a-zA-Z][_-]?){1,100}$" }, "NewCustomVocabularyItem":{ "type":"structure", @@ -7234,6 +8339,48 @@ "max":10, "min":0 }, + "OverallTestResultItem":{ + "type":"structure", + "required":[ + "multiTurnConversation", + "totalResultCount", + "endToEndResultCounts" + ], + "members":{ + "multiTurnConversation":{ + "shape":"Boolean", + "documentation":"Indicates whether the conversation contains multiple turns or not.
" + }, + "totalResultCount":{ + "shape":"Count", + "documentation":"The total number of overall results in the result of the test execution.
" + }, + "speechTranscriptionResultCounts":{ + "shape":"TestResultMatchStatusCountMap", + "documentation":"The number of speech transcription results in the overall test.
" + }, + "endToEndResultCounts":{ + "shape":"TestResultMatchStatusCountMap", + "documentation":"The number of results that succeeded.
" + } + }, + "documentation":"Information about the overall results for a test execution result.
" + }, + "OverallTestResultItemList":{ + "type":"list", + "member":{"shape":"OverallTestResultItem"} + }, + "OverallTestResults":{ + "type":"structure", + "required":["items"], + "members":{ + "items":{ + "shape":"OverallTestResultItemList", + "documentation":"A list of the overall test results.
" + } + }, + "documentation":"Information about the overall test results.
" + }, "ParentBotNetwork":{ "type":"structure", "required":[ @@ -7378,7 +8525,7 @@ "members":{ "service":{ "shape":"ServicePrincipal", - "documentation":"The name of the AWS service that should allowed or denied access to an Amazon Lex action.
" + "documentation":"The name of the Amazon Web Services service that should be allowed or denied access to an Amazon Lex action.
" }, "arn":{ "shape":"PrincipalArn", @@ -7509,6 +8656,11 @@ "type":"list", "member":{"shape":"RecommendedIntentSummary"} }, + "RecordNumber":{ + "type":"long", + "max":200000, + "min":1 + }, "RegexPattern":{ "type":"string", "max":300, @@ -7570,6 +8722,52 @@ "min":32, "pattern":"^arn:aws:iam::[0-9]{12}:role/.*$" }, + "RuntimeHintDetails":{ + "type":"structure", + "members":{ + "runtimeHintValues":{ + "shape":"RuntimeHintValuesList", + "documentation":"One or more strings that Amazon Lex should look for in the input to the bot. Each phrase is given preference when deciding on slot values.
" + }, + "subSlotHints":{ + "shape":"SlotHintsSlotMap", + "documentation":"A map of constituent sub slot names inside a composite slot in the intent and the phrases that should be added for each sub slot. Inside each composite slot's hints, this structure provides a mechanism to add granular sub slot phrases. Only sub slot hints are supported for composite slots. The intent name, composite slot name, and the constituent sub slot names must exist.
" + } + }, + "documentation":"Provides an array of phrases that should be given preference when resolving values for a slot.
" + }, + "RuntimeHintPhrase":{ + "type":"string", + "max":140, + "min":1 + }, + "RuntimeHintValue":{ + "type":"structure", + "required":["phrase"], + "members":{ + "phrase":{ + "shape":"RuntimeHintPhrase", + "documentation":"The phrase that Amazon Lex should look for in the user's input to the bot.
" + } + }, + "documentation":"Provides the phrase that Amazon Lex should look for in the user's input to the bot.
" + }, + "RuntimeHintValuesList":{ + "type":"list", + "member":{"shape":"RuntimeHintValue"}, + "max":100, + "min":1 + }, + "RuntimeHints":{ + "type":"structure", + "members":{ + "slotHints":{ + "shape":"SlotHintsIntentMap", + "documentation":"A list of the slots in the intent that should have runtime hints added, and the phrases that should be added for each slot.
The first level of the slotHints map is the name of the intent. The second level is the name of the slot within the intent. For more information, see Using hints to improve accuracy.
The intent name and slot name must exist.
" + } + }, + "documentation":"You can provide Amazon Lex with hints to the phrases that a customer is likely to use for a slot. When a slot with hints is resolved, the phrases in the runtime hints are preferred in the resolution. You can provide hints for a maximum of 100 intents. You can provide a maximum of 100 slots.
Before you can use runtime hints with an existing bot, you must first rebuild the bot.
For more information, see Using runtime hints to improve recognition of slot values.
" + }, "S3BucketArn":{ "type":"string", "max":2048, @@ -7585,7 +8783,7 @@ "members":{ "kmsKeyArn":{ "shape":"KmsKeyArn", - "documentation":"The Amazon Resource Name (ARN) of an AWS Key Management Service (KMS) key for encrypting audio log files stored in an S3 bucket.
" + "documentation":"The Amazon Resource Name (ARN) of an Amazon Web Services Key Management Service (KMS) key for encrypting audio log files stored in an S3 bucket.
" }, "s3BucketArn":{ "shape":"S3BucketArn", @@ -7925,6 +9123,16 @@ "max":1, "min":1 }, + "SlotHintsIntentMap":{ + "type":"map", + "key":{"shape":"Name"}, + "value":{"shape":"SlotHintsSlotMap"} + }, + "SlotHintsSlotMap":{ + "type":"map", + "key":{"shape":"Name"}, + "value":{"shape":"RuntimeHintDetails"} + }, "SlotPrioritiesList":{ "type":"list", "member":{"shape":"SlotPriority"} @@ -7938,7 +9146,7 @@ "members":{ "priority":{ "shape":"PriorityValue", - "documentation":"The priority that a slot should be elicited.
" + "documentation":"The priority that Amazon Lex should apply to the slot.
" }, "slotId":{ "shape":"Id", @@ -7947,6 +9155,50 @@ }, "documentation":"Sets the priority that Amazon Lex should use when eliciting slot values from a user.
" }, + "SlotResolutionTestResultItem":{ + "type":"structure", + "required":[ + "slotName", + "resultCounts" + ], + "members":{ + "slotName":{ + "shape":"TestResultSlotName", + "documentation":"The name of the slot.
" + }, + "resultCounts":{ + "shape":"SlotResolutionTestResultItemCounts", + "documentation":"A result for slot resolution in the results of a test execution.
" + } + }, + "documentation":"Information about the success and failure rate of slot resolution in the results of a test execution.
" + }, + "SlotResolutionTestResultItemCounts":{ + "type":"structure", + "required":[ + "totalResultCount", + "slotMatchResultCounts" + ], + "members":{ + "totalResultCount":{ + "shape":"Count", + "documentation":"The total number of results.
" + }, + "speechTranscriptionResultCounts":{ + "shape":"TestResultMatchStatusCountMap", + "documentation":"The number of matched, mismatched, and execution error results for speech transcription for the slot.
" + }, + "slotMatchResultCounts":{ + "shape":"TestResultMatchStatusCountMap", + "documentation":"The number of matched and mismatched results for slot resolution for the slot.
" + } + }, + "documentation":"Information about the counts for a slot resolution in the results of a test execution.
" + }, + "SlotResolutionTestResultItems":{ + "type":"list", + "member":{"shape":"SlotResolutionTestResultItem"} + }, "SlotShape":{ "type":"string", "enum":[ @@ -8130,7 +9382,7 @@ }, "slotTypeCategory":{ "shape":"SlotTypeCategory", - "documentation":"Indicates the type of the slot type.
Custom - A slot type that you created using custom values. For more information, see Creating custom slot types.
Extended - A slot type created by extending the AMAZON.AlphaNumeric built-in slot type. For more information, see AMAZON.AlphaNumeric.
ExternalGrammar - A slot type using a custom GRXML grammar to define values. For more information, see Using a custom grammar slot type.
Indicates the type of the slot type.
Custom - A slot type that you created using custom values. For more information, see Creating custom slot types.
Extended - A slot type created by extending the AMAZON.AlphaNumeric built-in slot type. For more information, see AMAZON.AlphaNumeric.
ExternalGrammar - A slot type using a custom GRXML grammar to define values. For more information, see Using a custom grammar slot type.
Provides summary information about a slot type.
" @@ -8195,7 +9447,7 @@ "documentation":"Specifies the settings that Amazon Lex uses when a slot value is successfully entered by a user.
" } }, - "documentation":"Specifies the elicitation setting details for constituent sub slots of a composite slot.
" + "documentation":"Specifies the elicitation setting details for eliciting a slot.
" }, "SlotValueOverride":{ "type":"structure", @@ -8226,7 +9478,7 @@ "members":{ "pattern":{ "shape":"RegexPattern", - "documentation":"A regular expression used to validate the value of a slot.
Use a standard regular expression. Amazon Lex supports the following characters in the regular expression:
A-Z, a-z
0-9
Unicode characters (\"\\ u<Unicode>\")
Represent Unicode characters with four digits, for example \"\\u0041\" or \"\\u005A\".
The following regular expression operators are not supported:
Infinite repeaters: *, +, or {x,} with no upper bound.
Wild card (.)
A regular expression used to validate the value of a slot.
Use a standard regular expression. Amazon Lex supports the following characters in the regular expression:
A-Z, a-z
0-9
Unicode characters (\"\\u<Unicode>\")
Represent Unicode characters with four digits, for example \"\\u0041\" or \"\\u005A\".
The following regular expression operators are not supported:
Infinite repeaters: *, +, or {x,} with no upper bound.
Wild card (.)
Provides a regular expression used to validate the value of a slot.
" @@ -8245,7 +9497,7 @@ "members":{ "resolutionStrategy":{ "shape":"SlotValueResolutionStrategy", - "documentation":"Determines the slot resolution strategy that Amazon Lex uses to return slot type values. The field can be set to one of the following values:
OriginalValue - Returns the value entered by the user, if the user value is similar to the slot value.
TopResolution - If there is a resolution list for the slot, return the first value in the resolution list as the slot type value. If there is no resolution list, null is returned.
If you don't specify the valueSelectionStrategy, the default is OriginalValue.
" + "documentation":"Determines the slot resolution strategy that Amazon Lex uses to return slot type values. The field can be set to one of the following values:
ORIGINAL_VALUE - Returns the value entered by the user, if the user value is similar to the slot value.
TOP_RESOLUTION - If there is a resolution list for the slot, return the first value in the resolution list as the slot type value. If there is no resolution list, null is returned.
If you don't specify the valueSelectionStrategy, the default is ORIGINAL_VALUE.
Provides settings that enable advanced recognition settings for slot values.
" + "documentation":"Provides settings that enable advanced recognition of slot values. You can use this to enable the use of slot values as a custom vocabulary for recognizing user utterances.
" } }, "documentation":"Contains settings used by Amazon Lex to select a slot value.
" @@ -8412,7 +9664,140 @@ } } }, - "StillWaitingResponseFrequency":{ + "StartTestExecutionRequest":{ + "type":"structure", + "required":[ + "testSetId", + "target", + "apiMode" + ], + "members":{ + "testSetId":{ + "shape":"Id", + "documentation":"The test set Id for the test set execution.
", + "location":"uri", + "locationName":"testSetId" + }, + "target":{ + "shape":"TestExecutionTarget", + "documentation":"The target bot for the test set execution.
" + }, + "apiMode":{ + "shape":"TestExecutionApiMode", + "documentation":"Indicates whether we use streaming or non-streaming APIs for the test set execution. For streaming, the StartConversation Amazon Lex Runtime API is used. For non-streaming, the RecognizeUtterance and RecognizeText Amazon Lex Runtime APIs are used.
" + }, + "testExecutionModality":{ + "shape":"TestExecutionModality", + "documentation":"Indicates whether audio or text is used.
" + } + } + }, + "StartTestExecutionResponse":{ + "type":"structure", + "members":{ + "testExecutionId":{ + "shape":"Id", + "documentation":"The unique identifier of the test set execution.
" + }, + "creationDateTime":{ + "shape":"Timestamp", + "documentation":"The creation date and time for the test set execution.
" + }, + "testSetId":{ + "shape":"Id", + "documentation":"The test set Id for the test set execution.
" + }, + "target":{ + "shape":"TestExecutionTarget", + "documentation":"The target bot for the test set execution.
" + }, + "apiMode":{ + "shape":"TestExecutionApiMode", + "documentation":"Indicates whether we use streaming or non-streaming APIs for the test set execution. For streaming, the StartConversation Amazon Lex Runtime API is used. For non-streaming, the RecognizeUtterance and RecognizeText Amazon Lex Runtime APIs are used.
" + }, + "testExecutionModality":{ + "shape":"TestExecutionModality", + "documentation":"Indicates whether audio or text is used.
" + } + } + }, + "StartTestSetGenerationRequest":{ + "type":"structure", + "required":[ + "testSetName", + "storageLocation", + "generationDataSource", + "roleArn" + ], + "members":{ + "testSetName":{ + "shape":"Name", + "documentation":"The test set name for the test set generation request.
" + }, + "description":{ + "shape":"Description", + "documentation":"The test set description for the test set generation request.
" + }, + "storageLocation":{ + "shape":"TestSetStorageLocation", + "documentation":"The Amazon S3 storage location for the test set generation.
" + }, + "generationDataSource":{ + "shape":"TestSetGenerationDataSource", + "documentation":"The data source for the test set generation.
" + }, + "roleArn":{ + "shape":"RoleArn", + "documentation":"The roleARN used for any operation in the test set to access resources in the Amazon Web Services account.
" + }, + "testSetTags":{ + "shape":"TagMap", + "documentation":"A list of tags to add to the test set. You can only add tags when you import/generate a new test set. You can't use the UpdateTestSet operation to update tags. To update tags, use the TagResource operation.
The unique identifier of the test set generation to describe.
" + }, + "creationDateTime":{ + "shape":"Timestamp", + "documentation":"The creation date and time for the test set generation.
" + }, + "testSetGenerationStatus":{ + "shape":"TestSetGenerationStatus", + "documentation":"The status for the test set generation.
" + }, + "testSetName":{ + "shape":"Name", + "documentation":"The test set name used for the test set generation.
" + }, + "description":{ + "shape":"Description", + "documentation":"The description used for the test set generation.
" + }, + "storageLocation":{ + "shape":"TestSetStorageLocation", + "documentation":"The Amazon S3 storage location for the test set generation.
" + }, + "generationDataSource":{ + "shape":"TestSetGenerationDataSource", + "documentation":"The data source for the test set generation.
" + }, + "roleArn":{ + "shape":"RoleArn", + "documentation":"The roleARN used for any operation in the test set to access resources in the Amazon Web Services account.
" + }, + "testSetTags":{ + "shape":"TagMap", + "documentation":"A list of tags that was used for the test set that is being generated.
" + } + } + }, + "StillWaitingResponseFrequency":{ "type":"integer", "max":300, "min":1 @@ -8533,105 +9918,653 @@ "documentation":"Specifications for the constituent sub slots of a composite slot.
" } }, - "documentation":"Specifications for the constituent sub slots and the expression for the composite slot.
" + "documentation":"Specifications for the constituent sub slots and the expression for the composite slot.
" + }, + "SubSlotSpecificationMap":{ + "type":"map", + "key":{"shape":"Name"}, + "value":{"shape":"Specifications"}, + "max":6, + "min":0 + }, + "SubSlotTypeComposition":{ + "type":"structure", + "required":[ + "name", + "slotTypeId" + ], + "members":{ + "name":{ + "shape":"Name", + "documentation":"Name of a constituent sub slot inside a composite slot.
" + }, + "slotTypeId":{ + "shape":"BuiltInOrCustomSlotTypeId", + "documentation":"The unique identifier assigned to a slot type. This refers to either a built-in slot type or the unique slotTypeId of a custom slot type.
" + } + }, + "documentation":"Subslot type composition.
" + }, + "SubSlotTypeList":{ + "type":"list", + "member":{"shape":"SubSlotTypeComposition"}, + "max":6, + "min":0 + }, + "SubSlotValueElicitationSetting":{ + "type":"structure", + "required":["promptSpecification"], + "members":{ + "defaultValueSpecification":{"shape":"SlotDefaultValueSpecification"}, + "promptSpecification":{"shape":"PromptSpecification"}, + "sampleUtterances":{ + "shape":"SampleUtterancesList", + "documentation":"If you know a specific pattern with which users might respond to an Amazon Lex request for a sub slot value, you can provide those utterances to improve accuracy. This is optional. In most cases, Amazon Lex is capable of understanding user utterances. This is similar to SampleUtterances for slots.
Subslot elicitation settings.
DefaultValueSpecification is a list of default values for a constituent sub slot in a composite slot. Default values are used when Amazon Lex hasn't determined a value for a slot. You can specify default values from context variables, session attributes, and defined values. This is similar to DefaultValueSpecification for slots.
PromptSpecification is the prompt that Amazon Lex uses to elicit the sub slot value from the user. This is similar to PromptSpecification for slots.
The Amazon Resource Name (ARN) of the bot, bot alias, or bot channel to tag.
", + "location":"uri", + "locationName":"resourceARN" + }, + "tags":{ + "shape":"TagMap", + "documentation":"A list of tag keys to add to the resource. If a tag key already exists, the existing value is replaced with the new value.
" + } + } + }, + "TagResourceResponse":{ + "type":"structure", + "members":{ + } + }, + "TagValue":{ + "type":"string", + "max":256, + "min":0 + }, + "TestExecutionApiMode":{ + "type":"string", + "enum":[ + "Streaming", + "NonStreaming" + ] + }, + "TestExecutionModality":{ + "type":"string", + "enum":[ + "Text", + "Audio" + ] + }, + "TestExecutionResultFilterBy":{ + "type":"structure", + "required":["resultTypeFilter"], + "members":{ + "resultTypeFilter":{ + "shape":"TestResultTypeFilter", + "documentation":"Specifies which results to filter. See Test result details\">Test results details for details about different types of results.
" + }, + "conversationLevelTestResultsFilterBy":{ + "shape":"ConversationLevelTestResultsFilterBy", + "documentation":"Contains information about the method for filtering Conversation level test results.
" + } + }, + "documentation":"Contains information about the method by which to filter the results of the test execution.
" + }, + "TestExecutionResultItems":{ + "type":"structure", + "members":{ + "overallTestResults":{ + "shape":"OverallTestResults", + "documentation":"Overall results for the test execution, including the breakdown of conversations and single-input utterances.
" + }, + "conversationLevelTestResults":{ + "shape":"ConversationLevelTestResults", + "documentation":"Results related to conversations in the test set, including metrics about success and failure of conversations and intent and slot failures.
" + }, + "intentClassificationTestResults":{ + "shape":"IntentClassificationTestResults", + "documentation":"Intent recognition results aggregated by intent name. The aggregated results contain success and failure rates of intent recognition, speech transcriptions, and end-to-end conversations.
" + }, + "intentLevelSlotResolutionTestResults":{ + "shape":"IntentLevelSlotResolutionTestResults", + "documentation":"Slot resolution results aggregated by intent and slot name. The aggregated results contain success and failure rates of slot resolution, speech transcriptions, and end-to-end conversations
" + }, + "utteranceLevelTestResults":{ + "shape":"UtteranceLevelTestResults", + "documentation":"Results related to utterances in the test set.
" + } + }, + "documentation":"Contains the results of the test execution, grouped by type of results. See Test result details\">Test results details for details about different types of results.
" + }, + "TestExecutionSortAttribute":{ + "type":"string", + "enum":[ + "TestSetName", + "CreationDateTime" + ] + }, + "TestExecutionSortBy":{ + "type":"structure", + "required":[ + "attribute", + "order" + ], + "members":{ + "attribute":{ + "shape":"TestExecutionSortAttribute", + "documentation":"Specifies whether to sort the test set executions by the date and time at which the test sets were created.
" + }, + "order":{ + "shape":"SortOrder", + "documentation":"Specifies whether to sort in ascending or descending order.
" + } + }, + "documentation":"Contains information about the method by which to sort the instances of test executions you have carried out.
" + }, + "TestExecutionStatus":{ + "type":"string", + "enum":[ + "Pending", + "Waiting", + "InProgress", + "Completed", + "Failed", + "Stopping", + "Stopped" + ] + }, + "TestExecutionSummary":{ + "type":"structure", + "members":{ + "testExecutionId":{ + "shape":"Id", + "documentation":"The unique identifier of the test execution.
" + }, + "creationDateTime":{ + "shape":"Timestamp", + "documentation":"The date and time at which the test execution was created.
" + }, + "lastUpdatedDateTime":{ + "shape":"Timestamp", + "documentation":"The date and time at which the test execution was last updated.
" + }, + "testExecutionStatus":{ + "shape":"TestExecutionStatus", + "documentation":"The current status of the test execution.
" + }, + "testSetId":{ + "shape":"Id", + "documentation":"The unique identifier of the test set used in the test execution.
" + }, + "testSetName":{ + "shape":"Name", + "documentation":"The name of the test set used in the test execution.
" + }, + "target":{ + "shape":"TestExecutionTarget", + "documentation":"Contains information about the bot used for the test execution..
" + }, + "apiMode":{ + "shape":"TestExecutionApiMode", + "documentation":"Specifies whether the API mode for the test execution is streaming or non-streaming.
" + }, + "testExecutionModality":{ + "shape":"TestExecutionModality", + "documentation":"Specifies whether the data used for the test execution is written or spoken.
" + } + }, + "documentation":"Summarizes metadata about the test execution.
" + }, + "TestExecutionSummaryList":{ + "type":"list", + "member":{"shape":"TestExecutionSummary"} + }, + "TestExecutionTarget":{ + "type":"structure", + "members":{ + "botAliasTarget":{ + "shape":"BotAliasTestExecutionTarget", + "documentation":"Contains information about the bot alias used for the test execution.
" + } + }, + "documentation":"Contains information about the bot used for the test execution.
" + }, + "TestResultMatchStatus":{ + "type":"string", + "enum":[ + "Matched", + "Mismatched", + "ExecutionError" + ] + }, + "TestResultMatchStatusCountMap":{ + "type":"map", + "key":{"shape":"TestResultMatchStatus"}, + "value":{"shape":"Count"} + }, + "TestResultSlotName":{ + "type":"string", + "max":100, + "min":1, + "pattern":"^([0-9a-zA-Z][_.-]?)+$" + }, + "TestResultTypeFilter":{ + "type":"string", + "enum":[ + "OverallTestResults", + "ConversationLevelTestResults", + "IntentClassificationTestResults", + "SlotResolutionTestResults", + "UtteranceLevelResults" + ] + }, + "TestSetAgentPrompt":{ + "type":"string", + "max":1024, + "min":1 + }, + "TestSetConversationId":{ + "type":"string", + "max":50, + "min":1, + "pattern":"^([0-9a-zA-Z][_-]?)+$" + }, + "TestSetDiscrepancyErrors":{ + "type":"structure", + "required":[ + "intentDiscrepancies", + "slotDiscrepancies" + ], + "members":{ + "intentDiscrepancies":{ + "shape":"TestSetIntentDiscrepancyList", + "documentation":"Contains information about discrepancies found for intents between the test set and the bot.
" + }, + "slotDiscrepancies":{ + "shape":"TestSetSlotDiscrepancyList", + "documentation":"Contains information about discrepancies found for slots between the test set and the bot.
" + } + }, + "documentation":"Contains details about the errors in the test set discrepancy report
" + }, + "TestSetDiscrepancyReportBotAliasTarget":{ + "type":"structure", + "required":[ + "botId", + "botAliasId", + "localeId" + ], + "members":{ + "botId":{ + "shape":"Id", + "documentation":"The unique identifier for the bot alias.
" + }, + "botAliasId":{ + "shape":"BotAliasId", + "documentation":"The unique identifier for the bot associated with the bot alias.
" + }, + "localeId":{ + "shape":"LocaleId", + "documentation":"The unique identifier of the locale associated with the bot alias.
" + } + }, + "documentation":"Contains information about the bot alias used for the test set discrepancy report.
" + }, + "TestSetDiscrepancyReportResourceTarget":{ + "type":"structure", + "members":{ + "botAliasTarget":{ + "shape":"TestSetDiscrepancyReportBotAliasTarget", + "documentation":"Contains information about the bot alias used as the resource for the test set discrepancy report.
" + } + }, + "documentation":"Contains information about the resource used for the test set discrepancy report.
" + }, + "TestSetDiscrepancyReportStatus":{ + "type":"string", + "enum":[ + "InProgress", + "Completed", + "Failed" + ] + }, + "TestSetExportSpecification":{ + "type":"structure", + "required":["testSetId"], + "members":{ + "testSetId":{ + "shape":"Id", + "documentation":"The unique identifier of the test set.
" + } + }, + "documentation":"Contains information about the test set that is exported.
" + }, + "TestSetGenerationDataSource":{ + "type":"structure", + "members":{ + "conversationLogsDataSource":{ + "shape":"ConversationLogsDataSource", + "documentation":"Contains information about the bot from which the conversation logs are sourced.
" + } + }, + "documentation":"Contains information about the data source from which the test set is generated.
" + }, + "TestSetGenerationStatus":{ + "type":"string", + "enum":[ + "Generating", + "Ready", + "Failed", + "Pending" + ] + }, + "TestSetImportInputLocation":{ + "type":"structure", + "required":[ + "s3BucketName", + "s3Path" + ], + "members":{ + "s3BucketName":{ + "shape":"S3BucketName", + "documentation":"The name of the Amazon S3 bucket.
" + }, + "s3Path":{ + "shape":"S3ObjectPath", + "documentation":"The path inside the Amazon S3 bucket pointing to the test-set CSV file.
" + } + }, + "documentation":"Contains information about the Amazon S3 location from which the test set is imported.
" + }, + "TestSetImportResourceSpecification":{ + "type":"structure", + "required":[ + "testSetName", + "roleArn", + "storageLocation", + "importInputLocation", + "modality" + ], + "members":{ + "testSetName":{ + "shape":"Name", + "documentation":"The name of the test set.
" + }, + "description":{ + "shape":"Description", + "documentation":"The description of the test set.
" + }, + "roleArn":{ + "shape":"RoleArn", + "documentation":"The Amazon Resource Name (ARN) of an IAM role that has permission to access the test set.
" + }, + "storageLocation":{ + "shape":"TestSetStorageLocation", + "documentation":"Contains information about the location that Amazon Lex uses to store the test-set.
" + }, + "importInputLocation":{ + "shape":"TestSetImportInputLocation", + "documentation":"Contains information about the input location from where test-set should be imported.
" + }, + "modality":{ + "shape":"TestSetModality", + "documentation":"Specifies whether the test-set being imported contains written or spoken data.
" + }, + "testSetTags":{ + "shape":"TagMap", + "documentation":"A list of tags to add to the test set. You can only add tags when you import/generate a new test set. You can't use the UpdateTestSet operation to update tags. To update tags, use the TagResource operation.
Contains information about the test set that is imported.
" + }, + "TestSetIntentDiscrepancyItem":{ + "type":"structure", + "required":[ + "intentName", + "errorMessage" + ], + "members":{ + "intentName":{ + "shape":"Name", + "documentation":"The name of the intent in the discrepancy report.
" + }, + "errorMessage":{ + "shape":"String", + "documentation":"The error message for a discrepancy for an intent between the test set and the bot.
" + } + }, + "documentation":"Contains information about discrepancy in an intent information between the test set and the bot.
" + }, + "TestSetIntentDiscrepancyList":{ + "type":"list", + "member":{"shape":"TestSetIntentDiscrepancyItem"} + }, + "TestSetModality":{ + "type":"string", + "enum":[ + "Text", + "Audio" + ] + }, + "TestSetSlotDiscrepancyItem":{ + "type":"structure", + "required":[ + "intentName", + "slotName", + "errorMessage" + ], + "members":{ + "intentName":{ + "shape":"Name", + "documentation":"The name of the intent associated with the slot in the discrepancy report.
" + }, + "slotName":{ + "shape":"Name", + "documentation":"The name of the slot in the discrepancy report.
" + }, + "errorMessage":{ + "shape":"String", + "documentation":"The error message for a discrepancy for an intent between the test set and the bot.
" + } + }, + "documentation":"Contains information about discrepancy in a slot information between the test set and the bot.
" + }, + "TestSetSlotDiscrepancyList":{ + "type":"list", + "member":{"shape":"TestSetSlotDiscrepancyItem"} + }, + "TestSetSortAttribute":{ + "type":"string", + "enum":[ + "TestSetName", + "LastUpdatedDateTime" + ] + }, + "TestSetSortBy":{ + "type":"structure", + "required":[ + "attribute", + "order" + ], + "members":{ + "attribute":{ + "shape":"TestSetSortAttribute", + "documentation":"Specifies whether to sort the test sets by name or by the time they were last updated.
" + }, + "order":{ + "shape":"SortOrder", + "documentation":"Specifies whether to sort in ascending or descending order.
" + } + }, + "documentation":"Contains information about the methods by which to sort the test set.
" }, - "SubSlotSpecificationMap":{ - "type":"map", - "key":{"shape":"Name"}, - "value":{"shape":"Specifications"}, - "max":6, - "min":0 + "TestSetStatus":{ + "type":"string", + "enum":[ + "Importing", + "PendingAnnotation", + "Deleting", + "ValidationError", + "Ready" + ] }, - "SubSlotTypeComposition":{ + "TestSetStorageLocation":{ "type":"structure", "required":[ - "name", - "slotTypeId" + "s3BucketName", + "s3Path" ], "members":{ - "name":{ - "shape":"Name", - "documentation":"Name of a constituent sub slot inside a composite slot.
" + "s3BucketName":{ + "shape":"S3BucketName", + "documentation":"The name of the Amazon S3 bucket in which the test set is stored.
" }, - "slotTypeId":{ - "shape":"BuiltInOrCustomSlotTypeId", - "documentation":"The unique identifier assigned to a slot type. This refers to either a built-in slot type or the unique slotTypeId of a custom slot type.
" + "s3Path":{ + "shape":"S3ObjectPath", + "documentation":"The path inside the Amazon S3 bucket where the test set is stored.
" + }, + "kmsKeyArn":{ + "shape":"KmsKeyArn", + "documentation":"The Amazon Resource Name (ARN) of an Amazon Web Services Key Management Service (KMS) key for encrypting the test set.
" } }, - "documentation":"Subslot type composition.
" - }, - "SubSlotTypeList":{ - "type":"list", - "member":{"shape":"SubSlotTypeComposition"}, - "max":6, - "min":0 + "documentation":"Contains information about the location in which the test set is stored.
" }, - "SubSlotValueElicitationSetting":{ + "TestSetSummary":{ "type":"structure", - "required":["promptSpecification"], "members":{ - "defaultValueSpecification":{"shape":"SlotDefaultValueSpecification"}, - "promptSpecification":{"shape":"PromptSpecification"}, - "sampleUtterances":{ - "shape":"SampleUtterancesList", - "documentation":"If you know a specific pattern that users might respond to an Amazon Lex request for a sub slot value, you can provide those utterances to improve accuracy. This is optional. In most cases Amazon Lex is capable of understanding user utterances. This is similar to SampleUtterances for slots.
The unique identifier of the test set.
" }, - "waitAndContinueSpecification":{"shape":"WaitAndContinueSpecification"} + "testSetName":{ + "shape":"Name", + "documentation":"The name of the test set.
" + }, + "description":{ + "shape":"Description", + "documentation":"The description of the test set.
" + }, + "modality":{ + "shape":"TestSetModality", + "documentation":"Specifies whether the test set contains written or spoken data.
" + }, + "status":{ + "shape":"TestSetStatus", + "documentation":"The status of the test set.
" + }, + "roleArn":{ + "shape":"RoleArn", + "documentation":"The Amazon Resource Name (ARN) of an IAM role that has permission to access the test set.
" + }, + "numTurns":{ + "shape":"Count", + "documentation":"The number of turns in the test set.
" + }, + "storageLocation":{ + "shape":"TestSetStorageLocation", + "documentation":"Contains information about the location at which the test set is stored.
" + }, + "creationDateTime":{ + "shape":"Timestamp", + "documentation":"The date and time at which the test set was created.
" + }, + "lastUpdatedDateTime":{ + "shape":"Timestamp", + "documentation":"The date and time at which the test set was last updated.
" + } }, - "documentation":"Subslot elicitation settings.
DefaultValueSpecification is a list of default values for a constituent sub slot in a composite slot. Default values are used when Amazon Lex hasn't determined a value for a slot. You can specify default values from context variables, session attributes, and defined values. This is similar to DefaultValueSpecification for slots.
PromptSpecification is the prompt that Amazon Lex uses to elicit the sub slot value from the user. This is similar to PromptSpecification for slots.
Contains summary information about the test set.
" }, - "TagKey":{ - "type":"string", - "max":128, - "min":1 - }, - "TagKeyList":{ + "TestSetSummaryList":{ "type":"list", - "member":{"shape":"TagKey"}, - "max":200, - "min":0 - }, - "TagMap":{ - "type":"map", - "key":{"shape":"TagKey"}, - "value":{"shape":"TagValue"}, - "max":200, - "min":0 + "member":{"shape":"TestSetSummary"} }, - "TagResourceRequest":{ + "TestSetTurnRecord":{ "type":"structure", "required":[ - "resourceARN", - "tags" + "recordNumber", + "turnSpecification" ], "members":{ - "resourceARN":{ - "shape":"AmazonResourceName", - "documentation":"The Amazon Resource Name (ARN) of the bot, bot alias, or bot channel to tag.
", - "location":"uri", - "locationName":"resourceARN" + "recordNumber":{ + "shape":"RecordNumber", + "documentation":"The record number associated with the turn.
" }, - "tags":{ - "shape":"TagMap", - "documentation":"A list of tag keys to add to the resource. If a tag key already exists, the existing value is replaced with the new value.
" + "conversationId":{ + "shape":"TestSetConversationId", + "documentation":"The unique identifier for the conversation associated with the turn.
" + }, + "turnNumber":{ + "shape":"TurnNumber", + "documentation":"The number of turns that has elapsed up to that turn.
" + }, + "turnSpecification":{ + "shape":"TurnSpecification", + "documentation":"Contains information about the agent or user turn depending upon type of turn.
" } - } + }, + "documentation":"Contains information about a turn in a test set.
" }, - "TagResourceResponse":{ + "TestSetTurnRecordList":{ + "type":"list", + "member":{"shape":"TestSetTurnRecord"} + }, + "TestSetTurnResult":{ "type":"structure", "members":{ - } + "agent":{ + "shape":"AgentTurnResult", + "documentation":"Contains information about the agent messages in the turn.
" + }, + "user":{ + "shape":"UserTurnResult", + "documentation":"Contains information about the user messages in the turn.
" + } + }, + "documentation":"Contains information about the results of the analysis of a turn in the test set.
" }, - "TagValue":{ + "TestSetUtteranceText":{ "type":"string", - "max":256, - "min":0 + "max":1024, + "min":1 }, "TextInputSpecification":{ "type":"structure", @@ -8739,6 +10672,25 @@ }, "documentation":"Indicates the setting of the location where the transcript is stored.
" }, + "TurnNumber":{ + "type":"integer", + "max":30, + "min":0 + }, + "TurnSpecification":{ + "type":"structure", + "members":{ + "agentTurn":{ + "shape":"AgentTurnSpecification", + "documentation":"Contains information about the agent messages in the turn.
" + }, + "userTurn":{ + "shape":"UserTurnSpecification", + "documentation":"Contains information about the user messages in the turn.
" + } + }, + "documentation":"Contains information about the messages in the turn.
" + }, "UntagResourceRequest":{ "type":"structure", "required":[ @@ -9259,7 +11211,7 @@ }, "initialResponseSetting":{ "shape":"InitialResponseSetting", - "documentation":"" + "documentation":"Configuration settings for a response sent to the user before Amazon Lex starts eliciting slots.
" } } }, @@ -9340,7 +11292,7 @@ }, "initialResponseSetting":{ "shape":"InitialResponseSetting", - "documentation":"" + "documentation":"Configuration settings for a response sent to the user before Amazon Lex starts eliciting slots.
" } } }, @@ -9487,7 +11439,7 @@ }, "botVersion":{ "shape":"DraftBotVersion", - "documentation":"The identifier of the slot version that contains the slot. Will always be DRAFT.
The version of the bot that contains the slot. Will always be DRAFT.
The unique identifier of the test set for which the update operation is to be performed.
", + "location":"uri", + "locationName":"testSetId" + }, + "testSetName":{ + "shape":"Name", + "documentation":"The new test set name.
" + }, + "description":{ + "shape":"Description", + "documentation":"The new test set description.
" + } + } + }, + "UpdateTestSetResponse":{ + "type":"structure", + "members":{ + "testSetId":{ + "shape":"Id", + "documentation":"The test set Id for which update test operation to be performed.
" + }, + "testSetName":{ + "shape":"Name", + "documentation":"The test set name for the updated test set.
" + }, + "description":{ + "shape":"Description", + "documentation":"The test set description for the updated test set.
" + }, + "modality":{ + "shape":"TestSetModality", + "documentation":"Indicates whether audio or text is used for the updated test set.
" + }, + "status":{ + "shape":"TestSetStatus", + "documentation":"The status for the updated test set.
" + }, + "roleArn":{ + "shape":"RoleArn", + "documentation":"The roleARN used for any operation in the test set to access resources in the Amazon Web Services account.
" + }, + "numTurns":{ + "shape":"Count", + "documentation":"The number of conversation turns from the updated test set.
" + }, + "storageLocation":{ + "shape":"TestSetStorageLocation", + "documentation":"The Amazon S3 storage location for the updated test set.
" + }, + "creationDateTime":{ + "shape":"Timestamp", + "documentation":"The creation date and time for the updated test set.
" + }, + "lastUpdatedDateTime":{ + "shape":"Timestamp", + "documentation":"The date and time of the last update for the updated test set.
" + } + } + }, + "UserTurnInputSpecification":{ + "type":"structure", + "required":["utteranceInput"], + "members":{ + "utteranceInput":{ + "shape":"UtteranceInputSpecification", + "documentation":"The utterance input in the user turn.
" + }, + "requestAttributes":{ + "shape":"StringMap", + "documentation":"Request attributes of the user turn.
" + }, + "sessionState":{ + "shape":"InputSessionStateSpecification", + "documentation":"Contains information about the session state in the input.
" + } + }, + "documentation":"Contains information about the user messages in the turn in the input.
" + }, + "UserTurnIntentOutput":{ + "type":"structure", + "required":["name"], + "members":{ + "name":{ + "shape":"Name", + "documentation":"The name of the intent.
" + }, + "slots":{ + "shape":"UserTurnSlotOutputMap", + "documentation":"The slots associated with the intent.
" + } + }, + "documentation":"Contains information about the intent that is output for the turn by the test execution.
" + }, + "UserTurnOutputSpecification":{ + "type":"structure", + "required":["intent"], + "members":{ + "intent":{ + "shape":"UserTurnIntentOutput", + "documentation":"Contains information about the intent.
" + }, + "activeContexts":{ + "shape":"ActiveContextList", + "documentation":"The contexts that are active in the turn.
" + }, + "transcript":{ + "shape":"TestSetUtteranceText", + "documentation":"The transcript that is output for the user turn by the test execution.
" + } + }, + "documentation":"Contains results that are output for the user turn by the test execution.
" + }, + "UserTurnResult":{ + "type":"structure", + "required":[ + "input", + "expectedOutput" + ], + "members":{ + "input":{ + "shape":"UserTurnInputSpecification", + "documentation":"Contains information about the user messages in the turn in the input.
" + }, + "expectedOutput":{ + "shape":"UserTurnOutputSpecification", + "documentation":"Contains information about the expected output for the user turn.
" + }, + "actualOutput":{ + "shape":"UserTurnOutputSpecification", + "documentation":"Contains information about the actual output for the user turn.
" + }, + "errorDetails":{"shape":"ExecutionErrorDetails"}, + "endToEndResult":{ + "shape":"TestResultMatchStatus", + "documentation":"Specifies whether the expected and actual outputs match or not, or if there is an error in execution.
" + }, + "intentMatchResult":{ + "shape":"TestResultMatchStatus", + "documentation":"Specifies whether the expected and actual intents match or not.
" + }, + "slotMatchResult":{ + "shape":"TestResultMatchStatus", + "documentation":"Specifies whether the expected and actual slots match or not.
" + }, + "speechTranscriptionResult":{ + "shape":"TestResultMatchStatus", + "documentation":"Specifies whether the expected and actual speech transcriptions match or not, or if there is an error in execution.
" + }, + "conversationLevelResult":{ + "shape":"ConversationLevelResultDetail", + "documentation":"Contains information about the results related to the conversation associated with the user turn.
" + } + }, + "documentation":"Contains the results for the user turn by the test execution.
" + }, + "UserTurnSlotOutput":{ + "type":"structure", + "members":{ + "value":{ + "shape":"NonEmptyString", + "documentation":"The value output by the slot recognition.
" + }, + "values":{ + "shape":"UserTurnSlotOutputList", + "documentation":"Values that are output by the slot recognition.
" + }, + "subSlots":{ + "shape":"UserTurnSlotOutputMap", + "documentation":"A list of items mapping the name of the subslots to information about those subslots.
" + } + }, + "documentation":"Contains information about a slot output by the test set execution.
" + }, + "UserTurnSlotOutputList":{ + "type":"list", + "member":{"shape":"UserTurnSlotOutput"} + }, + "UserTurnSlotOutputMap":{ + "type":"map", + "key":{"shape":"Name"}, + "value":{"shape":"UserTurnSlotOutput"} + }, + "UserTurnSpecification":{ + "type":"structure", + "required":[ + "input", + "expected" + ], + "members":{ + "input":{ + "shape":"UserTurnInputSpecification", + "documentation":"Contains information about the user messages in the turn in the input.
" + }, + "expected":{ + "shape":"UserTurnOutputSpecification", + "documentation":"Contains results about the expected output for the user turn.
" + } + }, + "documentation":"Contains information about the expected and input values for the user turn.
" + }, "Utterance":{"type":"string"}, "UtteranceAggregationDuration":{ "type":"structure", @@ -9642,6 +11803,68 @@ }, "documentation":"Provides parameters for setting the time window and duration for aggregating utterance data.
" }, + "UtteranceAudioInputSpecification":{ + "type":"structure", + "required":["audioFileS3Location"], + "members":{ + "audioFileS3Location":{ + "shape":"AudioFileS3Location", + "documentation":"Amazon S3 file pointing to the audio.
" + } + }, + "documentation":"Contains information about the audio for an utterance.
" + }, + "UtteranceInputSpecification":{ + "type":"structure", + "members":{ + "textInput":{ + "shape":"TestSetUtteranceText", + "documentation":"A text input transcription of the utterance. It is only applicable for test-sets containing text data.
" + }, + "audioInput":{ + "shape":"UtteranceAudioInputSpecification", + "documentation":"Contains information about the audio input for an utterance.
" + } + }, + "documentation":"Contains information about input of an utterance.
" + }, + "UtteranceLevelTestResultItem":{ + "type":"structure", + "required":[ + "recordNumber", + "turnResult" + ], + "members":{ + "recordNumber":{ + "shape":"RecordNumber", + "documentation":"The record number of the result.
" + }, + "conversationId":{ + "shape":"TestSetConversationId", + "documentation":"The unique identifier for the conversation associated with the result.
" + }, + "turnResult":{ + "shape":"TestSetTurnResult", + "documentation":"Contains information about the turn associated with the result.
" + } + }, + "documentation":"Contains information about multiple utterances in the results of a test set execution.
" + }, + "UtteranceLevelTestResultItemList":{ + "type":"list", + "member":{"shape":"UtteranceLevelTestResultItem"} + }, + "UtteranceLevelTestResults":{ + "type":"structure", + "required":["items"], + "members":{ + "items":{ + "shape":"UtteranceLevelTestResultItemList", + "documentation":"Contains information about an utterance in the results of the test set execution.
" + } + }, + "documentation":"Contains information about the utterances in the results of the test set execution.
" + }, "ValidationException":{ "type":"structure", "members":{ From 22798ca8998a3d609a51f9f3572d2d80d6cc989b Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 6 Jun 2023 18:08:10 +0000 Subject: [PATCH 040/317] Amazon EMR Update: This release provides customers the ability to specify an allocation strategies amongst PRICE_CAPACITY_OPTIMIZED, CAPACITY_OPTIMIZED, LOWEST_PRICE, DIVERSIFIED for Spot instances in Instance Feet cluster. This enables customers to choose an allocation strategy best suited for their workload. --- .changes/next-release/feature-AmazonEMR-6060c81.json | 6 ++++++ .../src/main/resources/codegen-resources/service-2.json | 9 +++++++-- 2 files changed, 13 insertions(+), 2 deletions(-) create mode 100644 .changes/next-release/feature-AmazonEMR-6060c81.json diff --git a/.changes/next-release/feature-AmazonEMR-6060c81.json b/.changes/next-release/feature-AmazonEMR-6060c81.json new file mode 100644 index 000000000000..f9796b5b80d3 --- /dev/null +++ b/.changes/next-release/feature-AmazonEMR-6060c81.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon EMR", + "contributor": "", + "description": "This release provides customers the ability to specify an allocation strategies amongst PRICE_CAPACITY_OPTIMIZED, CAPACITY_OPTIMIZED, LOWEST_PRICE, DIVERSIFIED for Spot instances in Instance Feet cluster. This enables customers to choose an allocation strategy best suited for their workload." 
+} diff --git a/services/emr/src/main/resources/codegen-resources/service-2.json b/services/emr/src/main/resources/codegen-resources/service-2.json index 31d179449d33..fe8f1ef2a3e3 100644 --- a/services/emr/src/main/resources/codegen-resources/service-2.json +++ b/services/emr/src/main/resources/codegen-resources/service-2.json @@ -4859,7 +4859,12 @@ }, "SpotProvisioningAllocationStrategy":{ "type":"string", - "enum":["capacity-optimized"] + "enum":[ + "capacity-optimized", + "price-capacity-optimized", + "lowest-price", + "diversified" + ] }, "SpotProvisioningSpecification":{ "type":"structure", @@ -4882,7 +4887,7 @@ }, "AllocationStrategy":{ "shape":"SpotProvisioningAllocationStrategy", - "documentation":"Specifies the strategy to use in launching Spot Instance fleets. Currently, the only option is capacity-optimized (the default), which launches instances from Spot Instance pools with optimal capacity for the number of instances that are launching.
" + "documentation":"Specifies one of the following strategies to launch Spot Instance fleets: price-capacity-optimized, capacity-optimized, lowest-price, or diversified. For more information on the provisioning strategies, see Allocation strategies for Spot Instances in the Amazon EC2 User Guide for Linux Instances.
When you launch a Spot Instance fleet with the old console, it automatically launches with the capacity-optimized strategy. You can't change the allocation strategy from the old console.
The launch specification for Spot Instances in the instance fleet, which determines the defined duration, provisioning timeout behavior, and allocation strategy.
The instance fleet configuration is available only in Amazon EMR releases 4.8.0 and later, excluding 5.0.x versions. Spot Instance allocation strategy is available in Amazon EMR releases 5.12.1 and later.
Spot Instances with a defined duration (also known as Spot blocks) are no longer available to new customers from July 1, 2021. For customers who have previously used the feature, we will continue to support Spot Instances with a defined duration until December 31, 2022.
Returns information about a specific code signing job. You specify the job by using the jobId value that is returned by the StartSigningJob operation.
Retrieves the revocation status of one or more of the signing profile, signing job, and signing certificate.
", + "endpoint":{"hostPrefix":"verification."} + }, "GetSigningPlatform":{ "name":"GetSigningPlatform", "http":{ @@ -190,7 +207,7 @@ {"shape":"TooManyRequestsException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Creates a signing profile. A signing profile is a code signing template that can be used to carry out a pre-defined signing job. For more information, see http://docs.aws.amazon.com/signer/latest/developerguide/gs-profile.html
" + "documentation":"Creates a signing profile. A signing profile is a code signing template that can be used to carry out a pre-defined signing job.
" }, "RemoveProfilePermission":{ "name":"RemoveProfilePermission", @@ -242,6 +259,23 @@ ], "documentation":"Changes the state of a signing profile to REVOKED. This indicates that signatures generated using the signing profile after an effective start date are no longer valid.
" }, + "SignPayload":{ + "name":"SignPayload", + "http":{ + "method":"POST", + "requestUri":"/signing-jobs/with-payload" + }, + "input":{"shape":"SignPayloadRequest"}, + "output":{"shape":"SignPayloadResponse"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InternalServiceErrorException"} + ], + "documentation":"Signs a binary payload and returns a signature envelope.
" + }, "StartSigningJob":{ "name":"StartSigningJob", "http":{ @@ -258,7 +292,7 @@ {"shape":"TooManyRequestsException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Initiates a signing job to be performed on the code provided. Signing jobs are viewable by the ListSigningJobs operation for two years after they are performed. Note the following requirements:
You must create an Amazon S3 source bucket. For more information, see Create a Bucket in the Amazon S3 Getting Started Guide.
Your S3 source bucket must be version enabled.
You must create an S3 destination bucket. Code signing uses your S3 destination bucket to write your signed code.
You specify the name of the source and destination buckets when calling the StartSigningJob operation.
You must also specify a request token that identifies your request to code signing.
You can call the DescribeSigningJob and the ListSigningJobs actions after you call StartSigningJob.
For a Java example that shows how to use this action, see http://docs.aws.amazon.com/acm/latest/userguide/
" + "documentation":"Initiates a signing job to be performed on the code provided. Signing jobs are viewable by the ListSigningJobs operation for two years after they are performed. Note the following requirements:
You must create an Amazon S3 source bucket. For more information, see Creating a Bucket in the Amazon S3 Getting Started Guide.
Your S3 source bucket must be version enabled.
You must create an S3 destination bucket. Code signing uses your S3 destination bucket to write your signed code.
You specify the name of the source and destination buckets when calling the StartSigningJob operation.
You must also specify a request token that identifies your request to code signing.
You can call the DescribeSigningJob and the ListSigningJobs actions after you call StartSigningJob.
For a Java example that shows how to use this action, see StartSigningJob.
" }, "TagResource":{ "name":"TagResource", @@ -371,6 +405,7 @@ "error":{"httpStatusCode":400}, "exception":true }, + "Blob":{"type":"blob"}, "BucketName":{"type":"string"}, "CancelSigningProfileRequest":{ "type":"structure", @@ -389,6 +424,10 @@ "enum":["AWSIoT"] }, "CertificateArn":{"type":"string"}, + "CertificateHashes":{ + "type":"list", + "member":{"shape":"String"} + }, "ClientRequestToken":{"type":"string"}, "ConflictException":{ "type":"structure", @@ -535,6 +574,57 @@ }, "ErrorCode":{"type":"string"}, "ErrorMessage":{"type":"string"}, + "GetRevocationStatusRequest":{ + "type":"structure", + "required":[ + "signatureTimestamp", + "platformId", + "profileVersionArn", + "jobArn", + "certificateHashes" + ], + "members":{ + "signatureTimestamp":{ + "shape":"Timestamp", + "documentation":"The timestamp of the signature that validates the profile or job.
", + "location":"querystring", + "locationName":"signatureTimestamp" + }, + "platformId":{ + "shape":"PlatformId", + "documentation":"The ID of a signing platform.
", + "location":"querystring", + "locationName":"platformId" + }, + "profileVersionArn":{ + "shape":"Arn", + "documentation":"The version of a signing profile.
", + "location":"querystring", + "locationName":"profileVersionArn" + }, + "jobArn":{ + "shape":"Arn", + "documentation":"The ARN of a signing job.
", + "location":"querystring", + "locationName":"jobArn" + }, + "certificateHashes":{ + "shape":"CertificateHashes", + "documentation":"A list of composite signed hashes that identify certificates.
A certificate identifier consists of a subject certificate TBS hash (signed by the parent CA) combined with a parent CA TBS hash (signed by the parent CA’s CA). Root certificates are defined as their own CA.
", + "location":"querystring", + "locationName":"certificateHashes" + } + } + }, + "GetRevocationStatusResponse":{ + "type":"structure", + "members":{ + "revokedEntities":{ + "shape":"RevokedEntities", + "documentation":"A list of revoked entities (including one or more of the signing profile ARN, signing job ID, and certificate hash) supplied as input to the API.
" + } + } + }, "GetSigningPlatformRequest":{ "type":"structure", "required":["platformId"], @@ -950,6 +1040,11 @@ "min":1 }, "MaxSizeInMB":{"type":"integer"}, + "Metadata":{ + "type":"map", + "key":{"shape":"String"}, + "value":{"shape":"String"} + }, "NextToken":{"type":"string"}, "NotFoundException":{ "type":"structure", @@ -961,6 +1056,11 @@ "error":{"httpStatusCode":404}, "exception":true }, + "Payload":{ + "type":"blob", + "max":4096, + "min":1 + }, "Permission":{ "type":"structure", "members":{ @@ -1163,6 +1263,10 @@ } } }, + "RevokedEntities":{ + "type":"list", + "member":{"shape":"String"} + }, "S3Destination":{ "type":"structure", "members":{ @@ -1224,6 +1328,53 @@ "error":{"httpStatusCode":402}, "exception":true }, + "SignPayloadRequest":{ + "type":"structure", + "required":[ + "profileName", + "payload", + "payloadFormat" + ], + "members":{ + "profileName":{ + "shape":"ProfileName", + "documentation":"The name of the signing profile.
" + }, + "profileOwner":{ + "shape":"AccountId", + "documentation":"The AWS account ID of the profile owner.
" + }, + "payload":{ + "shape":"Payload", + "documentation":"Specifies the object digest (hash) to sign.
" + }, + "payloadFormat":{ + "shape":"String", + "documentation":"Payload content type
" + } + } + }, + "SignPayloadResponse":{ + "type":"structure", + "members":{ + "jobId":{ + "shape":"JobId", + "documentation":"Unique identifier of the signing job.
" + }, + "jobOwner":{ + "shape":"AccountId", + "documentation":"The AWS account ID of the job owner.
" + }, + "metadata":{ + "shape":"Metadata", + "documentation":"Information including the signing profile ARN and the signing job ID. Clients use metadata to signature records, for example, as annotations added to the signature manifest inside an OCI registry.
" + }, + "signature":{ + "shape":"Blob", + "documentation":"A cryptographic signature.
" + } + } + }, "SignatureValidityPeriod":{ "type":"structure", "members":{ @@ -1405,7 +1556,7 @@ "members":{ "platformId":{ "shape":"String", - "documentation":"The ID of a code signing; platform.
" + "documentation":"The ID of a code signing platform.
" }, "displayName":{ "shape":"String", @@ -1724,5 +1875,5 @@ "bool":{"type":"boolean"}, "string":{"type":"string"} }, - "documentation":"AWS Signer is a fully managed code signing service to help you ensure the trust and integrity of your code.
AWS Signer supports the following applications:
With code signing for AWS Lambda, you can sign AWS Lambda deployment packages. Integrated support is provided for Amazon S3, Amazon CloudWatch, and AWS CloudTrail. In order to sign code, you create a signing profile and then use Signer to sign Lambda zip files in S3.
With code signing for IoT, you can sign code for any IoT device that is supported by AWS. IoT code signing is available for Amazon FreeRTOS and AWS IoT Device Management, and is integrated with AWS Certificate Manager (ACM). In order to sign code, you import a third-party code signing certificate using ACM, and use that to sign updates in Amazon FreeRTOS and AWS IoT Device Management.
For more information about AWS Signer, see the AWS Signer Developer Guide.
" + "documentation":"AWS Signer is a fully managed code signing service to help you ensure the trust and integrity of your code.
AWS Signer supports the following applications:
With code signing for AWS Lambda, you can sign AWS Lambda deployment packages. Integrated support is provided for Amazon S3, Amazon CloudWatch, and AWS CloudTrail. In order to sign code, you create a signing profile and then use Signer to sign Lambda zip files in S3.
With code signing for IoT, you can sign code for any IoT device that is supported by AWS. IoT code signing is available for Amazon FreeRTOS and AWS IoT Device Management, and is integrated with AWS Certificate Manager (ACM). In order to sign code, you import a third-party code signing certificate using ACM, and use that to sign updates in Amazon FreeRTOS and AWS IoT Device Management.
With code signing for containers …(TBD)
For more information about AWS Signer, see the AWS Signer Developer Guide.
" } From 15f4cb0b00b0b83d194ab2b8acc45f8be4fba7c2 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 6 Jun 2023 18:08:13 +0000 Subject: [PATCH 042/317] Inspector2 Update: Adds new response properties and request parameters for 'last scanned at' on the ListCoverage operation. This feature allows you to search and view the date of which your resources were last scanned by Inspector. --- .../feature-Inspector2-2a0c20b.json | 6 ++++ .../codegen-resources/service-2.json | 30 ++++++++++++++++++- 2 files changed, 35 insertions(+), 1 deletion(-) create mode 100644 .changes/next-release/feature-Inspector2-2a0c20b.json diff --git a/.changes/next-release/feature-Inspector2-2a0c20b.json b/.changes/next-release/feature-Inspector2-2a0c20b.json new file mode 100644 index 000000000000..695100819e54 --- /dev/null +++ b/.changes/next-release/feature-Inspector2-2a0c20b.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Inspector2", + "contributor": "", + "description": "Adds new response properties and request parameters for 'last scanned at' on the ListCoverage operation. This feature allows you to search and view the date of which your resources were last scanned by Inspector." +} diff --git a/services/inspector2/src/main/resources/codegen-resources/service-2.json b/services/inspector2/src/main/resources/codegen-resources/service-2.json index d2c6140d4ca4..0c830d8974f3 100644 --- a/services/inspector2/src/main/resources/codegen-resources/service-2.json +++ b/services/inspector2/src/main/resources/codegen-resources/service-2.json @@ -1506,6 +1506,26 @@ "max":5, "min":1 }, + "CoverageDateFilter":{ + "type":"structure", + "members":{ + "endInclusive":{ + "shape":"DateTimeTimestamp", + "documentation":"A timestamp representing the end of the time period to filter results by.
" + }, + "startInclusive":{ + "shape":"DateTimeTimestamp", + "documentation":"A timestamp representing the start of the time period to filter results by.
" + } + }, + "documentation":"Contains details of a coverage date filter.
" + }, + "CoverageDateFilterList":{ + "type":"list", + "member":{"shape":"CoverageDateFilter"}, + "max":10, + "min":1 + }, "CoverageFilterCriteria":{ "type":"structure", "members":{ @@ -1537,13 +1557,17 @@ "shape":"CoverageMapFilterList", "documentation":"Returns coverage statistics for AWS Lambda functions filtered by tag.
" }, + "lastScannedAt":{ + "shape":"CoverageDateFilterList", + "documentation":"Filters Amazon Web Services resources based on whether Amazon Inspector has checked them for vulnerabilities within the specified time range.
" + }, "resourceId":{ "shape":"CoverageStringFilterList", "documentation":"An array of Amazon Web Services resource IDs to return coverage statistics for.
" }, "resourceType":{ "shape":"CoverageStringFilterList", - "documentation":"An array of Amazon Web Services resource types to return coverage statistics for. The values can be AWS_EC2_INSTANCE or AWS_ECR_REPOSITORY.
An array of Amazon Web Services resource types to return coverage statistics for. The values can be AWS_EC2_INSTANCE, AWS_LAMBDA_FUNCTION or AWS_ECR_REPOSITORY.
The Amazon Web Services account ID of the covered resource.
" }, + "lastScannedAt":{ + "shape":"DateTimeTimestamp", + "documentation":"The date and time the resource was last checked for vulnerabilities.
" + }, "resourceId":{ "shape":"ResourceId", "documentation":"The ID of the covered resource.
" From 39fa90a35dfbf4e09ceddf0a128bc93b38a20253 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 6 Jun 2023 18:08:21 +0000 Subject: [PATCH 043/317] Amazon QuickSight Update: QuickSight support for pivot table field collapse state, radar chart range scale and multiple scope options in conditional formatting. --- .../feature-AmazonQuickSight-0636a39.json | 6 ++ .../codegen-resources/service-2.json | 97 ++++++++++++++++--- 2 files changed, 87 insertions(+), 16 deletions(-) create mode 100644 .changes/next-release/feature-AmazonQuickSight-0636a39.json diff --git a/.changes/next-release/feature-AmazonQuickSight-0636a39.json b/.changes/next-release/feature-AmazonQuickSight-0636a39.json new file mode 100644 index 000000000000..178b7860a464 --- /dev/null +++ b/.changes/next-release/feature-AmazonQuickSight-0636a39.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon QuickSight", + "contributor": "", + "description": "QuickSight support for pivot table field collapse state, radar chart range scale and multiple scope options in conditional formatting." +} diff --git a/services/quicksight/src/main/resources/codegen-resources/service-2.json b/services/quicksight/src/main/resources/codegen-resources/service-2.json index 46a0e17d16ad..704e0bad91f7 100644 --- a/services/quicksight/src/main/resources/codegen-resources/service-2.json +++ b/services/quicksight/src/main/resources/codegen-resources/service-2.json @@ -4011,11 +4011,11 @@ "members":{ "CredentialPair":{ "shape":"AssetBundleImportJobDataSourceCredentialPair", - "documentation":"A username and password credential pair to be used to create the imported data source. Leave this field blank if you are using an Secrets Manager Secret to provide credentials.
" + "documentation":"A username and password credential pair to be used to create the imported data source. Keep this field blank if you are using a Secrets Manager secret to provide credentials.
" }, "SecretArn":{ "shape":"SecretArn", - "documentation":"The ARN of the Secrets Manager Secret to be used to create the imported data source leave this field blank if you aren't using a Secret in place of a credential pair.
" + "documentation":"The ARN of the Secrets Manager secret that's used to create the imported data source. Keep this field blank, unless you are using a secret in place of a credential pair.
" } }, "documentation":"The login credentials to use to import a data source resource.
" @@ -4125,7 +4125,7 @@ }, "StartAfterDateTime":{ "shape":"Timestamp", - "documentation":"An override for the StartAfterDateTime of a RefreshSchedule to ensure that the StartAfterDateTime is set to a time that takes place in the future.
An override for the StartAfterDateTime of a RefreshSchedule. Make sure that the StartAfterDateTime is set to a time that takes place in the future.
A list of overrides for a specific RefreshsSchedule resource that is present in the asset bundle that is imported.
An option to request a CloudFormation variable for a prefix to be prepended to each resource's ID before import. The prefix is only added to the asset IDs and does not change the name of the asset.
" } }, - "documentation":"An optional structure that configures resource ID overrides for the import job
" + "documentation":"An optional structure that configures resource ID overrides for the import job.
" }, "AssetBundleImportJobStatus":{ "type":"string", @@ -4223,15 +4223,15 @@ }, "SubnetIds":{ "shape":"SubnetIdList", - "documentation":"A list of new subnet IDs for the VPC connection you are importing. This field is required if you are importing the VPC connection from another Amazon Web Services account or region.
" + "documentation":"A list of new subnet IDs for the VPC connection you are importing. This field is required if you are importing the VPC connection from another Amazon Web Services account or Region.
" }, "SecurityGroupIds":{ "shape":"SecurityGroupIdList", - "documentation":"A new security group ID for the VPC connection you are importing. This field is required if you are importing the VPC connection from another Amazon Web Services account or region.
" + "documentation":"A new security group ID for the VPC connection you are importing. This field is required if you are importing the VPC connection from another Amazon Web Services account or Region.
" }, "DnsResolvers":{ "shape":"DnsResolverList", - "documentation":"An optional override of dns resolvers to be used by the VPC connection.
" + "documentation":"An optional override of DNS resolvers to be used by the VPC connection.
" }, "RoleArn":{ "shape":"RoleArn", @@ -4251,11 +4251,11 @@ "members":{ "Body":{ "shape":"AssetBundleImportBodyBlob", - "documentation":"The bytes of the Base64 encoded asset bundle import zip file. This file can't exceed 20MB.
If you are calling the APIs from the Amazon Web Services Java, JavaScript, Python, or PHP SDKs, the SDK encodes Base64 automatically to allow the direct setting of the zip file's bytes. If you are using a SDK of a different language or are receiving related errors, try to Base64 encode your data.
" + "documentation":"The bytes of the base64 encoded asset bundle import zip file. This file can't exceed 20 MB.
If you are calling the API operations from the Amazon Web Services SDK for Java, JavaScript, Python, or PHP, the SDK encodes base64 automatically to allow the direct setting of the zip file's bytes. If you are using an SDK for a different language or receiving related errors, try to base64 encode your data.
" }, "S3Uri":{ "shape":"S3Uri", - "documentation":"The Amazon S3 uri for an asset bundle import file that exists in an Amazon S3 bucket that the caller has read access to. The file must be a zip format file and can't exceed 20MB.
" + "documentation":"The Amazon S3 URI for an asset bundle import file that exists in an Amazon S3 bucket that the caller has read access to. The file must be a zip format file and can't exceed 20 MB.
" } }, "documentation":"The source of the asset bundle zip file that contains the data that you want to import.
" @@ -4265,14 +4265,14 @@ "members":{ "Body":{ "shape":"String", - "documentation":"A HTTPS download URL for the provided asset bundle that you optionally provided at the start of the import job. This URL is valid for 5 minutes after issuance. Call DescribeAssetBundleExportJob again for a fresh URL if needed. The downloaded asset bundle is a .qs zip file.
An HTTPS download URL for the provided asset bundle that you optionally provided at the start of the import job. This URL is valid for five minutes after issuance. Call DescribeAssetBundleExportJob again for a fresh URL if needed. The downloaded asset bundle is a .qs zip file.
The Amazon S3 uri that you provided at the start of the import job.
" + "documentation":"The Amazon S3 URI that you provided at the start of the import job.
" } }, - "documentation":"A description of the import source that you provide at the start of an import job. This value is set to either Body or S3Uri depending on how the StartAssetBundleImportJobRequest is configured.
A description of the import source that you provide at the start of an import job. This value is set to either Body or S3Uri, depending on how the StartAssetBundleImportJobRequest is configured.
Indicates tha status of a job through its queueing and execution.
Poll this DescribeAssetBundleExportApi until JobStatus is either SUCCESSFUL or FAILED.
Indicates the status of a job through its queuing and execution.
Poll this DescribeAssetBundleExportApi until JobStatus is either SUCCESSFUL or FAILED.
Indicates tha status of a job through its queueing and execution.
Poll this DescribeAssetBundleImport API until JobStatus returns one of the following values.
SUCCESSFUL
FAILED
FAILED_ROLLBACK_COMPLETED
FAILED_ROLLBACK_ERROR
Indicates the status of a job through its queuing and execution.
Poll the DescribeAssetBundleImport API until JobStatus returns one of the following values:
SUCCESSFUL
FAILED
FAILED_ROLLBACK_COMPLETED
FAILED_ROLLBACK_ERROR
The scope of the cell for conditional formatting.
" + }, + "Scopes":{ + "shape":"PivotTableConditionalFormattingScopeList", + "documentation":"A list of cell scopes for conditional formatting.
" } }, "documentation":"The cell conditional formatting option for a pivot table.
" @@ -20662,6 +20666,11 @@ }, "documentation":"The scope of the cell for conditional formatting.
" }, + "PivotTableConditionalFormattingScopeList":{ + "type":"list", + "member":{"shape":"PivotTableConditionalFormattingScope"}, + "max":3 + }, "PivotTableConditionalFormattingScopeRole":{ "type":"string", "enum":[ @@ -20725,6 +20734,46 @@ "member":{"shape":"DimensionField"}, "max":40 }, + "PivotTableFieldCollapseState":{ + "type":"string", + "enum":[ + "COLLAPSED", + "EXPANDED" + ] + }, + "PivotTableFieldCollapseStateOption":{ + "type":"structure", + "required":["Target"], + "members":{ + "Target":{ + "shape":"PivotTableFieldCollapseStateTarget", + "documentation":"A tagged-union object that sets the collapse state.
" + }, + "State":{ + "shape":"PivotTableFieldCollapseState", + "documentation":"The state of the field target of a pivot table. Choose one of the following options:
COLLAPSED
EXPANDED
The collapse state options for the pivot table field options.
" + }, + "PivotTableFieldCollapseStateOptionList":{ + "type":"list", + "member":{"shape":"PivotTableFieldCollapseStateOption"} + }, + "PivotTableFieldCollapseStateTarget":{ + "type":"structure", + "members":{ + "FieldId":{ + "shape":"String", + "documentation":"The field ID of the pivot table that the collapse state needs to be set to.
" + }, + "FieldDataPathValues":{ + "shape":"DataPathValueList", + "documentation":"The data path of the pivot table's header. Used to set the collapse state.
" + } + }, + "documentation":"The target of a pivot table field collapse state.
" + }, "PivotTableFieldOption":{ "type":"structure", "required":["FieldId"], @@ -20759,6 +20808,10 @@ "DataPathOptions":{ "shape":"PivotTableDataPathOptionList", "documentation":"The data path options for the pivot table field options.
" + }, + "CollapseStateOptions":{ + "shape":"PivotTableFieldCollapseStateOptionList", + "documentation":"The collapse state options for the pivot table field options.
" } }, "documentation":"The field options for a pivot table visual.
" @@ -21258,6 +21311,14 @@ }, "documentation":"The configured style settings of a radar chart.
" }, + "RadarChartAxesRangeScale":{ + "type":"string", + "enum":[ + "AUTO", + "INDEPENDENT", + "SHARED" + ] + }, "RadarChartCategoryFieldList":{ "type":"list", "member":{"shape":"DimensionField"}, @@ -21326,6 +21387,10 @@ "Legend":{ "shape":"LegendOptions", "documentation":"The legend display setup of the visual.
" + }, + "AxesRangeScale":{ + "shape":"RadarChartAxesRangeScale", + "documentation":"The axis behavior options of a radar chart.
" } }, "documentation":"The configuration of a RadarChartVisual.
A Boolean that determines whether all dependencies of each resource ARN are recursively exported with the job. For example, say you provided a Dashboard ARN to the ResourceArns parameter. If you set IncludeAllDependencies to TRUE, any theme, dataset, and dataource resource that is a dependency of the dashboard is also exported.
A Boolean that determines whether all dependencies of each resource ARN are recursively exported with the job. For example, say you provided a Dashboard ARN to the ResourceArns parameter. If you set IncludeAllDependencies to TRUE, any theme, dataset, and data source resource that is a dependency of the dashboard is also exported.
The failure action for the import job.
If you choose ROLLBACK, failed import jobs will attempt to undo any asset changes caused by the failed job.
If you choose DO_NOTHING, failed import jobs will not attempt to roll back any asset changes caused by the failed job, possibly leaving the Amazon QuickSight account in an inconsistent state.
The failure action for the import job.
If you choose ROLLBACK, failed import jobs will attempt to undo any asset changes caused by the failed job.
If you choose DO_NOTHING, failed import jobs will not attempt to roll back any asset changes caused by the failed job, possibly keeping the Amazon QuickSight account in an inconsistent state.
Creates an X.509 certificate using the specified certificate signing request.
Requires permission to access the CreateCertificateFromCsr action.
The CSR must include a public key that is either an RSA key with a length of at least 2048 bits or an ECC key from NIST P-25 or NIST P-384 curves. For supported certificates, consult Certificate signing algorithms supported by IoT.
Reusing the same certificate signing request (CSR) results in a distinct certificate.
You can create multiple certificates in a batch by creating a directory, copying multiple .csr files into that directory, and then specifying that directory on the command line. The following commands show how to create a batch of certificates given a batch of CSRs. In the following commands, we assume that a set of CSRs are located inside of the directory my-csr-directory:
On Linux and OS X, the command is:
$ ls my-csr-directory/ | xargs -I {} aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/{}
This command lists all of the CSRs in my-csr-directory and pipes each CSR file name to the aws iot create-certificate-from-csr Amazon Web Services CLI command to create a certificate for the corresponding CSR.
You can also run the aws iot create-certificate-from-csr part of the command in parallel to speed up the certificate creation process:
$ ls my-csr-directory/ | xargs -P 10 -I {} aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/{}
On Windows PowerShell, the command to create certificates for all CSRs in my-csr-directory is:
> ls -Name my-csr-directory | %{aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/$_}
On a Windows command prompt, the command to create certificates for all CSRs in my-csr-directory is:
> forfiles /p my-csr-directory /c \"cmd /c aws iot create-certificate-from-csr --certificate-signing-request file://@path\"
Creates an X.509 certificate using the specified certificate signing request.
Requires permission to access the CreateCertificateFromCsr action.
The CSR must include a public key that is either an RSA key with a length of at least 2048 bits or an ECC key from NIST P-256 or NIST P-384 curves. For supported certificates, consult Certificate signing algorithms supported by IoT.
Reusing the same certificate signing request (CSR) results in a distinct certificate.
You can create multiple certificates in a batch by creating a directory, copying multiple .csr files into that directory, and then specifying that directory on the command line. The following commands show how to create a batch of certificates given a batch of CSRs. In the following commands, we assume that a set of CSRs are located inside of the directory my-csr-directory:
On Linux and OS X, the command is:
$ ls my-csr-directory/ | xargs -I {} aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/{}
This command lists all of the CSRs in my-csr-directory and pipes each CSR file name to the aws iot create-certificate-from-csr Amazon Web Services CLI command to create a certificate for the corresponding CSR.
You can also run the aws iot create-certificate-from-csr part of the command in parallel to speed up the certificate creation process:
$ ls my-csr-directory/ | xargs -P 10 -I {} aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/{}
On Windows PowerShell, the command to create certificates for all CSRs in my-csr-directory is:
> ls -Name my-csr-directory | %{aws iot create-certificate-from-csr --certificate-signing-request file://my-csr-directory/$_}
On a Windows command prompt, the command to create certificates for all CSRs in my-csr-directory is:
> forfiles /p my-csr-directory /c \"cmd /c aws iot create-certificate-from-csr --certificate-signing-request file://@path\"
Creates an IoT OTA update on a target group of things or groups.
Requires permission to access the CreateOTAUpdate action.
" }, + "CreatePackage":{ + "name":"CreatePackage", + "http":{ + "method":"PUT", + "requestUri":"/packages/{packageName}", + "responseCode":200 + }, + "input":{"shape":"CreatePackageRequest"}, + "output":{"shape":"CreatePackageResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ConflictException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ServiceQuotaExceededException"} + ], + "documentation":"Creates an IoT software package that can be deployed to your fleet.
Requires permission to access the CreatePackage and GetIndexingConfiguration actions.
", + "idempotent":true + }, + "CreatePackageVersion":{ + "name":"CreatePackageVersion", + "http":{ + "method":"PUT", + "requestUri":"/packages/{packageName}/versions/{versionName}", + "responseCode":200 + }, + "input":{"shape":"CreatePackageVersionRequest"}, + "output":{"shape":"CreatePackageVersionResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ConflictException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ServiceQuotaExceededException"} + ], + "documentation":"Creates a new version for an existing IoT software package.
Requires permission to access the CreatePackageVersion and GetIndexingConfiguration actions.
", + "idempotent":true + }, "CreatePolicy":{ "name":"CreatePolicy", "http":{ @@ -1061,6 +1099,40 @@ ], "documentation":"Delete an OTA update.
Requires permission to access the DeleteOTAUpdate action.
" }, + "DeletePackage":{ + "name":"DeletePackage", + "http":{ + "method":"DELETE", + "requestUri":"/packages/{packageName}", + "responseCode":200 + }, + "input":{"shape":"DeletePackageRequest"}, + "output":{"shape":"DeletePackageResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ], + "documentation":"Deletes a specific version from a software package.
Note: All package versions must be deleted before deleting the software package.
<p>Requires permission to access the DeletePackage action.</p>
", + "idempotent":true + }, + "DeletePackageVersion":{ + "name":"DeletePackageVersion", + "http":{ + "method":"DELETE", + "requestUri":"/packages/{packageName}/versions/{versionName}", + "responseCode":200 + }, + "input":{"shape":"DeletePackageVersionRequest"}, + "output":{"shape":"DeletePackageVersionResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ], + "documentation":"Deletes a specific version from a software package.
Note: If a package version is designated as default, you must remove the designation from the package using the UpdatePackage action. Requires permission to access the DeletePackageVersion action.
", + "idempotent":true + }, "DeletePolicy":{ "name":"DeletePolicy", "http":{ @@ -2133,6 +2205,55 @@ ], "documentation":"Gets an OTA update.
Requires permission to access the GetOTAUpdate action.
" }, + "GetPackage":{ + "name":"GetPackage", + "http":{ + "method":"GET", + "requestUri":"/packages/{packageName}", + "responseCode":200 + }, + "input":{"shape":"GetPackageRequest"}, + "output":{"shape":"GetPackageResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Gets information about the specified software package.
Requires permission to access the GetPackage action.
" + }, + "GetPackageConfiguration":{ + "name":"GetPackageConfiguration", + "http":{ + "method":"GET", + "requestUri":"/package-configuration", + "responseCode":200 + }, + "input":{"shape":"GetPackageConfigurationRequest"}, + "output":{"shape":"GetPackageConfigurationResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Gets information about the specified software package's configuration.
Requires permission to access the GetPackageConfiguration action.
" + }, + "GetPackageVersion":{ + "name":"GetPackageVersion", + "http":{ + "method":"GET", + "requestUri":"/packages/{packageName}/versions/{versionName}", + "responseCode":200 + }, + "input":{"shape":"GetPackageVersionRequest"}, + "output":{"shape":"GetPackageVersionResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Gets information about the specified package version.
Requires permission to access the GetPackageVersion action.
" + }, "GetPercentiles":{ "name":"GetPercentiles", "http":{ @@ -2724,6 +2845,38 @@ ], "documentation":"Lists certificates that are being transferred but not yet accepted.
Requires permission to access the ListOutgoingCertificates action.
" }, + "ListPackageVersions":{ + "name":"ListPackageVersions", + "http":{ + "method":"GET", + "requestUri":"/packages/{packageName}/versions", + "responseCode":200 + }, + "input":{"shape":"ListPackageVersionsRequest"}, + "output":{"shape":"ListPackageVersionsResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ], + "documentation":"Lists the software package versions associated to the account.
Requires permission to access the ListPackageVersions action.
" + }, + "ListPackages":{ + "name":"ListPackages", + "http":{ + "method":"GET", + "requestUri":"/packages", + "responseCode":200 + }, + "input":{"shape":"ListPackagesRequest"}, + "output":{"shape":"ListPackagesResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ], + "documentation":"Lists the software packages associated to the account.
Requires permission to access the ListPackages action.
" + }, "ListPolicies":{ "name":"ListPolicies", "http":{ @@ -3899,6 +4052,59 @@ ], "documentation":"Updates the definition for the specified mitigation action.
Requires permission to access the UpdateMitigationAction action.
" }, + "UpdatePackage":{ + "name":"UpdatePackage", + "http":{ + "method":"PATCH", + "requestUri":"/packages/{packageName}", + "responseCode":200 + }, + "input":{"shape":"UpdatePackageRequest"}, + "output":{"shape":"UpdatePackageResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Updates the supported fields for a specific package.
Requires permission to access the UpdatePackage and GetIndexingConfiguration actions.
", + "idempotent":true + }, + "UpdatePackageConfiguration":{ + "name":"UpdatePackageConfiguration", + "http":{ + "method":"PATCH", + "requestUri":"/package-configuration", + "responseCode":200 + }, + "input":{"shape":"UpdatePackageConfigurationRequest"}, + "output":{"shape":"UpdatePackageConfigurationResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ], + "documentation":"Updates the package configuration.
Requires permission to access the UpdatePackageConfiguration and iam:PassRole actions.
", + "idempotent":true + }, + "UpdatePackageVersion":{ + "name":"UpdatePackageVersion", + "http":{ + "method":"PATCH", + "requestUri":"/packages/{packageName}/versions/{versionName}", + "responseCode":200 + }, + "input":{"shape":"UpdatePackageVersionRequest"}, + "output":{"shape":"UpdatePackageVersionResponse"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Updates the supported fields for a specific package version.
Requires permission to access the UpdatePackageVersion and GetIndexingConfiguration actions.
", + "idempotent":true + }, "UpdateProvisioningTemplate":{ "name":"UpdateProvisioningTemplate", "http":{ @@ -6081,6 +6287,12 @@ "min":1, "pattern":"^[a-zA-Z0-9-_]+$" }, + "ClientToken":{ + "type":"string", + "max":64, + "min":36, + "pattern":"\\S{36,64}" + }, "CloudwatchAlarmAction":{ "type":"structure", "required":[ @@ -6276,7 +6488,11 @@ "ConflictException":{ "type":"structure", "members":{ - "message":{"shape":"errorMessage"} + "message":{"shape":"errorMessage"}, + "resourceId":{ + "shape":"resourceId", + "documentation":"A resource with the same name already exists.
" + } }, "documentation":"A resource with the same name already exists.
", "error":{"httpStatusCode":409}, @@ -6774,7 +6990,7 @@ }, "documentSource":{ "shape":"JobDocumentSource", - "documentation":"An S3 link, or S3 object URL, to the job document. The link is an Amazon S3 object URL and is required if you don't specify a value for document.
For example, --document-source https://s3.region-code.amazonaws.com/example-firmware/device-firmware.1.0.
For more information, see Methods for accessing a bucket.
" + "documentation":"An S3 link, or S3 object URL, to the job document. The link is an Amazon S3 object URL and is required if you don't specify a value for document.
For example, --document-source https://s3.region-code.amazonaws.com/example-firmware/device-firmware.1.0
For more information, see Methods for accessing a bucket.
" }, "document":{ "shape":"JobDocument", @@ -6827,6 +7043,10 @@ "schedulingConfig":{ "shape":"SchedulingConfig", "documentation":"The configuration that allows you to schedule a job for a future date and time in addition to specifying the end behavior for each job execution.
" + }, + "destinationPackageVersions":{ + "shape":"DestinationPackageVersions", + "documentation":"The package version Amazon Resource Names (ARNs) that are installed on the device when the job successfully completes.
Note:The following Length Constraints relates to a single string. Up to five strings are allowed.
" } } }, @@ -6891,6 +7111,10 @@ "maintenanceWindows":{ "shape":"MaintenanceWindows", "documentation":"Allows you to configure an optional maintenance window for the rollout of a job document to all devices in the target group for a job.
" + }, + "destinationPackageVersions":{ + "shape":"DestinationPackageVersions", + "documentation":"The package version Amazon Resource Names (ARNs) that are installed on the device when the job successfully completes.
Note:The following Length Constraints relates to a single string. Up to five strings are allowed.
" } } }, @@ -7072,6 +7296,123 @@ } } }, + "CreatePackageRequest":{ + "type":"structure", + "required":["packageName"], + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the new package.
", + "location":"uri", + "locationName":"packageName" + }, + "description":{ + "shape":"ResourceDescription", + "documentation":"A summary of the package being created. This can be used to outline the package's contents or purpose.
" + }, + "tags":{ + "shape":"TagMap", + "documentation":"Metadata that can be used to manage the package.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required.
", + "idempotencyToken":true, + "location":"querystring", + "locationName":"clientToken" + } + } + }, + "CreatePackageResponse":{ + "type":"structure", + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the package.
" + }, + "packageArn":{ + "shape":"PackageArn", + "documentation":"The Amazon Resource Name (ARN) for the package.
" + }, + "description":{ + "shape":"ResourceDescription", + "documentation":"The package description.
" + } + } + }, + "CreatePackageVersionRequest":{ + "type":"structure", + "required":[ + "packageName", + "versionName" + ], + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the associated package.
", + "location":"uri", + "locationName":"packageName" + }, + "versionName":{ + "shape":"VersionName", + "documentation":"The name of the new package version.
", + "location":"uri", + "locationName":"versionName" + }, + "description":{ + "shape":"ResourceDescription", + "documentation":"A summary of the package version being created. This can be used to outline the package's contents or purpose.
" + }, + "attributes":{ + "shape":"ResourceAttributes", + "documentation":"Metadata that can be used to define a package version’s configuration. For example, the S3 file location, configuration options that are being sent to the device or fleet.
The combined size of all the attributes on a package version is limited to 3KB.
" + }, + "tags":{ + "shape":"TagMap", + "documentation":"Metadata that can be used to manage the package version.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required.
", + "idempotencyToken":true, + "location":"querystring", + "locationName":"clientToken" + } + } + }, + "CreatePackageVersionResponse":{ + "type":"structure", + "members":{ + "packageVersionArn":{ + "shape":"PackageVersionArn", + "documentation":"The Amazon Resource Name (ARN) for the package.
" + }, + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the associated package.
" + }, + "versionName":{ + "shape":"VersionName", + "documentation":"The name of the new package version.
" + }, + "description":{ + "shape":"ResourceDescription", + "documentation":"The package version description.
" + }, + "attributes":{ + "shape":"ResourceAttributes", + "documentation":"Metadata that were added to the package version that can be used to define a package version’s configuration.
" + }, + "status":{ + "shape":"PackageVersionStatus", + "documentation":"The status of the package version. For more information, see Package version lifecycle.
" + }, + "errorReason":{ + "shape":"PackageVersionErrorReason", + "documentation":"Error reason for a package version failure during creation or update.
" + } + } + }, "CreatePolicyRequest":{ "type":"structure", "required":[ @@ -8076,6 +8417,63 @@ "members":{ } }, + "DeletePackageRequest":{ + "type":"structure", + "required":["packageName"], + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the target package.
", + "location":"uri", + "locationName":"packageName" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required.
", + "idempotencyToken":true, + "location":"querystring", + "locationName":"clientToken" + } + } + }, + "DeletePackageResponse":{ + "type":"structure", + "members":{ + } + }, + "DeletePackageVersionRequest":{ + "type":"structure", + "required":[ + "packageName", + "versionName" + ], + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the associated package.
", + "location":"uri", + "locationName":"packageName" + }, + "versionName":{ + "shape":"VersionName", + "documentation":"The name of the target package version.
", + "location":"uri", + "locationName":"versionName" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required.
", + "idempotencyToken":true, + "location":"querystring", + "locationName":"clientToken" + } + } + }, + "DeletePackageVersionResponse":{ + "type":"structure", + "members":{ + } + }, "DeletePolicyRequest":{ "type":"structure", "required":["policyName"], @@ -9094,6 +9492,10 @@ "maintenanceWindows":{ "shape":"MaintenanceWindows", "documentation":"Allows you to configure an optional maintenance window for the rollout of a job document to all devices in the target group for a job.
" + }, + "destinationPackageVersions":{ + "shape":"DestinationPackageVersions", + "documentation":"The package version Amazon Resource Names (ARNs) that are installed on the device when the job successfully completes.
Note:The following Length Constraints relates to a single string. Up to five strings are allowed.
" } } }, @@ -9665,6 +10067,10 @@ }, "documentation":"Describes the location of the updated firmware.
" }, + "DestinationPackageVersions":{ + "type":"list", + "member":{"shape":"PackageVersionArn"} + }, "DetachPolicyRequest":{ "type":"structure", "required":[ @@ -10080,7 +10486,7 @@ "DurationInMinutes":{ "type":"integer", "max":1430, - "min":30 + "min":1 }, "DurationSeconds":{"type":"integer"}, "DynamicGroupStatus":{ @@ -10264,6 +10670,10 @@ "documentation":"The input for the EnableTopicRuleRequest operation.
" }, "Enabled":{"type":"boolean"}, + "EnabledBoolean":{ + "type":"boolean", + "box":true + }, "EndpointAddress":{"type":"string"}, "EndpointType":{ "type":"string", @@ -10758,6 +11168,123 @@ } } }, + "GetPackageConfigurationRequest":{ + "type":"structure", + "members":{ + } + }, + "GetPackageConfigurationResponse":{ + "type":"structure", + "members":{ + "versionUpdateByJobsConfig":{ + "shape":"VersionUpdateByJobsConfig", + "documentation":"The version that is associated to a specific job.
" + } + } + }, + "GetPackageRequest":{ + "type":"structure", + "required":["packageName"], + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the target package.
", + "location":"uri", + "locationName":"packageName" + } + } + }, + "GetPackageResponse":{ + "type":"structure", + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the package.
" + }, + "packageArn":{ + "shape":"PackageArn", + "documentation":"The ARN for the package.
" + }, + "description":{ + "shape":"ResourceDescription", + "documentation":"The package description.
" + }, + "defaultVersionName":{ + "shape":"VersionName", + "documentation":"The name of the default package version.
" + }, + "creationDate":{ + "shape":"CreationDate", + "documentation":"The date the package was created.
" + }, + "lastModifiedDate":{ + "shape":"LastModifiedDate", + "documentation":"The date when the package was last updated.
" + } + } + }, + "GetPackageVersionRequest":{ + "type":"structure", + "required":[ + "packageName", + "versionName" + ], + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the associated package.
", + "location":"uri", + "locationName":"packageName" + }, + "versionName":{ + "shape":"VersionName", + "documentation":"The name of the target package version.
", + "location":"uri", + "locationName":"versionName" + } + } + }, + "GetPackageVersionResponse":{ + "type":"structure", + "members":{ + "packageVersionArn":{ + "shape":"PackageVersionArn", + "documentation":"The ARN for the package version.
" + }, + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the package.
" + }, + "versionName":{ + "shape":"VersionName", + "documentation":"The name of the package version.
" + }, + "description":{ + "shape":"ResourceDescription", + "documentation":"The package version description.
" + }, + "attributes":{ + "shape":"ResourceAttributes", + "documentation":"Metadata that were added to the package version that can be used to define a package version’s configuration.
" + }, + "status":{ + "shape":"PackageVersionStatus", + "documentation":"The status associated to the package version. For more information, see Package version lifecycle.
" + }, + "errorReason":{ + "shape":"PackageVersionErrorReason", + "documentation":"Error reason for a package version failure during creation or update.
" + }, + "creationDate":{ + "shape":"CreationDate", + "documentation":"The date when the package version was created.
" + }, + "lastModifiedDate":{ + "shape":"LastModifiedDate", + "documentation":"The date when the package version was last updated.
" + } + } + }, "GetPercentilesRequest":{ "type":"structure", "required":["queryString"], @@ -11524,6 +12051,10 @@ "scheduledJobRollouts":{ "shape":"ScheduledJobRolloutList", "documentation":"Displays the next seven maintenance window occurrences and their start times.
" + }, + "destinationPackageVersions":{ + "shape":"DestinationPackageVersions", + "documentation":"The package version Amazon Resource Names (ARNs) that are installed on the device when the job successfully completes.
Note:The following Length Constraints relates to a single string. Up to five strings are allowed.
" } }, "documentation":"The Job object contains details about a job.
The output from the ListOutgoingCertificates operation.
" }, + "ListPackageVersionsRequest":{ + "type":"structure", + "required":["packageName"], + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the target package.
", + "location":"uri", + "locationName":"packageName" + }, + "status":{ + "shape":"PackageVersionStatus", + "documentation":"The status of the package version. For more information, see Package version lifecycle.
", + "location":"querystring", + "locationName":"status" + }, + "maxResults":{ + "shape":"PackageCatalogMaxResults", + "documentation":"The maximum number of results to return at one time.
", + "location":"querystring", + "locationName":"maxResults" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"The token for the next set of results.
", + "location":"querystring", + "locationName":"nextToken" + } + } + }, + "ListPackageVersionsResponse":{ + "type":"structure", + "members":{ + "packageVersionSummaries":{ + "shape":"PackageVersionSummaryList", + "documentation":"Lists the package versions associated to the package.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"The token for the next set of results.
" + } + } + }, + "ListPackagesRequest":{ + "type":"structure", + "members":{ + "maxResults":{ + "shape":"PackageCatalogMaxResults", + "documentation":"The maximum number of results returned at one time.
", + "location":"querystring", + "locationName":"maxResults" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"The token for the next set of results.
", + "location":"querystring", + "locationName":"nextToken" + } + } + }, + "ListPackagesResponse":{ + "type":"structure", + "members":{ + "packageSummaries":{ + "shape":"PackageSummaryList", + "documentation":"The software package summary.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"The token for the next set of results.
" + } + } + }, "ListPoliciesRequest":{ "type":"structure", "members":{ @@ -15187,6 +15791,97 @@ "member":{"shape":"OutgoingCertificate"} }, "OverrideDynamicGroups":{"type":"boolean"}, + "PackageArn":{"type":"string"}, + "PackageCatalogMaxResults":{ + "type":"integer", + "box":true, + "max":100, + "min":1 + }, + "PackageName":{ + "type":"string", + "max":128, + "min":1, + "pattern":"[a-zA-Z0-9-_.]+" + }, + "PackageSummary":{ + "type":"structure", + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name for the target package.
" + }, + "defaultVersionName":{ + "shape":"VersionName", + "documentation":"The name of the default package version.
" + }, + "creationDate":{ + "shape":"CreationDate", + "documentation":"The date that the package was created.
" + }, + "lastModifiedDate":{ + "shape":"LastModifiedDate", + "documentation":"The date that the package was last updated.
" + } + }, + "documentation":"A summary of information about a software package.
" + }, + "PackageSummaryList":{ + "type":"list", + "member":{"shape":"PackageSummary"} + }, + "PackageVersionAction":{ + "type":"string", + "enum":[ + "PUBLISH", + "DEPRECATE" + ] + }, + "PackageVersionArn":{ + "type":"string", + "max":1600, + "min":1, + "pattern":"^arn:[!-~]+$" + }, + "PackageVersionErrorReason":{"type":"string"}, + "PackageVersionStatus":{ + "type":"string", + "enum":[ + "DRAFT", + "PUBLISHED", + "DEPRECATED" + ] + }, + "PackageVersionSummary":{ + "type":"structure", + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the associated software package.
" + }, + "versionName":{ + "shape":"VersionName", + "documentation":"The name of the target package version.
" + }, + "status":{ + "shape":"PackageVersionStatus", + "documentation":"The status of the package version. For more information, see Package version lifecycle.
" + }, + "creationDate":{ + "shape":"CreationDate", + "documentation":"The date that the package version was created.
" + }, + "lastModifiedDate":{ + "shape":"LastModifiedDate", + "documentation":"The date that the package version was last updated.
" + } + }, + "documentation":"A summary of information about a package version.
" + }, + "PackageVersionSummaryList":{ + "type":"list", + "member":{"shape":"PackageVersionSummary"} + }, "PageSize":{ "type":"integer", "max":250, @@ -16029,6 +16724,29 @@ "key":{"shape":"ResourceLogicalId"}, "value":{"shape":"ResourceArn"} }, + "ResourceAttributeKey":{ + "type":"string", + "min":1, + "pattern":"[a-zA-Z0-9:_-]+" + }, + "ResourceAttributeValue":{ + "type":"string", + "min":1, + "pattern":"[^\\p{C}]+" + }, + "ResourceAttributes":{ + "type":"map", + "key":{"shape":"ResourceAttributeKey"}, + "value":{"shape":"ResourceAttributeValue"}, + "sensitive":true + }, + "ResourceDescription":{ + "type":"string", + "max":1024, + "min":0, + "pattern":"[^\\p{C}]+", + "sensitive":true + }, "ResourceIdentifier":{ "type":"structure", "members":{ @@ -16568,6 +17286,15 @@ "pattern":"[\\s\\S]*" }, "ServiceName":{"type":"string"}, + "ServiceQuotaExceededException":{ + "type":"structure", + "members":{ + "message":{"shape":"errorMessage"} + }, + "documentation":"A limit has been exceeded.
", + "error":{"httpStatusCode":402}, + "exception":true + }, "ServiceType":{ "type":"string", "enum":[ @@ -17245,6 +17972,13 @@ "type":"list", "member":{"shape":"Tag"} }, + "TagMap":{ + "type":"map", + "key":{"shape":"TagKey"}, + "value":{"shape":"TagValue"}, + "max":50, + "min":1 + }, "TagResourceRequest":{ "type":"structure", "required":[ @@ -18317,6 +19051,10 @@ "exception":true }, "UndoDeprecate":{"type":"boolean"}, + "UnsetDefaultVersion":{ + "type":"boolean", + "box":true + }, "UnsignedLong":{ "type":"long", "min":0 @@ -18886,6 +19624,108 @@ } } }, + "UpdatePackageConfigurationRequest":{ + "type":"structure", + "members":{ + "versionUpdateByJobsConfig":{ + "shape":"VersionUpdateByJobsConfig", + "documentation":"Configuration to manage job's package version reporting. This updates the thing's reserved named shadow that the job targets.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required.
", + "idempotencyToken":true, + "location":"querystring", + "locationName":"clientToken" + } + } + }, + "UpdatePackageConfigurationResponse":{ + "type":"structure", + "members":{ + } + }, + "UpdatePackageRequest":{ + "type":"structure", + "required":["packageName"], + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the target package.
", + "location":"uri", + "locationName":"packageName" + }, + "description":{ + "shape":"ResourceDescription", + "documentation":"The package description.
" + }, + "defaultVersionName":{ + "shape":"VersionName", + "documentation":"The name of the default package version.
Note: You cannot name a defaultVersion and set unsetDefaultVersion equal to true at the same time.
Indicates whether you want to remove the named default package version from the software package. Set as true to remove the default package version.
Note: You cannot name a defaultVersion and set unsetDefaultVersion equal to true at the same time.
A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required.
", + "idempotencyToken":true, + "location":"querystring", + "locationName":"clientToken" + } + } + }, + "UpdatePackageResponse":{ + "type":"structure", + "members":{ + } + }, + "UpdatePackageVersionRequest":{ + "type":"structure", + "required":[ + "packageName", + "versionName" + ], + "members":{ + "packageName":{ + "shape":"PackageName", + "documentation":"The name of the associated software package.
", + "location":"uri", + "locationName":"packageName" + }, + "versionName":{ + "shape":"VersionName", + "documentation":"The name of the target package version.
", + "location":"uri", + "locationName":"versionName" + }, + "description":{ + "shape":"ResourceDescription", + "documentation":"The package version description.
" + }, + "attributes":{ + "shape":"ResourceAttributes", + "documentation":"Metadata that can be used to define a package version’s configuration. For example, the S3 file location, configuration options that are being sent to the device or fleet.
Note: Attributes can be updated only when the package version is in a draft state.
The combined size of all the attributes on a package version is limited to 3KB.
" + }, + "action":{ + "shape":"PackageVersionAction", + "documentation":"The status that the package version should be assigned. For more information, see Package version lifecycle.
" + }, + "clientToken":{ + "shape":"ClientToken", + "documentation":"A unique case-sensitive identifier that you can provide to ensure the idempotency of the request. Don't reuse this client token if a new idempotent request is required.
", + "idempotencyToken":true, + "location":"querystring", + "locationName":"clientToken" + } + } + }, + "UpdatePackageVersionResponse":{ + "type":"structure", + "members":{ + } + }, "UpdateProvisioningTemplateRequest":{ "type":"structure", "required":["templateName"], @@ -19333,6 +20173,15 @@ "type":"list", "member":{"shape":"ValidationError"} }, + "ValidationException":{ + "type":"structure", + "members":{ + "message":{"shape":"errorMessage"} + }, + "documentation":"The request is not valid.
", + "error":{"httpStatusCode":400}, + "exception":true + }, "Value":{ "type":"string", "max":4096, @@ -19367,7 +20216,27 @@ "error":{"httpStatusCode":409}, "exception":true }, + "VersionName":{ + "type":"string", + "max":64, + "min":1, + "pattern":"[a-zA-Z0-9-_.]+" + }, "VersionNumber":{"type":"long"}, + "VersionUpdateByJobsConfig":{ + "type":"structure", + "members":{ + "enabled":{ + "shape":"EnabledBoolean", + "documentation":"Indicates whether the Job is enabled or not.
" + }, + "roleArn":{ + "shape":"RoleArn", + "documentation":"The Amazon Resource Name (ARN) of the role that grants permission to the IoT jobs service to update the reserved named shadow when the job successfully completes.
" + } + }, + "documentation":"Configuration to manage IoT Job's package version reporting. If configured, Jobs updates the thing's reserved named shadow with the package version information up on successful job completion.
Note: For each job, the destinationPackageVersions attribute has to be set with the correct data for Jobs to report to the thing shadow.
" + }, "VersionsLimitExceededException":{ "type":"structure", "members":{ From 58bc647d9c83fccfdc99c700b13b3a89e6bdc2b0 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 6 Jun 2023 18:08:16 +0000 Subject: [PATCH 045/317] Amazon Simple Queue Service Update: Amazon SQS adds three new APIs - StartMessageMoveTask, CancelMessageMoveTask, and ListMessageMoveTasks to automate redriving messages from dead-letter queues to source queues or a custom destination. --- ...ture-AmazonSimpleQueueService-a9c0177.json | 6 + .../codegen-resources/service-2.json | 242 +++++++++++++++--- 2 files changed, 219 insertions(+), 29 deletions(-) create mode 100644 .changes/next-release/feature-AmazonSimpleQueueService-a9c0177.json diff --git a/.changes/next-release/feature-AmazonSimpleQueueService-a9c0177.json b/.changes/next-release/feature-AmazonSimpleQueueService-a9c0177.json new file mode 100644 index 000000000000..129337ba5338 --- /dev/null +++ b/.changes/next-release/feature-AmazonSimpleQueueService-a9c0177.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Simple Queue Service", + "contributor": "", + "description": "Amazon SQS adds three new APIs - StartMessageMoveTask, CancelMessageMoveTask, and ListMessageMoveTasks to automate redriving messages from dead-letter queues to source queues or a custom destination." +} diff --git a/services/sqs/src/main/resources/codegen-resources/service-2.json b/services/sqs/src/main/resources/codegen-resources/service-2.json index c1a758508ca3..59c24abf0823 100644 --- a/services/sqs/src/main/resources/codegen-resources/service-2.json +++ b/services/sqs/src/main/resources/codegen-resources/service-2.json @@ -22,7 +22,24 @@ "errors":[ {"shape":"OverLimit"} ], - "documentation":"Adds a permission to a queue for a specific principal. This allows sharing access to the queue.
When you create a queue, you have full control access rights for the queue. Only you, the owner of the queue, can grant or deny permissions to the queue. For more information about these permissions, see Allow Developers to Write Messages to a Shared Queue in the Amazon SQS Developer Guide.
AddPermission generates a policy for you. You can use SetQueueAttributes to upload your policy. For more information, see Using Custom Policies with the Amazon SQS Access Policy Language in the Amazon SQS Developer Guide.
An Amazon SQS policy can have a maximum of 7 actions.
To remove the ability to change queue permissions, you must deny permission to the AddPermission, RemovePermission, and SetQueueAttributes actions in your IAM policy.
Some actions take lists of parameters. These lists are specified using the param.n notation. Values of n are integers starting from 1. For example, a parameter list with two elements looks like this:
&AttributeName.1=first
&AttributeName.2=second
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a user name in the Amazon SQS Developer Guide.
Adds a permission to a queue for a specific principal. This allows sharing access to the queue.
When you create a queue, you have full control access rights for the queue. Only you, the owner of the queue, can grant or deny permissions to the queue. For more information about these permissions, see Allow Developers to Write Messages to a Shared Queue in the Amazon SQS Developer Guide.
AddPermission generates a policy for you. You can use SetQueueAttributes to upload your policy. For more information, see Using Custom Policies with the Amazon SQS Access Policy Language in the Amazon SQS Developer Guide.
An Amazon SQS policy can have a maximum of seven actions per statement.
To remove the ability to change queue permissions, you must deny permission to the AddPermission, RemovePermission, and SetQueueAttributes actions in your IAM policy.
Amazon SQS AddPermission does not support adding a non-account principal.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a username in the Amazon SQS Developer Guide.
Cancels a specified message movement task.
A message movement can only be cancelled when the current status is RUNNING.
Cancelling a message movement task does not revert the messages that have already been moved. It can only stop the messages that have not been moved yet.
Changes the visibility timeout of a specified message in a queue to a new value. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. For more information, see Visibility Timeout in the Amazon SQS Developer Guide.
For example, you have a message with a visibility timeout of 5 minutes. After 3 minutes, you call ChangeMessageVisibility with a timeout of 10 minutes. You can continue to call ChangeMessageVisibility to extend the visibility timeout to the maximum allowed time. If you try to extend the visibility timeout beyond the maximum, your request is rejected.
An Amazon SQS message has three basic states:
Sent to a queue by a producer.
Received from the queue by a consumer.
Deleted from the queue.
A message is considered to be stored after it is sent to a queue by a producer, but not yet received from the queue by a consumer (that is, between states 1 and 2). There is no limit to the number of stored messages. A message is considered to be in flight after it is received from a queue by a consumer, but not yet deleted from the queue (that is, between states 2 and 3). There is a limit to the number of inflight messages.
Limits that apply to inflight messages are unrelated to the unlimited number of stored messages.
For most standard queues (depending on queue traffic and message backlog), there can be a maximum of approximately 120,000 inflight messages (received from a queue by a consumer, but not yet deleted from the queue). If you reach this limit, Amazon SQS returns the OverLimit error message. To avoid reaching the limit, you should delete messages from the queue after they're processed. You can also increase the number of queues you use to process your messages. To request a limit increase, file a support request.
For FIFO queues, there can be a maximum of 20,000 inflight messages (received from a queue by a consumer, but not yet deleted from the queue). If you reach this limit, Amazon SQS returns no error messages.
If you attempt to set the VisibilityTimeout to a value greater than the maximum time left, Amazon SQS returns an error. Amazon SQS doesn't automatically recalculate and increase the timeout to the maximum remaining time.
Unlike with a queue, when you change the visibility timeout for a specific message the timeout value is applied immediately but isn't saved in memory for that message. If you don't delete a message after it is received, the visibility timeout for the message reverts to the original timeout value (not to the value you set using the ChangeMessageVisibility action) the next time the message is received.
Changes the visibility timeout of a specified message in a queue to a new value. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. For more information, see Visibility Timeout in the Amazon SQS Developer Guide.
For example, if the default timeout for a queue is 60 seconds, 15 seconds have elapsed since you received the message, and you send a ChangeMessageVisibility call with VisibilityTimeout set to 10 seconds, the 10 seconds begin to count from the time that you make the ChangeMessageVisibility call. Thus, any attempt to change the visibility timeout or to delete that message 10 seconds after you initially change the visibility timeout (a total of 25 seconds) might result in an error.
An Amazon SQS message has three basic states:
Sent to a queue by a producer.
Received from the queue by a consumer.
Deleted from the queue.
A message is considered to be stored after it is sent to a queue by a producer, but not yet received from the queue by a consumer (that is, between states 1 and 2). There is no limit to the number of stored messages. A message is considered to be in flight after it is received from a queue by a consumer, but not yet deleted from the queue (that is, between states 2 and 3). There is a limit to the number of in flight messages.
Limits that apply to in flight messages are unrelated to the unlimited number of stored messages.
For most standard queues (depending on queue traffic and message backlog), there can be a maximum of approximately 120,000 in flight messages (received from a queue by a consumer, but not yet deleted from the queue). If you reach this limit, Amazon SQS returns the OverLimit error message. To avoid reaching the limit, you should delete messages from the queue after they're processed. You can also increase the number of queues you use to process your messages. To request a limit increase, file a support request.
For FIFO queues, there can be a maximum of 20,000 in flight messages (received from a queue by a consumer, but not yet deleted from the queue). If you reach this limit, Amazon SQS returns no error messages.
If you attempt to set the VisibilityTimeout to a value greater than the maximum time left, Amazon SQS returns an error. Amazon SQS doesn't automatically recalculate and increase the timeout to the maximum remaining time.
Unlike with a queue, when you change the visibility timeout for a specific message the timeout value is applied immediately but isn't saved in memory for that message. If you don't delete a message after it is received, the visibility timeout for the message reverts to the original timeout value (not to the value you set using the ChangeMessageVisibility action) the next time the message is received.
Changes the visibility timeout of multiple messages. This is a batch version of ChangeMessageVisibility. The result of the action on each message is reported individually in the response. You can send up to 10 ChangeMessageVisibility requests with each ChangeMessageVisibilityBatch action.
Because the batch request can result in a combination of successful and unsuccessful actions, you should check for batch errors even when the call returns an HTTP status code of 200.
Some actions take lists of parameters. These lists are specified using the param.n notation. Values of n are integers starting from 1. For example, a parameter list with two elements looks like this:
&AttributeName.1=first
&AttributeName.2=second
Changes the visibility timeout of multiple messages. This is a batch version of ChangeMessageVisibility. The result of the action on each message is reported individually in the response. You can send up to 10 ChangeMessageVisibility requests with each ChangeMessageVisibilityBatch action.
Because the batch request can result in a combination of successful and unsuccessful actions, you should check for batch errors even when the call returns an HTTP status code of 200.
Creates a new standard or FIFO queue. You can pass one or more attributes in the request. Keep the following in mind:
If you don't specify the FifoQueue attribute, Amazon SQS creates a standard queue.
You can't change the queue type after you create it and you can't convert an existing standard queue into a FIFO queue. You must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue. For more information, see Moving From a Standard Queue to a FIFO Queue in the Amazon SQS Developer Guide.
If you don't provide a value for an attribute, the queue is created with the default value for the attribute.
If you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.
To successfully create a new queue, you must provide a queue name that adheres to the limits related to queues and is unique within the scope of your queues.
After you create a queue, you must wait at least one second after the queue is created to be able to use the queue.
To get the queue URL, use the GetQueueUrl action. GetQueueUrl requires only the QueueName parameter. be aware of existing queue names:
If you provide the name of an existing queue along with the exact names and values of all the queue's attributes, CreateQueue returns the queue URL for the existing queue.
If the queue name, attribute names, or attribute values don't match an existing queue, CreateQueue returns an error.
Some actions take lists of parameters. These lists are specified using the param.n notation. Values of n are integers starting from 1. For example, a parameter list with two elements looks like this:
&AttributeName.1=first
&AttributeName.2=second
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a user name in the Amazon SQS Developer Guide.
Creates a new standard or FIFO queue. You can pass one or more attributes in the request. Keep the following in mind:
If you don't specify the FifoQueue attribute, Amazon SQS creates a standard queue.
You can't change the queue type after you create it and you can't convert an existing standard queue into a FIFO queue. You must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue. For more information, see Moving From a Standard Queue to a FIFO Queue in the Amazon SQS Developer Guide.
If you don't provide a value for an attribute, the queue is created with the default value for the attribute.
If you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.
To successfully create a new queue, you must provide a queue name that adheres to the limits related to queues and is unique within the scope of your queues.
After you create a queue, you must wait at least one second after the queue is created to be able to use the queue.
To get the queue URL, use the GetQueueUrl action. GetQueueUrl requires only the QueueName parameter. Be aware of existing queue names:
If you provide the name of an existing queue along with the exact names and values of all the queue's attributes, CreateQueue returns the queue URL for the existing queue.
If the queue name, attribute names, or attribute values don't match an existing queue, CreateQueue returns an error.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a username in the Amazon SQS Developer Guide.
Deletes the specified message from the specified queue. To select the message to delete, use the ReceiptHandle of the message (not the MessageId which you receive when you send the message). Amazon SQS can delete a message from a queue even if a visibility timeout setting causes the message to be locked by another consumer. Amazon SQS automatically deletes messages left in a queue longer than the retention period configured for the queue.
The ReceiptHandle is associated with a specific instance of receiving a message. If you receive a message more than once, the ReceiptHandle is different each time you receive a message. When you use the DeleteMessage action, you must provide the most recently received ReceiptHandle for the message (otherwise, the request succeeds, but the message might not be deleted).
For standard queues, it is possible to receive a message even after you delete it. This might happen on rare occasions if one of the servers which stores a copy of the message is unavailable when you send the request to delete the message. The copy remains on the server and might be returned to you during a subsequent receive request. You should ensure that your application is idempotent, so that receiving a message more than once does not cause issues.
Deletes the specified message from the specified queue. To select the message to delete, use the ReceiptHandle of the message (not the MessageId which you receive when you send the message). Amazon SQS can delete a message from a queue even if a visibility timeout setting causes the message to be locked by another consumer. Amazon SQS automatically deletes messages left in a queue longer than the retention period configured for the queue.
The ReceiptHandle is associated with a specific instance of receiving a message. If you receive a message more than once, the ReceiptHandle is different each time you receive a message. When you use the DeleteMessage action, you must provide the most recently received ReceiptHandle for the message (otherwise, the request succeeds, but the message will not be deleted).
For standard queues, it is possible to receive a message even after you delete it. This might happen on rare occasions if one of the servers which stores a copy of the message is unavailable when you send the request to delete the message. The copy remains on the server and might be returned to you during a subsequent receive request. You should ensure that your application is idempotent, so that receiving a message more than once does not cause issues.
Deletes up to ten messages from the specified queue. This is a batch version of DeleteMessage. The result of the action on each message is reported individually in the response.
Because the batch request can result in a combination of successful and unsuccessful actions, you should check for batch errors even when the call returns an HTTP status code of 200.
Some actions take lists of parameters. These lists are specified using the param.n notation. Values of n are integers starting from 1. For example, a parameter list with two elements looks like this:
&AttributeName.1=first
&AttributeName.2=second
Deletes up to ten messages from the specified queue. This is a batch version of DeleteMessage. The result of the action on each message is reported individually in the response.
Because the batch request can result in a combination of successful and unsuccessful actions, you should check for batch errors even when the call returns an HTTP status code of 200.
Deletes the queue specified by the QueueUrl, regardless of the queue's contents.
Be careful with the DeleteQueue action: When you delete a queue, any messages in the queue are no longer available.
When you delete a queue, the deletion process takes up to 60 seconds. Requests you send involving that queue during the 60 seconds might succeed. For example, a SendMessage request might succeed, but after 60 seconds the queue and the message you sent no longer exist.
When you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a user name in the Amazon SQS Developer Guide.
Deletes the queue specified by the QueueUrl, regardless of the queue's contents.
Be careful with the DeleteQueue action: When you delete a queue, any messages in the queue are no longer available.
When you delete a queue, the deletion process takes up to 60 seconds. Requests you send involving that queue during the 60 seconds might succeed. For example, a SendMessage request might succeed, but after 60 seconds the queue and the message you sent no longer exist.
When you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a username in the Amazon SQS Developer Guide.
The delete operation uses the HTTP GET verb.
Returns a list of your queues that have the RedrivePolicy queue attribute configured with a dead-letter queue.
The ListDeadLetterSourceQueues method supports pagination. Set parameter MaxResults in the request to specify the maximum number of results to be returned in the response. If you do not set MaxResults, the response includes a maximum of 1,000 results. If you set MaxResults and there are additional results to display, the response includes a value for NextToken. Use NextToken as a parameter in your next request to ListDeadLetterSourceQueues to receive the next page of results.
For more information about using dead-letter queues, see Using Amazon SQS Dead-Letter Queues in the Amazon SQS Developer Guide.
" }, + "ListMessageMoveTasks":{ + "name":"ListMessageMoveTasks", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListMessageMoveTasksRequest"}, + "output":{ + "shape":"ListMessageMoveTasksResult", + "resultWrapper":"ListMessageMoveTasksResult" + }, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"UnsupportedOperation"} + ], + "documentation":"Gets the most recent message movement tasks (up to 10) under a specific source queue.
" + }, "ListQueueTags":{ "name":"ListQueueTags", "http":{ @@ -173,7 +207,7 @@ "shape":"ListQueueTagsResult", "resultWrapper":"ListQueueTagsResult" }, - "documentation":"List all cost allocation tags added to the specified Amazon SQS queue. For an overview, see Tagging Your Amazon SQS Queues in the Amazon SQS Developer Guide.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a user name in the Amazon SQS Developer Guide.
List all cost allocation tags added to the specified Amazon SQS queue. For an overview, see Tagging Your Amazon SQS Queues in the Amazon SQS Developer Guide.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a username in the Amazon SQS Developer Guide.
Returns a list of your queues in the current region. The response includes a maximum of 1,000 results. If you specify a value for the optional QueueNamePrefix parameter, only queues with a name that begins with the specified value are returned.
The listQueues methods supports pagination. Set parameter MaxResults in the request to specify the maximum number of results to be returned in the response. If you do not set MaxResults, the response includes a maximum of 1,000 results. If you set MaxResults and there are additional results to display, the response includes a value for NextToken. Use NextToken as a parameter in your next request to listQueues to receive the next page of results.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a user name in the Amazon SQS Developer Guide.
Returns a list of your queues in the current region. The response includes a maximum of 1,000 results. If you specify a value for the optional QueueNamePrefix parameter, only queues with a name that begins with the specified value are returned.
The listQueues method supports pagination. Set parameter MaxResults in the request to specify the maximum number of results to be returned in the response. If you do not set MaxResults, the response includes a maximum of 1,000 results. If you set MaxResults and there are additional results to display, the response includes a value for NextToken. Use NextToken as a parameter in your next request to listQueues to receive the next page of results.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a username in the Amazon SQS Developer Guide.
Revokes any permissions in the queue policy that matches the specified Label parameter.
Only the owner of a queue can remove permissions from it.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a user name in the Amazon SQS Developer Guide.
To remove the ability to change queue permissions, you must deny permission to the AddPermission, RemovePermission, and SetQueueAttributes actions in your IAM policy.
Revokes any permissions in the queue policy that matches the specified Label parameter.
Only the owner of a queue can remove permissions from it.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a username in the Amazon SQS Developer Guide.
To remove the ability to change queue permissions, you must deny permission to the AddPermission, RemovePermission, and SetQueueAttributes actions in your IAM policy.
Delivers up to ten messages to the specified queue. This is a batch version of SendMessage. For a FIFO queue, multiple messages within a single batch are enqueued in the order they are sent.
The result of sending each message is reported individually in the response. Because the batch request can result in a combination of successful and unsuccessful actions, you should check for batch errors even when the call returns an HTTP status code of 200.
The maximum allowed individual message size and the maximum total payload size (the sum of the individual lengths of all of the batched messages) are both 256 KB (262,144 bytes).
A message can include only XML, JSON, and unformatted text. The following Unicode characters are allowed:
#x9 | #xA | #xD | #x20 to #xD7FF | #xE000 to #xFFFD | #x10000 to #x10FFFF
Any characters not included in this list will be rejected. For more information, see the W3C specification for characters.
If you don't specify the DelaySeconds parameter for an entry, Amazon SQS uses the default value for the queue.
Some actions take lists of parameters. These lists are specified using the param.n notation. Values of n are integers starting from 1. For example, a parameter list with two elements looks like this:
&AttributeName.1=first
&AttributeName.2=second
You can use SendMessageBatch to send up to 10 messages to the specified queue by assigning either identical or different values to each message (or by not assigning values at all). This is a batch version of SendMessage. For a FIFO queue, multiple messages within a single batch are enqueued in the order they are sent.
The result of sending each message is reported individually in the response. Because the batch request can result in a combination of successful and unsuccessful actions, you should check for batch errors even when the call returns an HTTP status code of 200.
The maximum allowed individual message size and the maximum total payload size (the sum of the individual lengths of all of the batched messages) are both 256 KiB (262,144 bytes).
A message can include only XML, JSON, and unformatted text. The following Unicode characters are allowed:
#x9 | #xA | #xD | #x20 to #xD7FF | #xE000 to #xFFFD | #x10000 to #x10FFFF
Any characters not included in this list will be rejected. For more information, see the W3C specification for characters.
If you don't specify the DelaySeconds parameter for an entry, Amazon SQS uses the default value for the queue.
Sets the value of one or more queue attributes. When you change a queue's attributes, the change can take up to 60 seconds for most of the attributes to propagate throughout the Amazon SQS system. Changes made to the MessageRetentionPeriod attribute can take up to 15 minutes.
In the future, new attributes might be added. If you write code that calls this action, we recommend that you structure your code so that it can handle new attributes gracefully.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a user name in the Amazon SQS Developer Guide.
To remove the ability to change queue permissions, you must deny permission to the AddPermission, RemovePermission, and SetQueueAttributes actions in your IAM policy.
Sets the value of one or more queue attributes. When you change a queue's attributes, the change can take up to 60 seconds for most of the attributes to propagate throughout the Amazon SQS system. Changes made to the MessageRetentionPeriod attribute can take up to 15 minutes and will impact existing messages in the queue, potentially causing them to expire and be deleted if the MessageRetentionPeriod is reduced below the age of existing messages.
In the future, new attributes might be added. If you write code that calls this action, we recommend that you structure your code so that it can handle new attributes gracefully.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a username in the Amazon SQS Developer Guide.
To remove the ability to change queue permissions, you must deny permission to the AddPermission, RemovePermission, and SetQueueAttributes actions in your IAM policy.
Starts an asynchronous task to move messages from a specified source queue to a specified destination queue.
This action is currently limited to supporting message redrive from dead-letter queues (DLQs) only. In this context, the source queue is the dead-letter queue (DLQ), while the destination queue can be the original source queue (from which the messages were driven to the dead-letter queue), or a custom destination queue.
Currently, only standard queues are supported.
Only one active message movement task is supported per queue at any given time.
Add cost allocation tags to the specified Amazon SQS queue. For an overview, see Tagging Your Amazon SQS Queues in the Amazon SQS Developer Guide.
When you use queue tags, keep the following guidelines in mind:
Adding more than 50 tags to a queue isn't recommended.
Tags don't have any semantic meaning. Amazon SQS interprets tags as character strings.
Tags are case-sensitive.
A new tag with a key identical to that of an existing tag overwrites the existing tag.
For a full list of tag restrictions, see Quotas related to queues in the Amazon SQS Developer Guide.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a user name in the Amazon SQS Developer Guide.
Add cost allocation tags to the specified Amazon SQS queue. For an overview, see Tagging Your Amazon SQS Queues in the Amazon SQS Developer Guide.
When you use queue tags, keep the following guidelines in mind:
Adding more than 50 tags to a queue isn't recommended.
Tags don't have any semantic meaning. Amazon SQS interprets tags as character strings.
Tags are case-sensitive.
A new tag with a key identical to that of an existing tag overwrites the existing tag.
For a full list of tag restrictions, see Quotas related to queues in the Amazon SQS Developer Guide.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a username in the Amazon SQS Developer Guide.
Remove cost allocation tags from the specified Amazon SQS queue. For an overview, see Tagging Your Amazon SQS Queues in the Amazon SQS Developer Guide.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a user name in the Amazon SQS Developer Guide.
Remove cost allocation tags from the specified Amazon SQS queue. For an overview, see Tagging Your Amazon SQS Queues in the Amazon SQS Developer Guide.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a username in the Amazon SQS Developer Guide.
An identifier associated with a message movement task.
" + } + } + }, + "CancelMessageMoveTaskResult":{ + "type":"structure", + "members":{ + "ApproximateNumberOfMessagesMoved":{ + "shape":"Long", + "documentation":"The approximate number of messages already moved to the destination queue.
" + } + } + }, "ChangeMessageVisibilityBatchRequest":{ "type":"structure", "required":[ @@ -433,7 +503,7 @@ }, "Entries":{ "shape":"ChangeMessageVisibilityBatchRequestEntryList", - "documentation":"A list of receipt handles of the messages for which the visibility timeout must be changed.
" + "documentation":"Lists the receipt handles of the messages for which the visibility timeout must be changed.
" } }, "documentation":"" @@ -458,7 +528,7 @@ "documentation":"The new value (in seconds) for the message's visibility timeout.
" } }, - "documentation":"Encloses a receipt handle and an entry id for each message in ChangeMessageVisibilityBatch.
All of the following list parameters must be prefixed with ChangeMessageVisibilityBatchRequestEntry.n, where n is an integer value starting with 1. For example, a parameter list for this action might look like this:
&ChangeMessageVisibilityBatchRequestEntry.1.Id=change_visibility_msg_2
&ChangeMessageVisibilityBatchRequestEntry.1.ReceiptHandle=your_receipt_handle
&ChangeMessageVisibilityBatchRequestEntry.1.VisibilityTimeout=45
Encloses a receipt handle and an entry ID for each message in ChangeMessageVisibilityBatch.
The receipt handle associated with the message whose visibility timeout is changed. This parameter is returned by the ReceiveMessage action.
The receipt handle associated with the message, whose visibility timeout is changed. This parameter is returned by the ReceiveMessage action.
A map of attributes with their corresponding values.
The following lists the names, descriptions, and values of the special request parameters that the CreateQueue action uses:
DelaySeconds – The length of time, in seconds, for which the delivery of all messages in the queue is delayed. Valid values: An integer from 0 to 900 seconds (15 minutes). Default: 0.
MaximumMessageSize – The limit of how many bytes a message can contain before Amazon SQS rejects it. Valid values: An integer from 1,024 bytes (1 KiB) to 262,144 bytes (256 KiB). Default: 262,144 (256 KiB).
MessageRetentionPeriod – The length of time, in seconds, for which Amazon SQS retains a message. Valid values: An integer from 60 seconds (1 minute) to 1,209,600 seconds (14 days). Default: 345,600 (4 days).
Policy – The queue's policy. A valid Amazon Web Services policy. For more information about policy structure, see Overview of Amazon Web Services IAM Policies in the Amazon IAM User Guide.
ReceiveMessageWaitTimeSeconds – The length of time, in seconds, for which a ReceiveMessage action waits for a message to arrive. Valid values: An integer from 0 to 20 (seconds). Default: 0.
RedrivePolicy – The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. For more information about the redrive policy and dead-letter queues, see Using Amazon SQS Dead-Letter Queues in the Amazon SQS Developer Guide.
deadLetterTargetArn – The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount – The number of times a message is delivered to the source queue before being moved to the dead-letter queue. When the ReceiveCount for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves the message to the dead-letter-queue.
The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
VisibilityTimeout – The visibility timeout for the queue, in seconds. Valid values: An integer from 0 to 43,200 (12 hours). Default: 30. For more information about the visibility timeout, see Visibility Timeout in the Amazon SQS Developer Guide.
The following attributes apply only to server-side-encryption:
KmsMasterKeyId – The ID of an Amazon Web Services managed customer master key (CMK) for Amazon SQS or a custom CMK. For more information, see Key Terms. While the alias of the Amazon Web Services managed CMK for Amazon SQS is always alias/aws/sqs, the alias of a custom CMK can, for example, be alias/MyAlias . For more examples, see KeyId in the Key Management Service API Reference.
KmsDataKeyReusePeriodSeconds – The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes). A shorter time period provides better security but results in more calls to KMS which might incur charges after Free Tier. For more information, see How Does the Data Key Reuse Period Work?.
SqsManagedSseEnabled – Enables server-side queue encryption using SQS owned encryption keys. Only one server-side encryption option is supported per queue (e.g. SSE-KMS or SSE-SQS).
The following attributes apply only to FIFO (first-in-first-out) queues:
FifoQueue – Designates a queue as FIFO. Valid values are true and false. If you don't specify the FifoQueue attribute, Amazon SQS creates a standard queue. You can provide this attribute only during queue creation. You can't change it for an existing queue. When you set this attribute, you must also provide the MessageGroupId for your messages explicitly.
For more information, see FIFO queue logic in the Amazon SQS Developer Guide.
ContentBasedDeduplication – Enables content-based deduplication. Valid values are true and false. For more information, see Exactly-once processing in the Amazon SQS Developer Guide. Note the following:
Every message must have a unique MessageDeduplicationId.
You may provide a MessageDeduplicationId explicitly.
If you aren't able to provide a MessageDeduplicationId and you enable ContentBasedDeduplication for your queue, Amazon SQS uses a SHA-256 hash to generate the MessageDeduplicationId using the body of the message (but not the attributes of the message).
If you don't provide a MessageDeduplicationId and the queue doesn't have ContentBasedDeduplication set, the action fails with an error.
If the queue has ContentBasedDeduplication set, your MessageDeduplicationId overrides the generated one.
When ContentBasedDeduplication is in effect, messages with identical content sent within the deduplication interval are treated as duplicates and only one copy of the message is delivered.
If you send one message with ContentBasedDeduplication enabled and then another message with a MessageDeduplicationId that is the same as the one generated for the first MessageDeduplicationId, the two messages are treated as duplicates and only one copy of the message is delivered.
The following attributes apply only to high throughput for FIFO queues:
DeduplicationScope – Specifies whether message deduplication occurs at the message group or queue level. Valid values are messageGroup and queue.
FifoThroughputLimit – Specifies whether the FIFO queue throughput quota applies to the entire queue or per message group. Valid values are perQueue and perMessageGroupId. The perMessageGroupId value is allowed only when the value for DeduplicationScope is messageGroup.
To enable high throughput for FIFO queues, do the following:
Set DeduplicationScope to messageGroup.
Set FifoThroughputLimit to perMessageGroupId.
If you set these attributes to anything other than the values shown for enabling high throughput, normal throughput is in effect and deduplication occurs as specified.
For information on throughput quotas, see Quotas related to messages in the Amazon SQS Developer Guide.
", + "documentation":"A map of attributes with their corresponding values.
The following lists the names, descriptions, and values of the special request parameters that the CreateQueue action uses:
DelaySeconds – The length of time, in seconds, for which the delivery of all messages in the queue is delayed. Valid values: An integer from 0 to 900 seconds (15 minutes). Default: 0.
MaximumMessageSize – The limit of how many bytes a message can contain before Amazon SQS rejects it. Valid values: An integer from 1,024 bytes (1 KiB) to 262,144 bytes (256 KiB). Default: 262,144 (256 KiB).
MessageRetentionPeriod – The length of time, in seconds, for which Amazon SQS retains a message. Valid values: An integer from 60 seconds (1 minute) to 1,209,600 seconds (14 days). Default: 345,600 (4 days). When you change a queue's attributes, the change can take up to 60 seconds for most of the attributes to propagate throughout the Amazon SQS system. Changes made to the MessageRetentionPeriod attribute can take up to 15 minutes and will impact existing messages in the queue, potentially causing them to be expired and deleted if the MessageRetentionPeriod is reduced below the age of existing messages.
Policy – The queue's policy. A valid Amazon Web Services policy. For more information about policy structure, see Overview of Amazon Web Services IAM Policies in the IAM User Guide.
ReceiveMessageWaitTimeSeconds – The length of time, in seconds, for which a ReceiveMessage action waits for a message to arrive. Valid values: An integer from 0 to 20 (seconds). Default: 0.
VisibilityTimeout – The visibility timeout for the queue, in seconds. Valid values: An integer from 0 to 43,200 (12 hours). Default: 30. For more information about the visibility timeout, see Visibility Timeout in the Amazon SQS Developer Guide.
The following attributes apply only to dead-letter queues:
RedrivePolicy – The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. The parameters are as follows:
deadLetterTargetArn – The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount – The number of times a message is delivered to the source queue before being moved to the dead-letter queue. Default: 10. When the ReceiveCount for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves the message to the dead-letter queue.
RedriveAllowPolicy – The string that includes the parameters for the permissions for the dead-letter queue redrive permission and which source queues can specify dead-letter queues as a JSON object. The parameters are as follows:
redrivePermission – The permission type that defines which source queues can specify the current queue as the dead-letter queue. Valid values are:
allowAll – (Default) Any source queues in this Amazon Web Services account in the same Region can specify this queue as the dead-letter queue.
denyAll – No source queues can specify this queue as the dead-letter queue.
byQueue – Only queues specified by the sourceQueueArns parameter can specify this queue as the dead-letter queue.
sourceQueueArns – The Amazon Resource Names (ARN)s of the source queues that can specify this queue as the dead-letter queue and redrive messages. You can specify this parameter only when the redrivePermission parameter is set to byQueue. You can specify up to 10 source queue ARNs. To allow more than 10 source queues to specify dead-letter queues, set the redrivePermission parameter to allowAll.
The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
The following attributes apply only to server-side-encryption:
KmsMasterKeyId – The ID of an Amazon Web Services managed customer master key (CMK) for Amazon SQS or a custom CMK. For more information, see Key Terms. While the alias of the Amazon Web Services managed CMK for Amazon SQS is always alias/aws/sqs, the alias of a custom CMK can, for example, be alias/MyAlias . For more examples, see KeyId in the Key Management Service API Reference.
KmsDataKeyReusePeriodSeconds – The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes). A shorter time period provides better security but results in more calls to KMS which might incur charges after Free Tier. For more information, see How Does the Data Key Reuse Period Work?
SqsManagedSseEnabled – Enables server-side queue encryption using SQS owned encryption keys. Only one server-side encryption option is supported per queue (for example, SSE-KMS or SSE-SQS).
The following attributes apply only to FIFO (first-in-first-out) queues:
FifoQueue – Designates a queue as FIFO. Valid values are true and false. If you don't specify the FifoQueue attribute, Amazon SQS creates a standard queue. You can provide this attribute only during queue creation. You can't change it for an existing queue. When you set this attribute, you must also provide the MessageGroupId for your messages explicitly.
For more information, see FIFO queue logic in the Amazon SQS Developer Guide.
ContentBasedDeduplication – Enables content-based deduplication. Valid values are true and false. For more information, see Exactly-once processing in the Amazon SQS Developer Guide. Note the following:
Every message must have a unique MessageDeduplicationId.
You may provide a MessageDeduplicationId explicitly.
If you aren't able to provide a MessageDeduplicationId and you enable ContentBasedDeduplication for your queue, Amazon SQS uses a SHA-256 hash to generate the MessageDeduplicationId using the body of the message (but not the attributes of the message).
If you don't provide a MessageDeduplicationId and the queue doesn't have ContentBasedDeduplication set, the action fails with an error.
If the queue has ContentBasedDeduplication set, your MessageDeduplicationId overrides the generated one.
When ContentBasedDeduplication is in effect, messages with identical content sent within the deduplication interval are treated as duplicates and only one copy of the message is delivered.
If you send one message with ContentBasedDeduplication enabled and then another message with a MessageDeduplicationId that is the same as the one generated for the first MessageDeduplicationId, the two messages are treated as duplicates and only one copy of the message is delivered.
The following attributes apply only to high throughput for FIFO queues:
DeduplicationScope – Specifies whether message deduplication occurs at the message group or queue level. Valid values are messageGroup and queue.
FifoThroughputLimit – Specifies whether the FIFO queue throughput quota applies to the entire queue or per message group. Valid values are perQueue and perMessageGroupId. The perMessageGroupId value is allowed only when the value for DeduplicationScope is messageGroup.
To enable high throughput for FIFO queues, do the following:
Set DeduplicationScope to messageGroup.
Set FifoThroughputLimit to perMessageGroupId.
If you set these attributes to anything other than the values shown for enabling high throughput, normal throughput is in effect and deduplication occurs as specified.
For information on throughput quotas, see Quotas related to messages in the Amazon SQS Developer Guide.
", "locationName":"Attribute" }, "tags":{ "shape":"TagMap", - "documentation":"Add cost allocation tags to the specified Amazon SQS queue. For an overview, see Tagging Your Amazon SQS Queues in the Amazon SQS Developer Guide.
When you use queue tags, keep the following guidelines in mind:
Adding more than 50 tags to a queue isn't recommended.
Tags don't have any semantic meaning. Amazon SQS interprets tags as character strings.
Tags are case-sensitive.
A new tag with a key identical to that of an existing tag overwrites the existing tag.
For a full list of tag restrictions, see Quotas related to queues in the Amazon SQS Developer Guide.
To be able to tag a queue on creation, you must have the sqs:CreateQueue and sqs:TagQueue permissions.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a user name in the Amazon SQS Developer Guide.
Add cost allocation tags to the specified Amazon SQS queue. For an overview, see Tagging Your Amazon SQS Queues in the Amazon SQS Developer Guide.
When you use queue tags, keep the following guidelines in mind:
Adding more than 50 tags to a queue isn't recommended.
Tags don't have any semantic meaning. Amazon SQS interprets tags as character strings.
Tags are case-sensitive.
A new tag with a key identical to that of an existing tag overwrites the existing tag.
For a full list of tag restrictions, see Quotas related to queues in the Amazon SQS Developer Guide.
To be able to tag a queue on creation, you must have the sqs:CreateQueue and sqs:TagQueue permissions.
Cross-account permissions don't apply to this action. For more information, see Grant cross-account permissions to a role and a username in the Amazon SQS Developer Guide.
A list of receipt handles for the messages to be deleted.
" + "documentation":"Lists the receipt handles for the messages to be deleted.
" } }, "documentation":"" @@ -585,7 +655,7 @@ "members":{ "Id":{ "shape":"String", - "documentation":"An identifier for this particular receipt handle. This is used to communicate the result.
The Ids of a batch request need to be unique within a request.
This identifier can have up to 80 characters. The following characters are accepted: alphanumeric characters, hyphens(-), and underscores (_).
The identifier for this particular receipt handle. This is used to communicate the result.
The Ids of a batch request need to be unique within a request.
This identifier can have up to 80 characters. The following characters are accepted: alphanumeric characters, hyphens(-), and underscores (_).
A list of attributes for which to retrieve information.
The AttributeName.N parameter is optional, but if you don't specify values for this parameter, the request returns empty results.
In the future, new attributes might be added. If you write code that calls this action, we recommend that you structure your code so that it can handle new attributes gracefully.
The following attributes are supported:
The ApproximateNumberOfMessagesDelayed, ApproximateNumberOfMessagesNotVisible, and ApproximateNumberOfMessagesVisible metrics may not achieve consistency until at least 1 minute after the producers stop sending messages. This period is required for the queue metadata to reach eventual consistency.
All – Returns all values.
ApproximateNumberOfMessages – Returns the approximate number of messages available for retrieval from the queue.
ApproximateNumberOfMessagesDelayed – Returns the approximate number of messages in the queue that are delayed and not available for reading immediately. This can happen when the queue is configured as a delay queue or when a message has been sent with a delay parameter.
ApproximateNumberOfMessagesNotVisible – Returns the approximate number of messages that are in flight. Messages are considered to be in flight if they have been sent to a client but have not yet been deleted or have not yet reached the end of their visibility window.
CreatedTimestamp – Returns the time when the queue was created in seconds (epoch time).
DelaySeconds – Returns the default delay on the queue in seconds.
LastModifiedTimestamp – Returns the time when the queue was last changed in seconds (epoch time).
MaximumMessageSize – Returns the limit of how many bytes a message can contain before Amazon SQS rejects it.
MessageRetentionPeriod – Returns the length of time, in seconds, for which Amazon SQS retains a message.
Policy – Returns the policy of the queue.
QueueArn – Returns the Amazon resource name (ARN) of the queue.
ReceiveMessageWaitTimeSeconds – Returns the length of time, in seconds, for which the ReceiveMessage action waits for a message to arrive.
RedrivePolicy – The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. For more information about the redrive policy and dead-letter queues, see Using Amazon SQS Dead-Letter Queues in the Amazon SQS Developer Guide.
deadLetterTargetArn – The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount – The number of times a message is delivered to the source queue before being moved to the dead-letter queue. When the ReceiveCount for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves the message to the dead-letter-queue.
VisibilityTimeout – Returns the visibility timeout for the queue. For more information about the visibility timeout, see Visibility Timeout in the Amazon SQS Developer Guide.
The following attributes apply only to server-side-encryption:
KmsMasterKeyId – Returns the ID of an Amazon Web Services managed customer master key (CMK) for Amazon SQS or a custom CMK. For more information, see Key Terms.
KmsDataKeyReusePeriodSeconds – Returns the length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling KMS again. For more information, see How Does the Data Key Reuse Period Work?.
SqsManagedSseEnabled – Returns information about whether the queue is using SSE-SQS encryption using SQS owned encryption keys. Only one server-side encryption option is supported per queue (e.g. SSE-KMS or SSE-SQS).
The following attributes apply only to FIFO (first-in-first-out) queues:
FifoQueue – Returns information about whether the queue is FIFO. For more information, see FIFO queue logic in the Amazon SQS Developer Guide.
To determine whether a queue is FIFO, you can check whether QueueName ends with the .fifo suffix.
ContentBasedDeduplication – Returns whether content-based deduplication is enabled for the queue. For more information, see Exactly-once processing in the Amazon SQS Developer Guide.
The following attributes apply only to high throughput for FIFO queues:
DeduplicationScope – Specifies whether message deduplication occurs at the message group or queue level. Valid values are messageGroup and queue.
FifoThroughputLimit – Specifies whether the FIFO queue throughput quota applies to the entire queue or per message group. Valid values are perQueue and perMessageGroupId. The perMessageGroupId value is allowed only when the value for DeduplicationScope is messageGroup.
To enable high throughput for FIFO queues, do the following:
Set DeduplicationScope to messageGroup.
Set FifoThroughputLimit to perMessageGroupId.
If you set these attributes to anything other than the values shown for enabling high throughput, normal throughput is in effect and deduplication occurs as specified.
For information on throughput quotas, see Quotas related to messages in the Amazon SQS Developer Guide.
" + "documentation":"A list of attributes for which to retrieve information.
The AttributeNames parameter is optional, but if you don't specify values for this parameter, the request returns empty results.
In the future, new attributes might be added. If you write code that calls this action, we recommend that you structure your code so that it can handle new attributes gracefully.
The following attributes are supported:
The ApproximateNumberOfMessagesDelayed, ApproximateNumberOfMessagesNotVisible, and ApproximateNumberOfMessages metrics may not achieve consistency until at least 1 minute after the producers stop sending messages. This period is required for the queue metadata to reach eventual consistency.
All – Returns all values.
ApproximateNumberOfMessages – Returns the approximate number of messages available for retrieval from the queue.
ApproximateNumberOfMessagesDelayed – Returns the approximate number of messages in the queue that are delayed and not available for reading immediately. This can happen when the queue is configured as a delay queue or when a message has been sent with a delay parameter.
ApproximateNumberOfMessagesNotVisible – Returns the approximate number of messages that are in flight. Messages are considered to be in flight if they have been sent to a client but have not yet been deleted or have not yet reached the end of their visibility window.
CreatedTimestamp – Returns the time when the queue was created in seconds (epoch time).
DelaySeconds – Returns the default delay on the queue in seconds.
LastModifiedTimestamp – Returns the time when the queue was last changed in seconds (epoch time).
MaximumMessageSize – Returns the limit of how many bytes a message can contain before Amazon SQS rejects it.
MessageRetentionPeriod – Returns the length of time, in seconds, for which Amazon SQS retains a message. When you change a queue's attributes, the change can take up to 60 seconds for most of the attributes to propagate throughout the Amazon SQS system. Changes made to the MessageRetentionPeriod attribute can take up to 15 minutes and will impact existing messages in the queue, potentially causing them to be expired and deleted if the MessageRetentionPeriod is reduced below the age of existing messages.
Policy – Returns the policy of the queue.
QueueArn – Returns the Amazon resource name (ARN) of the queue.
ReceiveMessageWaitTimeSeconds – Returns the length of time, in seconds, for which the ReceiveMessage action waits for a message to arrive.
VisibilityTimeout – Returns the visibility timeout for the queue. For more information about the visibility timeout, see Visibility Timeout in the Amazon SQS Developer Guide.
The following attributes apply only to dead-letter queues:
RedrivePolicy – The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. The parameters are as follows:
deadLetterTargetArn – The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount – The number of times a message is delivered to the source queue before being moved to the dead-letter queue. Default: 10. When the ReceiveCount for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves the message to the dead-letter queue.
RedriveAllowPolicy – The string that includes the parameters for the permissions for the dead-letter queue redrive permission and which source queues can specify dead-letter queues as a JSON object. The parameters are as follows:
redrivePermission – The permission type that defines which source queues can specify the current queue as the dead-letter queue. Valid values are:
allowAll – (Default) Any source queues in this Amazon Web Services account in the same Region can specify this queue as the dead-letter queue.
denyAll – No source queues can specify this queue as the dead-letter queue.
byQueue – Only queues specified by the sourceQueueArns parameter can specify this queue as the dead-letter queue.
sourceQueueArns – The Amazon Resource Names (ARN)s of the source queues that can specify this queue as the dead-letter queue and redrive messages. You can specify this parameter only when the redrivePermission parameter is set to byQueue. You can specify up to 10 source queue ARNs. To allow more than 10 source queues to specify dead-letter queues, set the redrivePermission parameter to allowAll.
The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
The following attributes apply only to server-side-encryption:
KmsMasterKeyId – Returns the ID of an Amazon Web Services managed customer master key (CMK) for Amazon SQS or a custom CMK. For more information, see Key Terms.
KmsDataKeyReusePeriodSeconds – Returns the length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling KMS again. For more information, see How Does the Data Key Reuse Period Work?
SqsManagedSseEnabled – Returns information about whether the queue is using SSE-SQS encryption using SQS owned encryption keys. Only one server-side encryption option is supported per queue (for example, SSE-KMS or SSE-SQS).
The following attributes apply only to FIFO (first-in-first-out) queues:
FifoQueue – Returns information about whether the queue is FIFO. For more information, see FIFO queue logic in the Amazon SQS Developer Guide.
To determine whether a queue is FIFO, you can check whether QueueName ends with the .fifo suffix.
ContentBasedDeduplication – Returns whether content-based deduplication is enabled for the queue. For more information, see Exactly-once processing in the Amazon SQS Developer Guide.
The following attributes apply only to high throughput for FIFO queues:
DeduplicationScope – Specifies whether message deduplication occurs at the message group or queue level. Valid values are messageGroup and queue.
FifoThroughputLimit – Specifies whether the FIFO queue throughput quota applies to the entire queue or per message group. Valid values are perQueue and perMessageGroupId. The perMessageGroupId value is allowed only when the value for DeduplicationScope is messageGroup.
To enable high throughput for FIFO queues, do the following:
Set DeduplicationScope to messageGroup.
Set FifoThroughputLimit to perMessageGroupId.
If you set these attributes to anything other than the values shown for enabling high throughput, normal throughput is in effect and deduplication occurs as specified.
For information on throughput quotas, see Quotas related to messages in the Amazon SQS Developer Guide.
" } }, "documentation":"" @@ -799,6 +869,79 @@ }, "documentation":"A list of your dead letter source queues.
" }, + "ListMessageMoveTasksRequest":{ + "type":"structure", + "required":["SourceArn"], + "members":{ + "SourceArn":{ + "shape":"String", + "documentation":"The ARN of the queue whose message movement tasks are to be listed.
" + }, + "MaxResults":{ + "shape":"Integer", + "documentation":"The maximum number of results to include in the response. The default is 1, which provides the most recent message movement task. The upper limit is 10.
" + } + } + }, + "ListMessageMoveTasksResult":{ + "type":"structure", + "members":{ + "Results":{ + "shape":"ListMessageMoveTasksResultEntryList", + "documentation":"A list of message movement tasks and their attributes.
" + } + } + }, + "ListMessageMoveTasksResultEntry":{ + "type":"structure", + "members":{ + "TaskHandle":{ + "shape":"String", + "documentation":"An identifier associated with a message movement task. When this field is returned in the response of the ListMessageMoveTasks action, it is only populated for tasks that are in RUNNING status.
The status of the message movement task. Possible values are: RUNNING, COMPLETED, CANCELLING, CANCELLED, and FAILED.
" + }, + "SourceArn":{ + "shape":"String", + "documentation":"The ARN of the queue that contains the messages to be moved to another queue.
" + }, + "DestinationArn":{ + "shape":"String", + "documentation":"The ARN of the destination queue if it has been specified in the StartMessageMoveTask request. If a DestinationArn has not been specified in the StartMessageMoveTask request, this field value will be NULL.
The number of messages to be moved per second (the message movement rate), if it has been specified in the StartMessageMoveTask request. If a MaxNumberOfMessagesPerSecond has not been specified in the StartMessageMoveTask request, this field value will be NULL.
The approximate number of messages already moved to the destination queue.
" + }, + "ApproximateNumberOfMessagesToMove":{ + "shape":"Long", + "documentation":"The number of messages to be moved from the source queue. This number is obtained at the time of starting the message movement task.
" + }, + "FailureReason":{ + "shape":"String", + "documentation":"The task failure reason (only included if the task status is FAILED).
" + }, + "StartedTimestamp":{ + "shape":"Long", + "documentation":"The timestamp of starting the message movement task.
" + } + }, + "documentation":"Contains the details of a message movement task.
" + }, + "ListMessageMoveTasksResultEntryList":{ + "type":"list", + "member":{ + "shape":"ListMessageMoveTasksResultEntry", + "locationName":"ListMessageMoveTasksResultEntry" + }, + "flattened":true + }, "ListQueueTagsRequest":{ "type":"structure", "required":["QueueUrl"], @@ -842,7 +985,7 @@ "members":{ "QueueUrls":{ "shape":"QueueUrlList", - "documentation":"A list of queue URLs, up to 1,000 entries, or the value of MaxResults that you sent in the request.
" + "documentation":"A list of queue URLs, up to 1,000 entries, or the value of MaxResults that you sent in the request.
A list of your queues.
" }, + "Long":{"type":"long"}, "Message":{ "type":"structure", "members":{ @@ -925,7 +1069,7 @@ "documentation":"Amazon SQS supports the following logical data types: String, Number, and Binary. For the Number data type, you must use StringValue.
You can also append custom labels. For more information, see Amazon SQS Message Attributes in the Amazon SQS Developer Guide.
" } }, - "documentation":"The user-specified message attribute value. For string data types, the Value attribute has the same restrictions on the content as the message body. For more information, see SendMessage.
Name, type, value and the message body must not be empty or null. All parts of the message attribute, including Name, Type, and Value, are part of the message size restriction (256 KB or 262,144 bytes).
The user-specified message attribute value. For string data types, the Value attribute has the same restrictions on the content as the message body. For more information, see SendMessage.
Name, type, value and the message body must not be empty or null. All parts of the message attribute, including Name, Type, and Value, are part of the message size restriction (256 KiB or 262,144 bytes).
The specified action violates a limit. For example, ReceiveMessage returns this error if the maximum number of inflight messages is reached and AddPermission returns this error if the maximum number of permissions for the queue is reached.
The specified action violates a limit. For example, ReceiveMessage returns this error if the maximum number of in flight messages is reached and AddPermission returns this error if the maximum number of permissions for the queue is reached.
A list of attributes that need to be returned along with each message. These attributes include:
All – Returns all values.
ApproximateFirstReceiveTimestamp – Returns the time the message was first received from the queue (epoch time in milliseconds).
ApproximateReceiveCount – Returns the number of times a message has been received across all queues but not deleted.
AWSTraceHeader – Returns the X-Ray trace header string.
SenderId
For an IAM user, returns the IAM user ID, for example ABCDEFGHI1JKLMNOPQ23R.
For an IAM role, returns the IAM role ID, for example ABCDE1F2GH3I4JK5LMNOP:i-a123b456.
SentTimestamp – Returns the time the message was sent to the queue (epoch time in milliseconds).
SqsManagedSseEnabled – Enables server-side queue encryption using SQS owned encryption keys. Only one server-side encryption option is supported per queue (e.g. SSE-KMS or SSE-SQS).
MessageDeduplicationId – Returns the value provided by the producer that calls the SendMessage action.
MessageGroupId – Returns the value provided by the producer that calls the SendMessage action. Messages with the same MessageGroupId are returned in sequence.
SequenceNumber – Returns the value provided by Amazon SQS.
A list of attributes that need to be returned along with each message. These attributes include:
All – Returns all values.
ApproximateFirstReceiveTimestamp – Returns the time the message was first received from the queue (epoch time in milliseconds).
ApproximateReceiveCount – Returns the number of times a message has been received across all queues but not deleted.
AWSTraceHeader – Returns the X-Ray trace header string.
SenderId
For a user, returns the user ID, for example ABCDEFGHI1JKLMNOPQ23R.
For an IAM role, returns the IAM role ID, for example ABCDE1F2GH3I4JK5LMNOP:i-a123b456.
SentTimestamp – Returns the time the message was sent to the queue (epoch time in milliseconds).
SqsManagedSseEnabled – Enables server-side queue encryption using SQS owned encryption keys. Only one server-side encryption option is supported per queue (for example, SSE-KMS or SSE-SQS).
MessageDeduplicationId – Returns the value provided by the producer that calls the SendMessage action.
MessageGroupId – Returns the value provided by the producer that calls the SendMessage action. Messages with the same MessageGroupId are returned in sequence.
SequenceNumber – Returns the value provided by Amazon SQS.
One or more specified resources don't exist.
", + "error":{ + "code":"ResourceNotFoundException", + "httpStatusCode":404, + "senderFault":true + }, + "exception":true + }, "SendMessageBatchRequest":{ "type":"structure", "required":[ @@ -1361,7 +1518,7 @@ }, "MessageBody":{ "shape":"String", - "documentation":"The message to send. The minimum size is one character. The maximum size is 256 KB.
A message can include only XML, JSON, and unformatted text. The following Unicode characters are allowed:
#x9 | #xA | #xD | #x20 to #xD7FF | #xE000 to #xFFFD | #x10000 to #x10FFFF
Any characters not included in this list will be rejected. For more information, see the W3C specification for characters.
The message to send. The minimum size is one character. The maximum size is 256 KiB.
A message can include only XML, JSON, and unformatted text. The following Unicode characters are allowed:
#x9 | #xA | #xD | #x20 to #xD7FF | #xE000 to #xFFFD | #x10000 to #x10FFFF
Any characters not included in this list will be rejected. For more information, see the W3C specification for characters.
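The character and size rules above can be checked locally before calling SendMessage. The following is an illustrative sketch only (the helper names are ours, not part of the SQS API); it encodes the documented code-point ranges and the 256 KiB limit:

```python
# Illustrative validation of the documented SendMessage body rules:
# allowed Unicode code points and the 256 KiB (262,144-byte) size limit.
MAX_BODY_BYTES = 262_144  # 256 KiB

def code_point_allowed(cp: int) -> bool:
    # #x9 | #xA | #xD | #x20-#xD7FF | #xE000-#xFFFD | #x10000-#x10FFFF
    return (cp in (0x9, 0xA, 0xD)
            or 0x20 <= cp <= 0xD7FF
            or 0xE000 <= cp <= 0xFFFD
            or 0x10000 <= cp <= 0x10FFFF)

def valid_message_body(body: str) -> bool:
    # Minimum size is one character; maximum is 256 KiB of UTF-8.
    if not 1 <= len(body.encode("utf-8")) <= MAX_BODY_BYTES:
        return False
    return all(code_point_allowed(ord(ch)) for ch in body)
```

A body failing either check would be rejected by the service with an error rather than silently truncated.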
A map of attributes to set.
The following lists the names, descriptions, and values of the special request parameters that the SetQueueAttributes action uses:
DelaySeconds – The length of time, in seconds, for which the delivery of all messages in the queue is delayed. Valid values: An integer from 0 to 900 (15 minutes). Default: 0.
MaximumMessageSize – The limit of how many bytes a message can contain before Amazon SQS rejects it. Valid values: An integer from 1,024 bytes (1 KiB) up to 262,144 bytes (256 KiB). Default: 262,144 (256 KiB).
MessageRetentionPeriod – The length of time, in seconds, for which Amazon SQS retains a message. Valid values: An integer representing seconds, from 60 (1 minute) to 1,209,600 (14 days). Default: 345,600 (4 days).
Policy – The queue's policy. A valid Amazon Web Services policy. For more information about policy structure, see Overview of Amazon Web Services IAM Policies in the Identity and Access Management User Guide.
ReceiveMessageWaitTimeSeconds – The length of time, in seconds, for which a ReceiveMessage action waits for a message to arrive. Valid values: An integer from 0 to 20 (seconds). Default: 0.
RedrivePolicy – The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. For more information about the redrive policy and dead-letter queues, see Using Amazon SQS Dead-Letter Queues in the Amazon SQS Developer Guide.
deadLetterTargetArn – The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount – The number of times a message is delivered to the source queue before being moved to the dead-letter queue. When the ReceiveCount for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves the message to the dead-letter queue.
The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
VisibilityTimeout – The visibility timeout for the queue, in seconds. Valid values: An integer from 0 to 43,200 (12 hours). Default: 30. For more information about the visibility timeout, see Visibility Timeout in the Amazon SQS Developer Guide.
The following attributes apply only to server-side encryption:
KmsMasterKeyId – The ID of an Amazon Web Services managed customer master key (CMK) for Amazon SQS or a custom CMK. For more information, see Key Terms. While the alias of the AWS-managed CMK for Amazon SQS is always alias/aws/sqs, the alias of a custom CMK can, for example, be alias/MyAlias. For more examples, see KeyId in the Key Management Service API Reference.
KmsDataKeyReusePeriodSeconds – The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes). A shorter time period provides better security but results in more calls to KMS which might incur charges after Free Tier. For more information, see How Does the Data Key Reuse Period Work?.
SqsManagedSseEnabled – Enables server-side queue encryption using SQS owned encryption keys. Only one server-side encryption option is supported per queue (for example, SSE-KMS or SSE-SQS).
The following attribute applies only to FIFO (first-in-first-out) queues:
ContentBasedDeduplication – Enables content-based deduplication. For more information, see Exactly-once processing in the Amazon SQS Developer Guide. Note the following:
Every message must have a unique MessageDeduplicationId.
You may provide a MessageDeduplicationId explicitly.
If you aren't able to provide a MessageDeduplicationId and you enable ContentBasedDeduplication for your queue, Amazon SQS uses a SHA-256 hash to generate the MessageDeduplicationId using the body of the message (but not the attributes of the message).
If you don't provide a MessageDeduplicationId and the queue doesn't have ContentBasedDeduplication set, the action fails with an error.
If the queue has ContentBasedDeduplication set, your MessageDeduplicationId overrides the generated one.
When ContentBasedDeduplication is in effect, messages with identical content sent within the deduplication interval are treated as duplicates and only one copy of the message is delivered.
If you send one message with ContentBasedDeduplication enabled and then another message with a MessageDeduplicationId that is the same as the one generated for the first message, the two messages are treated as duplicates and only one copy of the message is delivered.
The following attributes apply only to high throughput for FIFO queues:
DeduplicationScope – Specifies whether message deduplication occurs at the message group or queue level. Valid values are messageGroup and queue.
FifoThroughputLimit – Specifies whether the FIFO queue throughput quota applies to the entire queue or per message group. Valid values are perQueue and perMessageGroupId. The perMessageGroupId value is allowed only when the value for DeduplicationScope is messageGroup.
To enable high throughput for FIFO queues, do the following:
Set DeduplicationScope to messageGroup.
Set FifoThroughputLimit to perMessageGroupId.
If you set these attributes to anything other than the values shown for enabling high throughput, normal throughput is in effect and deduplication occurs as specified.
For information on throughput quotas, see Quotas related to messages in the Amazon SQS Developer Guide.
", + "documentation":"A map of attributes to set.
The following lists the names, descriptions, and values of the special request parameters that the SetQueueAttributes action uses:
DelaySeconds – The length of time, in seconds, for which the delivery of all messages in the queue is delayed. Valid values: An integer from 0 to 900 (15 minutes). Default: 0.
MaximumMessageSize – The limit of how many bytes a message can contain before Amazon SQS rejects it. Valid values: An integer from 1,024 bytes (1 KiB) up to 262,144 bytes (256 KiB). Default: 262,144 (256 KiB).
MessageRetentionPeriod – The length of time, in seconds, for which Amazon SQS retains a message. Valid values: An integer representing seconds, from 60 (1 minute) to 1,209,600 (14 days). Default: 345,600 (4 days). When you change a queue's attributes, the change can take up to 60 seconds for most of the attributes to propagate throughout the Amazon SQS system. Changes made to the MessageRetentionPeriod attribute can take up to 15 minutes and affect existing messages in the queue, potentially causing them to expire and be deleted if the MessageRetentionPeriod is reduced below the age of existing messages.
Policy – The queue's policy. A valid Amazon Web Services policy. For more information about policy structure, see Overview of Amazon Web Services IAM Policies in the Identity and Access Management User Guide.
ReceiveMessageWaitTimeSeconds – The length of time, in seconds, for which a ReceiveMessage action waits for a message to arrive. Valid values: An integer from 0 to 20 (seconds). Default: 0.
VisibilityTimeout – The visibility timeout for the queue, in seconds. Valid values: An integer from 0 to 43,200 (12 hours). Default: 30. For more information about the visibility timeout, see Visibility Timeout in the Amazon SQS Developer Guide.
The following attributes apply only to dead-letter queues:
RedrivePolicy – The string that includes the parameters for the dead-letter queue functionality of the source queue as a JSON object. The parameters are as follows:
deadLetterTargetArn – The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
maxReceiveCount – The number of times a message is delivered to the source queue before being moved to the dead-letter queue. Default: 10. When the ReceiveCount for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves the message to the dead-letter queue.
RedriveAllowPolicy – The string that includes the parameters for the permissions for the dead-letter queue redrive permission and which source queues can specify dead-letter queues as a JSON object. The parameters are as follows:
redrivePermission – The permission type that defines which source queues can specify the current queue as the dead-letter queue. Valid values are:
allowAll – (Default) Any source queues in this Amazon Web Services account in the same Region can specify this queue as the dead-letter queue.
denyAll – No source queues can specify this queue as the dead-letter queue.
byQueue – Only queues specified by the sourceQueueArns parameter can specify this queue as the dead-letter queue.
sourceQueueArns – The Amazon Resource Names (ARNs) of the source queues that can specify this queue as the dead-letter queue and redrive messages. You can specify this parameter only when the redrivePermission parameter is set to byQueue. You can specify up to 10 source queue ARNs. To allow more than 10 source queues to specify dead-letter queues, set the redrivePermission parameter to allowAll.
The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, the dead-letter queue of a standard queue must also be a standard queue.
The following attributes apply only to server-side encryption:
KmsMasterKeyId – The ID of an Amazon Web Services managed customer master key (CMK) for Amazon SQS or a custom CMK. For more information, see Key Terms. While the alias of the AWS-managed CMK for Amazon SQS is always alias/aws/sqs, the alias of a custom CMK can, for example, be alias/MyAlias. For more examples, see KeyId in the Key Management Service API Reference.
KmsDataKeyReusePeriodSeconds – The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes). A shorter time period provides better security but results in more calls to KMS which might incur charges after Free Tier. For more information, see How Does the Data Key Reuse Period Work?.
SqsManagedSseEnabled – Enables server-side queue encryption using SQS owned encryption keys. Only one server-side encryption option is supported per queue (for example, SSE-KMS or SSE-SQS).
The following attribute applies only to FIFO (first-in-first-out) queues:
ContentBasedDeduplication – Enables content-based deduplication. For more information, see Exactly-once processing in the Amazon SQS Developer Guide. Note the following:
Every message must have a unique MessageDeduplicationId.
You may provide a MessageDeduplicationId explicitly.
If you aren't able to provide a MessageDeduplicationId and you enable ContentBasedDeduplication for your queue, Amazon SQS uses a SHA-256 hash to generate the MessageDeduplicationId using the body of the message (but not the attributes of the message).
If you don't provide a MessageDeduplicationId and the queue doesn't have ContentBasedDeduplication set, the action fails with an error.
If the queue has ContentBasedDeduplication set, your MessageDeduplicationId overrides the generated one.
When ContentBasedDeduplication is in effect, messages with identical content sent within the deduplication interval are treated as duplicates and only one copy of the message is delivered.
If you send one message with ContentBasedDeduplication enabled and then another message with a MessageDeduplicationId that is the same as the one generated for the first message, the two messages are treated as duplicates and only one copy of the message is delivered.
The following attributes apply only to high throughput for FIFO queues:
DeduplicationScope – Specifies whether message deduplication occurs at the message group or queue level. Valid values are messageGroup and queue.
FifoThroughputLimit – Specifies whether the FIFO queue throughput quota applies to the entire queue or per message group. Valid values are perQueue and perMessageGroupId. The perMessageGroupId value is allowed only when the value for DeduplicationScope is messageGroup.
To enable high throughput for FIFO queues, do the following:
Set DeduplicationScope to messageGroup.
Set FifoThroughputLimit to perMessageGroupId.
If you set these attributes to anything other than the values shown for enabling high throughput, normal throughput is in effect and deduplication occurs as specified.
For information on throughput quotas, see Quotas related to messages in the Amazon SQS Developer Guide.
", "locationName":"Attribute" } }, "documentation":"" }, + "StartMessageMoveTaskRequest":{ + "type":"structure", + "required":["SourceArn"], + "members":{ + "SourceArn":{ + "shape":"String", + "documentation":"The ARN of the queue that contains the messages to be moved to another queue. Currently, only dead-letter queue (DLQ) ARNs are accepted.
" + }, + "DestinationArn":{ + "shape":"String", + "documentation":"The ARN of the queue that receives the moved messages. You can use this field to specify the destination queue where you would like to redrive messages. If this field is left blank, the messages will be redriven back to their respective original source queues.
" + }, + "MaxNumberOfMessagesPerSecond":{ + "shape":"Integer", + "documentation":"The number of messages to be moved per second (the message movement rate). You can use this field to define a fixed message movement rate. The maximum value for messages per second is 500. If this field is left blank, the system will optimize the rate based on the queue message backlog size, which may vary throughout the duration of the message movement task.
" + } + } + }, + "StartMessageMoveTaskResult":{ + "type":"structure", + "members":{ + "TaskHandle":{ + "shape":"String", + "documentation":"An identifier associated with a message movement task. You can use this identifier to cancel a specified message movement task using the CancelMessageMoveTask action.
Creates an alias for your Amazon Web Services account. For information about using an Amazon Web Services account alias, see Using an alias for your Amazon Web Services account ID in the IAM User Guide.
" + "documentation":"Creates an alias for your Amazon Web Services account. For information about using an Amazon Web Services account alias, see Creating, deleting, and listing an Amazon Web Services account alias in the Amazon Web Services Sign-In User Guide.
" }, "CreateGroup":{ "name":"CreateGroup", @@ -406,7 +407,8 @@ {"shape":"EntityTemporarilyUnmodifiableException"}, {"shape":"NoSuchEntityException"}, {"shape":"LimitExceededException"}, - {"shape":"ServiceFailureException"} + {"shape":"ServiceFailureException"}, + {"shape":"ConcurrentModificationException"} ], "documentation":"Deactivates the specified MFA device and removes it from association with the user name for which it was originally enabled.
For more information about creating and working with virtual MFA devices, see Enabling a virtual multi-factor authentication (MFA) device in the IAM User Guide.
" }, @@ -432,11 +434,12 @@ }, "input":{"shape":"DeleteAccountAliasRequest"}, "errors":[ + {"shape":"ConcurrentModificationException"}, {"shape":"NoSuchEntityException"}, {"shape":"LimitExceededException"}, {"shape":"ServiceFailureException"} ], - "documentation":"Deletes the specified Amazon Web Services account alias. For information about using an Amazon Web Services account alias, see Using an alias for your Amazon Web Services account ID in the IAM User Guide.
" + "documentation":"Deletes the specified Amazon Web Services account alias. For information about using an Amazon Web Services account alias, see Creating, deleting, and listing an Amazon Web Services account alias in the Amazon Web Services Sign-In User Guide.
" }, "DeleteAccountPasswordPolicy":{ "name":"DeleteAccountPasswordPolicy", @@ -684,6 +687,7 @@ "errors":[ {"shape":"NoSuchEntityException"}, {"shape":"LimitExceededException"}, + {"shape":"ConcurrentModificationException"}, {"shape":"ServiceFailureException"} ], "documentation":"Deletes a signing certificate associated with the specified IAM user.
If you do not specify a user name, IAM determines the user name implicitly based on the Amazon Web Services access key ID signing the request. This operation works for access keys under the Amazon Web Services account. Consequently, you can use this operation to manage Amazon Web Services account root user credentials even if the Amazon Web Services account has no associated IAM users.
" @@ -742,7 +746,8 @@ {"shape":"NoSuchEntityException"}, {"shape":"DeleteConflictException"}, {"shape":"LimitExceededException"}, - {"shape":"ServiceFailureException"} + {"shape":"ServiceFailureException"}, + {"shape":"ConcurrentModificationException"} ], "documentation":"Deletes a virtual MFA device.
You must deactivate a user's virtual MFA device before you can delete it. For information about deactivating MFA devices, see DeactivateMFADevice.
Enables the specified MFA device and associates it with the specified IAM user. When enabled, the MFA device is required for every subsequent login by the IAM user associated with the device.
" }, @@ -1313,7 +1319,7 @@ "errors":[ {"shape":"ServiceFailureException"} ], - "documentation":"Lists the account alias associated with the Amazon Web Services account (Note: you can have only one). For information about using an Amazon Web Services account alias, see Using an alias for your Amazon Web Services account ID in the IAM User Guide.
" + "documentation":"Lists the account alias associated with the Amazon Web Services account (Note: you can have only one). For information about using an Amazon Web Services account alias, see Creating, deleting, and listing an Amazon Web Services account alias in the Amazon Web Services Sign-In User Guide.
" }, "ListAttachedGroupPolicies":{ "name":"ListAttachedGroupPolicies", @@ -2002,7 +2008,8 @@ {"shape":"InvalidAuthenticationCodeException"}, {"shape":"NoSuchEntityException"}, {"shape":"LimitExceededException"}, - {"shape":"ServiceFailureException"} + {"shape":"ServiceFailureException"}, + {"shape":"ConcurrentModificationException"} ], "documentation":"Synchronizes the specified MFA device with its IAM resource object on the Amazon Web Services servers.
For more information about creating and working with virtual MFA devices, see Using a virtual MFA device in the IAM User Guide.
" }, @@ -2589,6 +2596,7 @@ {"shape":"InvalidCertificateException"}, {"shape":"DuplicateCertificateException"}, {"shape":"NoSuchEntityException"}, + {"shape":"ConcurrentModificationException"}, {"shape":"ServiceFailureException"} ], "documentation":"Uploads an X.509 signing certificate and associates it with the specified IAM user. Some Amazon Web Services services require you to use certificates to validate requests that are signed with a corresponding private key. When you upload the certificate, its default status is Active.
For information about when you would use an X.509 signing certificate, see Managing server certificates in IAM in the IAM User Guide.
If the UserName is not specified, the IAM user name is determined implicitly based on the Amazon Web Services access key ID used to sign the request. This operation works for access keys under the Amazon Web Services account. Consequently, you can use this operation to manage Amazon Web Services account root user credentials even if the Amazon Web Services account has no associated users.
Because the body of an X.509 certificate can be large, you should use POST rather than GET when calling UploadSigningCertificate. For information about setting up signatures and authorization through the API, see Signing Amazon Web Services API requests in the Amazon Web Services General Reference. For general information about using the Query API with IAM, see Making query requests in the IAM User Guide.
The base32 seed defined as specified in RFC3548. The Base32StringSeed is base64-encoded.
The base32 seed defined as specified in RFC3548. The Base32StringSeed is base32-encoded.
Gets metric data from the specified Amazon Connect instance.
GetMetricDataV2 offers more features than GetMetricData, the previous version of this API. It has new metrics, offers filtering at a metric level, and offers the ability to filter and group data by channels, queues, routing profiles, agents, and agent hierarchy levels. It can retrieve historical data for the last 35 days, in 24-hour intervals.
For a description of the historical metrics that are supported by GetMetricDataV2 and GetMetricData, see Historical metrics definitions in the Amazon Connect Administrator's Guide.
This API is not available in the Amazon Web Services GovCloud (US) Regions.
" + "documentation":"Gets metric data from the specified Amazon Connect instance.
GetMetricDataV2 offers more features than GetMetricData, the previous version of this API. It has new metrics, offers filtering at a metric level, and offers the ability to filter and group data by channels, queues, routing profiles, agents, and agent hierarchy levels. It can retrieve historical data for the last 35 days, in 24-hour intervals.
For a description of the historical metrics that are supported by GetMetricDataV2 and GetMetricData, see Historical metrics definitions in the Amazon Connect Administrator's Guide.
The date and time this contact was initiated, in UTC time. For INBOUND, this is when the contact arrived. For OUTBOUND, this is when the agent began dialing. For CALLBACK, this is when the callback contact was created. For TRANSFER and QUEUE_TRANSFER, this is when the transfer was initiated. For API, this is when the request arrived.
The date and time this contact was initiated, in UTC time. For INBOUND, this is when the contact arrived. For OUTBOUND, this is when the agent began dialing. For CALLBACK, this is when the callback contact was created. For TRANSFER and QUEUE_TRANSFER, this is when the transfer was initiated. For API, this is when the request arrived. For EXTERNAL_OUTBOUND, this is when the agent started dialing the external participant. For MONITOR, this is when the supervisor started listening to a contact.
Creates a log group with the specified name. You can create up to 20,000 log groups per account.
You must use the following guidelines when naming a log group:
Log group names must be unique within a Region for an Amazon Web Services account.
Log group names can be between 1 and 512 characters long.
Log group names consist of the following characters: a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), '/' (forward slash), '.' (period), and '#' (number sign).
When you create a log group, by default the log events in the log group do not expire. To set a retention policy so that events expire and are deleted after a specified time, use PutRetentionPolicy.
If you associate an KMS key with the log group, ingested data is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
If you attempt to associate a KMS key with the log group but the KMS key does not exist or the KMS key is disabled, you receive an InvalidParameterException error.
CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group. For more information, see Using Symmetric and Asymmetric Keys.
Creates a log group with the specified name. You can create up to 20,000 log groups per account.
You must use the following guidelines when naming a log group:
Log group names must be unique within a Region for an Amazon Web Services account.
Log group names can be between 1 and 512 characters long.
Log group names consist of the following characters: a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), '/' (forward slash), '.' (period), and '#' (number sign).
When you create a log group, by default the log events in the log group do not expire. To set a retention policy so that events expire and are deleted after a specified time, use PutRetentionPolicy.
If you associate an KMS key with the log group, ingested data is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
If you attempt to associate a KMS key with the log group but the KMS key does not exist or the KMS key is disabled, you receive an InvalidParameterException error.
CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group. For more information, see Using Symmetric and Asymmetric Keys.
Creates a log stream for the specified log group. A log stream is a sequence of log events that originate from a single source, such as an application instance or a resource that is being monitored.
There is no limit on the number of log streams that you can create for a log group. There is a limit of 50 TPS on CreateLogStream operations, after which transactions are throttled.
You must use the following guidelines when naming a log stream:
Log stream names must be unique within the log group.
Log stream names can be between 1 and 512 characters long.
Don't use ':' (colon) or '*' (asterisk) characters.
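The naming guidelines above for log groups and log streams translate directly into local checks; this sketch is ours, not part of the CloudWatch Logs API:

```python
import re

# 1-512 chars from the documented log group character set:
# a-z, A-Z, 0-9, '_', '-', '/', '.', '#'
LOG_GROUP_NAME = re.compile(r"^[A-Za-z0-9_\-/.#]{1,512}$")

def valid_log_group_name(name: str) -> bool:
    return LOG_GROUP_NAME.match(name) is not None

def valid_log_stream_name(name: str) -> bool:
    # 1-512 chars; ':' (colon) and '*' (asterisk) are not allowed.
    return 1 <= len(name) <= 512 and ":" not in name and "*" not in name
```

Names that fail these rules are rejected at CreateLogGroup / CreateLogStream time, so validating early avoids a round trip.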
Deletes a CloudWatch Logs account policy.
To use this operation, you must be signed on with the logs:DeleteDataProtectionPolicy and logs:DeleteAccountPolicy permissions.
Deletes the specified subscription filter.
" }, + "DescribeAccountPolicies":{ + "name":"DescribeAccountPolicies", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeAccountPoliciesRequest"}, + "output":{"shape":"DescribeAccountPoliciesResponse"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"OperationAbortedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ServiceUnavailableException"} + ], + "documentation":"Returns a list of all CloudWatch Logs account policies in the account.
" + }, "DescribeDestinations":{ "name":"DescribeDestinations", "http":{ @@ -383,7 +414,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream.
You must have the logs;FilterLogEvents permission to perform this operation.
You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both.
By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 log events) or all the events found within the specified time range. If the results include a token, that means there are more log events available. You can get additional results by specifying the token in a subsequent call. This operation can return empty results while there are more log events available through the token.
The returned log events are sorted by event timestamp, the timestamp when the event was ingested by CloudWatch Logs, and the ID of the PutLogEvents request.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.
" + "documentation":"Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream.
You must have the logs:FilterLogEvents permission to perform this operation.
You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both.
By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 log events) or all the events found within the specified time range. If the results include a token, that means there are more log events available. You can get additional results by specifying the token in a subsequent call. This operation can return empty results while there are more log events available through the token.
The returned log events are sorted by event timestamp, the timestamp when the event was ingested by CloudWatch Logs, and the ID of the PutLogEvents request.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.
" }, "GetDataProtectionPolicy":{ "name":"GetDataProtectionPolicy", @@ -494,6 +525,22 @@ "deprecated":true, "deprecatedMessage":"Please use the generic tagging API ListTagsForResource" }, + "PutAccountPolicy":{ + "name":"PutAccountPolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PutAccountPolicyRequest"}, + "output":{"shape":"PutAccountPolicyResponse"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"OperationAbortedException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"LimitExceededException"} + ], + "documentation":"Creates an account-level data protection policy that applies to all log groups in the account. A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level policy.
Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked.
If you use PutAccountPolicy to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.
For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.
To use the PutAccountPolicy operation, you must be signed on with the logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.
The PutAccountPolicy operation applies to all log groups in the account. You can also use PutDataProtectionPolicy to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.
Creates a data protection policy for the specified log group. A data protection policy can help safeguard sensitive data that's ingested by the log group by auditing and masking the sensitive log data.
Sensitive data is detected and masked when it is ingested into the log group. When you set a data protection policy, log events ingested into the log group before that time are not masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.
For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.
" + "documentation":"Creates a data protection policy for the specified log group. A data protection policy can help safeguard sensitive data that's ingested by the log group by auditing and masking the sensitive log data.
Sensitive data is detected and masked when it is ingested into the log group. When you set a data protection policy, log events ingested into the log group before that time are not masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.
For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.
The PutDataProtectionPolicy operation applies to only the specified log group. You can also use PutAccountPolicy to create an account-level data protection policy that applies to all log groups in the account, including both existing log groups and log groups that are created later. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.
Uploads a batch of log events to the specified log stream.
The sequence token is now ignored in PutLogEvents actions. PutLogEvents actions are always accepted and never return InvalidSequenceTokenException or DataAlreadyAcceptedException even if the sequence token is not valid. You can use parallel PutLogEvents actions on the same log stream.
The batch of events must satisfy the following constraints:
The maximum batch size is 1,048,576 bytes. This size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.
None of the log events in the batch can be more than 2 hours in the future.
None of the log events in the batch can be more than 14 days in the past. Also, none of the log events can be from earlier than the retention period of the log group.
The log events in the batch must be in chronological order by their timestamp. The timestamp is the time that the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. (In Amazon Web Services Tools for PowerShell and the Amazon Web Services SDK for .NET, the timestamp is specified in .NET format: yyyy-mm-ddThh:mm:ss. For example, 2017-09-15T13:45:30.)
A batch of log events in a single request cannot span more than 24 hours. Otherwise, the operation fails.
The maximum number of log events in a batch is 10,000.
The quota of five requests per second per log stream has been removed. Instead, PutLogEvents actions are throttled based on a per-second per-account quota. You can request an increase to the per-second throttling quota by using the Service Quotas service.
If a call to PutLogEvents returns \"UnrecognizedClientException\", the most likely cause is a non-valid Amazon Web Services access key ID or secret key.
Uploads a batch of log events to the specified log stream.
The sequence token is now ignored in PutLogEvents actions. PutLogEvents actions are always accepted and never return InvalidSequenceTokenException or DataAlreadyAcceptedException even if the sequence token is not valid. You can use parallel PutLogEvents actions on the same log stream.
The batch of events must satisfy the following constraints:
The maximum batch size is 1,048,576 bytes. This size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.
None of the log events in the batch can be more than 2 hours in the future.
None of the log events in the batch can be more than 14 days in the past. Also, none of the log events can be from earlier than the retention period of the log group.
The log events in the batch must be in chronological order by their timestamp. The timestamp is the time that the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. (In Amazon Web Services Tools for PowerShell and the Amazon Web Services SDK for .NET, the timestamp is specified in .NET format: yyyy-mm-ddThh:mm:ss. For example, 2017-09-15T13:45:30.)
A batch of log events in a single request cannot span more than 24 hours. Otherwise, the operation fails.
Each log event can be no larger than 256 KB.
The maximum number of log events in a batch is 10,000.
The quota of five requests per second per log stream has been removed. Instead, PutLogEvents actions are throttled based on a per-second per-account quota. You can request an increase to the per-second throttling quota by using the Service Quotas service.
If a call to PutLogEvents returns \"UnrecognizedClientException\", the most likely cause is a non-valid Amazon Web Services access key ID or secret key.
Creates or updates a subscription filter and associates it with the specified log group. With subscription filters, you can subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.
The following destinations are supported for subscription filters:
An Amazon Kinesis data stream belonging to the same account as the subscription filter, for same-account delivery.
A logical destination that belongs to a different account, for cross-account delivery.
An Amazon Kinesis Data Firehose delivery stream that belongs to the same account as the subscription filter, for same-account delivery.
A Lambda function that belongs to the same account as the subscription filter, for same-account delivery.
Each log group can have up to two subscription filters associated with it. If you are updating an existing filter, you must specify the correct name in filterName.
To perform a PutSubscriptionFilter operation, you must also have the iam:PassRole permission.
Creates or updates a subscription filter and associates it with the specified log group. With subscription filters, you can subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.
The following destinations are supported for subscription filters:
An Amazon Kinesis data stream belonging to the same account as the subscription filter, for same-account delivery.
A logical destination that belongs to a different account, for cross-account delivery.
An Amazon Kinesis Data Firehose delivery stream that belongs to the same account as the subscription filter, for same-account delivery.
A Lambda function that belongs to the same account as the subscription filter, for same-account delivery.
Each log group can have up to two subscription filters associated with it. If you are updating an existing filter, you must specify the correct name in filterName.
To perform a PutSubscriptionFilter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.
Schedules a query of a log group using CloudWatch Logs Insights. You specify the log group and time range to query and the query string to use.
For more information, see CloudWatch Logs Insights Query Syntax.
Queries time out after 15 minutes of runtime. If your queries are timing out, reduce the time range being searched or partition your query into a number of queries.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start a query in a linked source account. For more information, see CloudWatch cross-account observability. For a cross-account StartQuery operation, the query definition must be defined in the monitoring account.
You can have up to 20 concurrent CloudWatch Logs Insights queries, including queries that have been added to dashboards.
" + "documentation":"Schedules a query of a log group using CloudWatch Logs Insights. You specify the log group and time range to query and the query string to use.
For more information, see CloudWatch Logs Insights Query Syntax.
Queries time out after 60 minutes of runtime. If your queries are timing out, reduce the time range being searched or partition your query into a number of queries.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start a query in a linked source account. For more information, see CloudWatch cross-account observability. For a cross-account StartQuery operation, the query definition must be defined in the monitoring account.
You can have up to 30 concurrent CloudWatch Logs Insights queries, including queries that have been added to dashboards.
" }, "StopQuery":{ "name":"StopQuery", @@ -758,6 +805,41 @@ "max":20, "min":0 }, + "AccountPolicies":{ + "type":"list", + "member":{"shape":"AccountPolicy"} + }, + "AccountPolicy":{ + "type":"structure", + "members":{ + "policyName":{ + "shape":"PolicyName", + "documentation":"The name of the account policy.
" + }, + "policyDocument":{ + "shape":"AccountPolicyDocument", + "documentation":"The policy document for this account policy.
The JSON specified in policyDocument can be up to 30,720 characters.
The date and time that this policy was most recently updated.
" + }, + "policyType":{ + "shape":"PolicyType", + "documentation":"The type of policy for this account policy.
" + }, + "scope":{ + "shape":"Scope", + "documentation":"The scope of the account policy.
" + }, + "accountId":{ + "shape":"AccountId", + "documentation":"The Amazon Web Services account ID that the policy applies to.
" + } + }, + "documentation":"A structure that contains information about one CloudWatch Logs account policy.
" + }, + "AccountPolicyDocument":{"type":"string"}, "AmazonResourceName":{ "type":"string", "max":1011, @@ -895,9 +977,26 @@ }, "Days":{ "type":"integer", - "documentation":"The number of days to retain the log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 2192, 2557, 2922, 3288, and 3653.
To set a log group so that its log events do not expire, use DeleteRetentionPolicy.
" + "documentation":"The number of days to retain the log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1096, 1827, 2192, 2557, 2922, 3288, and 3653.
To set a log group so that its log events do not expire, use DeleteRetentionPolicy.
" }, "DefaultValue":{"type":"double"}, + "DeleteAccountPolicyRequest":{ + "type":"structure", + "required":[ + "policyName", + "policyType" + ], + "members":{ + "policyName":{ + "shape":"PolicyName", + "documentation":"The name of the policy to delete.
" + }, + "policyType":{ + "shape":"PolicyType", + "documentation":"The type of policy to delete. Currently, the only valid value is DATA_PROTECTION_POLICY.
Use this parameter to limit the returned policies to only the policies that match the policy type that you specify. Currently, the only valid value is DATA_PROTECTION_POLICY.
Use this parameter to limit the returned policies to only the policy with the name that you specify.
" + }, + "accountIdentifiers":{ + "shape":"AccountIds", + "documentation":"If you are using an account that is set up as a monitoring account for CloudWatch unified cross-account observability, you can use this to specify the account ID of a source account. If you do, the operation returns the account policy for the specified account. Currently, you can specify only one account ID in this parameter.
If you omit this parameter, only the policy in the current account is returned.
" + } + } + }, + "DescribeAccountPoliciesResponse":{ + "type":"structure", + "members":{ + "accountPolicies":{ + "shape":"AccountPolicies", + "documentation":"An array of structures that contain information about the CloudWatch Logs account policies that match the specified filters.
" + } + } + }, "DescribeDestinationsRequest":{ "type":"structure", "members":{ @@ -1094,7 +1220,7 @@ }, "logGroupNamePattern":{ "shape":"LogGroupNamePattern", - "documentation":"If you specify a string for this parameter, the operation returns only log groups that have names that match the string based on a case-sensitive substring search. For example, if you specify Foo, log groups named FooBar, aws/Foo, and GroupFoo would match, but foo, F/o/o and Froo would not match.
logGroupNamePattern and logGroupNamePrefix are mutually exclusive. Only one of these parameters can be passed.
If you specify a string for this parameter, the operation returns only log groups that have names that match the string based on a case-sensitive substring search. For example, if you specify Foo, log groups named FooBar, aws/Foo, and GroupFoo would match, but foo, F/o/o and Froo would not match.
If you specify logGroupNamePattern in your request, then only arn, creationTime, and logGroupName are included in the response.
logGroupNamePattern and logGroupNamePrefix are mutually exclusive. Only one of these parameters can be passed.
If you are using a monitoring account, set this to True to have the operation return log groups in the accounts listed in accountIdentifiers.
If this parameter is set to true and accountIdentifiers contains a null value, the operation returns all log groups in the monitoring account and all log groups in all source accounts that are linked to the monitoring account.
If you specify includeLinkedAccounts in your request, then metricFilterCount, retentionInDays, and storedBytes are not included in the response.
If you are using a monitoring account, set this to True to have the operation return log groups in the accounts listed in accountIdentifiers.
If this parameter is set to true and accountIdentifiers contains a null value, the operation returns all log groups in the monitoring account and all log groups in all source accounts that are linked to the monitoring account.
The status of the most recent running of the query. Possible values are Cancelled, Complete, Failed, Running, Scheduled, Timeout, and Unknown.
Queries time out after 15 minutes of runtime. To avoid having your queries time out, reduce the time range being searched or partition your query into a number of queries.
" + "documentation":"The status of the most recent running of the query. Possible values are Cancelled, Complete, Failed, Running, Scheduled, Timeout, and Unknown.
Queries time out after 60 minutes of runtime. To avoid having your queries time out, reduce the time range being searched or partition your query into a number of queries.
" } } }, "IncludeLinkedAccounts":{"type":"boolean"}, + "InheritedProperties":{ + "type":"list", + "member":{"shape":"InheritedProperty"} + }, + "InheritedProperty":{ + "type":"string", + "enum":["ACCOUNT_DATA_PROTECTION"] + }, "InputLogEvent":{ "type":"structure", "required":[ @@ -1793,7 +1927,7 @@ }, "message":{ "shape":"EventMessage", - "documentation":"The raw event message.
" + "documentation":"The raw event message. Each log event can be no larger than 256 KB.
" } }, "documentation":"Represents a log event, which is a record of activity that was recorded by the application or resource being monitored.
" @@ -1918,6 +2052,10 @@ "dataProtectionStatus":{ "shape":"DataProtectionStatus", "documentation":"Displays whether this log group has a protection policy, or whether it had one in the past. For more information, see PutDataProtectionPolicy.
" + }, + "inheritedProperties":{ + "shape":"InheritedProperties", + "documentation":"Displays all the properties that this log group has inherited from account-level settings.
" } }, "documentation":"Represents a log group.
" @@ -2194,6 +2332,45 @@ "min":1 }, "PolicyName":{"type":"string"}, + "PolicyType":{ + "type":"string", + "enum":["DATA_PROTECTION_POLICY"] + }, + "PutAccountPolicyRequest":{ + "type":"structure", + "required":[ + "policyName", + "policyDocument", + "policyType" + ], + "members":{ + "policyName":{ + "shape":"PolicyName", + "documentation":"A name for the policy. This must be unique within the account.
" + }, + "policyDocument":{ + "shape":"AccountPolicyDocument", + "documentation":"Specify the data protection policy, in JSON.
This policy must include two JSON blocks:
The first block must include both a DataIdentifer array and an Operation property with an Audit action. The DataIdentifer array lists the types of sensitive data that you want to mask. For more information about the available options, see Types of data that you can mask.
The Operation property with an Audit action is required to find the sensitive data terms. This Audit action must contain a FindingsDestination object. You can optionally use that FindingsDestination object to list one or more destinations to send audit findings to. If you specify destinations such as log groups, Kinesis Data Firehose streams, and S3 buckets, they must already exist.
The second block must include both a DataIdentifer array and an Operation property with an Deidentify action. The DataIdentifer array must exactly match the DataIdentifer array in the first block of the policy.
The Operation property with the Deidentify action is what actually masks the data, and it must contain the \"MaskConfig\": {} object. The \"MaskConfig\": {} object must be empty.
For an example data protection policy, see the Examples section on this page.
The contents of the two DataIdentifer arrays must match exactly.
In addition to the two JSON blocks, the policyDocument can also include Name, Description, and Version fields. The Name is different than the operation's policyName parameter, and is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.
The JSON specified in policyDocument can be up to 30,720 characters.
Currently the only valid value for this parameter is DATA_PROTECTION_POLICY.
Currently the only valid value for this parameter is GLOBAL, which specifies that the data protection policy applies to all log groups in the account. If you omit this parameter, the default of GLOBAL is used.
The account policy that you created.
" + } + } + }, "PutDataProtectionPolicyRequest":{ "type":"structure", "required":[ @@ -2207,7 +2384,7 @@ }, "policyDocument":{ "shape":"DataProtectionPolicyDocument", - "documentation":"Specify the data protection policy, in JSON.
This policy must include two JSON blocks:
The first block must include both a DataIdentifer array and an Operation property with an Audit action. The DataIdentifer array lists the types of sensitive data that you want to mask. For more information about the available options, see Types of data that you can mask.
The Operation property with an Audit action is required to find the sensitive data terms. This Audit action must contain a FindingsDestination object. You can optionally use that FindingsDestination object to list one or more destinations to send audit findings to. If you specify destinations such as log groups, Kinesis Data Firehose streams, and S3 buckets, they must already exist.
The second block must include both a DataIdentifer array and an Operation property with an Deidentify action. The DataIdentifer array must exactly match the DataIdentifer array in the first block of the policy.
The Operation property with the Deidentify action is what actually masks the data, and it must contain the \"MaskConfig\": {} object. The \"MaskConfig\": {} object must be empty.
For an example data protection policy, see the Examples section on this page.
The contents of two DataIdentifer arrays must match exactly.
Specify the data protection policy, in JSON.
This policy must include two JSON blocks:
The first block must include both a DataIdentifer array and an Operation property with an Audit action. The DataIdentifer array lists the types of sensitive data that you want to mask. For more information about the available options, see Types of data that you can mask.
The Operation property with an Audit action is required to find the sensitive data terms. This Audit action must contain a FindingsDestination object. You can optionally use that FindingsDestination object to list one or more destinations to send audit findings to. If you specify destinations such as log groups, Kinesis Data Firehose streams, and S3 buckets, they must already exist.
The second block must include both a DataIdentifer array and an Operation property with an Deidentify action. The DataIdentifer array must exactly match the DataIdentifer array in the first block of the policy.
The Operation property with the Deidentify action is what actually masks the data, and it must contain the \"MaskConfig\": {} object. The \"MaskConfig\": {} object must be empty.
For an example data protection policy, see the Examples section on this page.
The contents of the two DataIdentifier arrays must match exactly.
In addition to the two JSON blocks, the policyDocument can also include Name, Description, and Version fields. The Name is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.
The JSON specified in policyDocument can be up to 30,720 characters.
Specify true if you are updating an existing destination policy to grant permission to an organization ID instead of granting permission to individual AWS accounts. Before you update a destination policy this way, you must first update the subscription filters in the accounts that send logs to this destination. If you do not, the subscription filters might stop working. By specifying true for forceUpdate, you are affirming that you have already updated the subscription filters. For more information, see Updating an existing cross-account subscription
If you omit this parameter, the default of false is used.
Specify true if you are updating an existing destination policy to grant permission to an organization ID instead of granting permission to individual Amazon Web Services accounts. Before you update a destination policy this way, you must first update the subscription filters in the accounts that send logs to this destination. If you do not, the subscription filters might stop working. By specifying true for forceUpdate, you are affirming that you have already updated the subscription filters. For more information, see Updating an existing cross-account subscription
If you omit this parameter, the default of false is used.
The information about the container used for a job run or a managed endpoint.
", "union":true }, + "ContainerLogRotationConfiguration":{ + "type":"structure", + "required":[ + "rotationSize", + "maxFilesToKeep" + ], + "members":{ + "rotationSize":{ + "shape":"RotationSize", + "documentation":"The file size at which to rotate logs. Minimum of 2KB, Maximum of 2GB.
" + }, + "maxFilesToKeep":{ + "shape":"MaxFilesToKeep", + "documentation":"The number of files to keep in container after rotation.
" + } + }, + "documentation":"The settings for container log rotation.
" + }, "ContainerProvider":{ "type":"structure", "required":[ @@ -1276,7 +1294,7 @@ "type":"string", "max":2048, "min":3, - "pattern":"^(arn:(aws[a-zA-Z0-9-]*):kms:([a-zA-Z0-9]+-?)+:(\\d{12})?:key\\/[(0-9a-zA-Z)-?]+|\\$\\{[a-zA-Z]\\w*\\})$" + "pattern":"^(arn:(aws[a-zA-Z0-9-]*):kms:.+:(\\d{12})?:key\\/[(0-9a-zA-Z)-?]+|\\$\\{[a-zA-Z]\\w*\\})$" }, "KubernetesNamespace":{ "type":"string", @@ -1541,6 +1559,11 @@ "min":1, "pattern":"[\\.\\-_/#A-Za-z0-9]+" }, + "MaxFilesToKeep":{ + "type":"integer", + "max":50, + "min":1 + }, "MonitoringConfiguration":{ "type":"structure", "members":{ @@ -1555,6 +1578,10 @@ "s3MonitoringConfiguration":{ "shape":"S3MonitoringConfiguration", "documentation":"Amazon S3 configuration for monitoring log publishing.
" + }, + "containerLogRotationConfiguration":{ + "shape":"ContainerLogRotationConfiguration", + "documentation":"Enable or disable container log rotation.
" } }, "documentation":"Configuration setting for monitoring.
" @@ -1704,6 +1731,12 @@ }, "documentation":"The current status of the retry policy executed on the job.
" }, + "RotationSize":{ + "type":"string", + "max":12, + "min":3, + "pattern":"^\\d+(\\.\\d+)?[KMG][Bb]?$" + }, "RsiArn":{ "type":"string", "max":500, From 0ae6e916a5e0e5cc08124d22e04efc304975ec17 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 7 Jun 2023 18:08:29 +0000 Subject: [PATCH 055/317] AWS IoT Core Device Advisor Update: AWS IoT Core Device Advisor now supports new Qualification Suite test case list. With this update, customers can more easily create new qualification test suite with an empty rootGroup input. --- .../feature-AWSIoTCoreDeviceAdvisor-c1301c9.json | 6 ++++++ .../src/main/resources/codegen-resources/service-2.json | 6 +++--- 2 files changed, 9 insertions(+), 3 deletions(-) create mode 100644 .changes/next-release/feature-AWSIoTCoreDeviceAdvisor-c1301c9.json diff --git a/.changes/next-release/feature-AWSIoTCoreDeviceAdvisor-c1301c9.json b/.changes/next-release/feature-AWSIoTCoreDeviceAdvisor-c1301c9.json new file mode 100644 index 000000000000..ef2354c65921 --- /dev/null +++ b/.changes/next-release/feature-AWSIoTCoreDeviceAdvisor-c1301c9.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS IoT Core Device Advisor", + "contributor": "", + "description": "AWS IoT Core Device Advisor now supports new Qualification Suite test case list. With this update, customers can more easily create new qualification test suite with an empty rootGroup input." 
+} diff --git a/services/iotdeviceadvisor/src/main/resources/codegen-resources/service-2.json b/services/iotdeviceadvisor/src/main/resources/codegen-resources/service-2.json index 610ba7904949..cf68b11e45fd 100644 --- a/services/iotdeviceadvisor/src/main/resources/codegen-resources/service-2.json +++ b/services/iotdeviceadvisor/src/main/resources/codegen-resources/service-2.json @@ -689,7 +689,7 @@ "RootGroup":{ "type":"string", "max":2048, - "min":1 + "min":0 }, "SelectedTestList":{ "type":"list", @@ -821,7 +821,7 @@ }, "rootGroup":{ "shape":"RootGroup", - "documentation":"Gets the test suite root group. This is a required parameter.
" + "documentation":"Gets the test suite root group. This is a required parameter. For updating or creating the latest qualification suite, if intendedForQualification is set to true, rootGroup can be an empty string. If intendedForQualification is false, rootGroup cannot be an empty string. If rootGroup is empty, and intendedForQualification is set to true, all the qualification tests are included, and the configuration is default.
For a qualification suite, the minimum length is 0, and the maximum is 2048. For a non-qualification suite, the minimum length is 1, and the maximum is 2048.
" }, "devicePermissionRoleArn":{ "shape":"AmazonResourceName", @@ -1081,7 +1081,7 @@ }, "systemMessage":{ "shape":"SystemMessage", - "documentation":"" + "documentation":"
Provides test case scenario system messages if any.
" } }, "documentation":"Provides test case scenario.
" From 647b4bbc6d23bae8e6a00fa8d41f907473e60402 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 7 Jun 2023 18:08:31 +0000 Subject: [PATCH 056/317] Amazon Connect Customer Profiles Update: This release introduces event stream related APIs. --- ...AmazonConnectCustomerProfiles-88d780f.json | 6 + .../codegen-resources/paginators-1.json | 6 + .../codegen-resources/service-2.json | 343 ++++++++++++++++++ 3 files changed, 355 insertions(+) create mode 100644 .changes/next-release/feature-AmazonConnectCustomerProfiles-88d780f.json diff --git a/.changes/next-release/feature-AmazonConnectCustomerProfiles-88d780f.json b/.changes/next-release/feature-AmazonConnectCustomerProfiles-88d780f.json new file mode 100644 index 000000000000..bbbd0442156d --- /dev/null +++ b/.changes/next-release/feature-AmazonConnectCustomerProfiles-88d780f.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Connect Customer Profiles", + "contributor": "", + "description": "This release introduces event stream related APIs." 
+} diff --git a/services/customerprofiles/src/main/resources/codegen-resources/paginators-1.json b/services/customerprofiles/src/main/resources/codegen-resources/paginators-1.json index 5677bd8e4a2d..58e94da63dd2 100644 --- a/services/customerprofiles/src/main/resources/codegen-resources/paginators-1.json +++ b/services/customerprofiles/src/main/resources/codegen-resources/paginators-1.json @@ -1,4 +1,10 @@ { "pagination": { + "ListEventStreams": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults", + "result_key": "Items" + } } } diff --git a/services/customerprofiles/src/main/resources/codegen-resources/service-2.json b/services/customerprofiles/src/main/resources/codegen-resources/service-2.json index 63c45cd9d3c6..dcd2477861c9 100644 --- a/services/customerprofiles/src/main/resources/codegen-resources/service-2.json +++ b/services/customerprofiles/src/main/resources/codegen-resources/service-2.json @@ -64,6 +64,23 @@ ], "documentation":"Creates a domain, which is a container for all customer data, such as customer profile attributes, object types, profile keys, and encryption keys. You can create multiple domains, and each domain can have multiple third-party integrations.
Each Amazon Connect instance can be associated with only one domain. Multiple Amazon Connect instances can be associated with one domain.
Use this API or UpdateDomain to enable identity resolution: set Matching to true.
To prevent cross-service impersonation when you call this API, see Cross-service confused deputy prevention for sample policies that you should apply.
" }, + "CreateEventStream":{ + "name":"CreateEventStream", + "http":{ + "method":"POST", + "requestUri":"/domains/{DomainName}/event-streams/{EventStreamName}" + }, + "input":{"shape":"CreateEventStreamRequest"}, + "output":{"shape":"CreateEventStreamResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Creates an event stream, which is a subscription to real-time events, such as when profiles are created and updated through Amazon Connect Customer Profiles.
Each event stream can be associated with only one Kinesis Data Stream destination in the same region and Amazon Web Services account as the customer profiles domain.
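For illustration only (not part of the generated SDK), the wire shape this operation declares — `POST /domains/{DomainName}/event-streams/{EventStreamName}` with `Uri` and optional `Tags` in the body — can be sketched as a plain request builder; the helper name and return shape here are hypothetical:

```python
def build_create_event_stream_request(domain_name, event_stream_name, stream_arn, tags=None):
    """Sketch of the CreateEventStream wire shape per the model above:
    POST /domains/{DomainName}/event-streams/{EventStreamName}, with Uri
    (the Kinesis StreamARN) and optional Tags in the request body."""
    body = {"Uri": stream_arn}
    if tags:
        body["Tags"] = tags
    return {
        "method": "POST",
        "path": f"/domains/{domain_name}/event-streams/{event_stream_name}",
        "body": body,
    }
```

In practice the generated client builds this request for you; the sketch only shows how the URI-located members and body members from the model combine.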
" + }, "CreateIntegrationWorkflow":{ "name":"CreateIntegrationWorkflow", "http":{ @@ -132,6 +149,24 @@ ], "documentation":"Deletes a specific domain and all of its customer data, such as customer profile attributes and their related objects.
" }, + "DeleteEventStream":{ + "name":"DeleteEventStream", + "http":{ + "method":"DELETE", + "requestUri":"/domains/{DomainName}/event-streams/{EventStreamName}" + }, + "input":{"shape":"DeleteEventStreamRequest"}, + "output":{"shape":"DeleteEventStreamResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Disables and deletes the specified event stream.
", + "idempotent":true + }, "DeleteIntegration":{ "name":"DeleteIntegration", "http":{ @@ -302,6 +337,23 @@ ], "documentation":"Returns information about a specific domain.
" }, + "GetEventStream":{ + "name":"GetEventStream", + "http":{ + "method":"GET", + "requestUri":"/domains/{DomainName}/event-streams/{EventStreamName}" + }, + "input":{"shape":"GetEventStreamRequest"}, + "output":{"shape":"GetEventStreamResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Returns information about the specified event stream in a specific domain.
" + }, "GetIdentityResolutionJob":{ "name":"GetIdentityResolutionJob", "http":{ @@ -489,6 +541,23 @@ ], "documentation":"Returns a list of all the domains for an AWS account that have been created.
" }, + "ListEventStreams":{ + "name":"ListEventStreams", + "http":{ + "method":"GET", + "requestUri":"/domains/{DomainName}/event-streams" + }, + "input":{"shape":"ListEventStreamsRequest"}, + "output":{"shape":"ListEventStreamsResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Returns a list of all the event streams in a specific domain.
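The new `ListEventStreams` paginator entry added in `paginators-1.json` declares `NextToken` as both input and output token, `MaxResults` as the limit key, and `Items` as the result key. As a self-contained sketch of the loop that configuration automates (the helper and the fake page source are hypothetical, not SDK code):

```python
def list_all_event_streams(fetch_page, max_results=100):
    """Drain a NextToken-paginated operation as declared in paginators-1.json:
    feed each response's NextToken back into the next request until the
    service stops returning one, collecting the Items from every page."""
    items, token = [], None
    while True:
        page = fetch_page(NextToken=token, MaxResults=max_results)
        items.extend(page.get("Items", []))
        token = page.get("NextToken")
        if not token:
            return items

def fake_fetch(NextToken=None, MaxResults=100):
    """A stand-in for ListEventStreams returning two pages."""
    if NextToken is None:
        return {"Items": [{"EventStreamName": "a"}], "NextToken": "p2"}
    return {"Items": [{"EventStreamName": "b"}]}
```

The generated paginators wrap exactly this token-forwarding pattern, so callers iterate results without tracking `NextToken` themselves.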
" + }, "ListIdentityResolutionJobs":{ "name":"ListIdentityResolutionJobs", "http":{ @@ -1365,6 +1434,50 @@ } } }, + "CreateEventStreamRequest":{ + "type":"structure", + "required":[ + "DomainName", + "Uri", + "EventStreamName" + ], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
", + "location":"uri", + "locationName":"DomainName" + }, + "Uri":{ + "shape":"string1To255", + "documentation":"The StreamARN of the destination to deliver profile events to. For example, arn:aws:kinesis:region:account-id:stream/stream-name
" + }, + "EventStreamName":{ + "shape":"name", + "documentation":"The name of the event stream.
", + "location":"uri", + "locationName":"EventStreamName" + }, + "Tags":{ + "shape":"TagMap", + "documentation":"The tags used to organize, track, or control access for this resource.
" + } + } + }, + "CreateEventStreamResponse":{ + "type":"structure", + "required":["EventStreamArn"], + "members":{ + "EventStreamArn":{ + "shape":"string1To255", + "documentation":"A unique identifier for the event stream.
" + }, + "Tags":{ + "shape":"TagMap", + "documentation":"The tags used to organize, track, or control access for this resource.
" + } + } + }, "CreateIntegrationWorkflowRequest":{ "type":"structure", "required":[ @@ -1595,6 +1708,32 @@ } } }, + "DeleteEventStreamRequest":{ + "type":"structure", + "required":[ + "DomainName", + "EventStreamName" + ], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
", + "location":"uri", + "locationName":"DomainName" + }, + "EventStreamName":{ + "shape":"name", + "documentation":"The name of the event stream
", + "location":"uri", + "locationName":"EventStreamName" + } + } + }, + "DeleteEventStreamResponse":{ + "type":"structure", + "members":{ + } + }, "DeleteIntegrationRequest":{ "type":"structure", "required":[ @@ -1790,6 +1929,28 @@ "max":256, "pattern":".*" }, + "DestinationSummary":{ + "type":"structure", + "required":[ + "Uri", + "Status" + ], + "members":{ + "Uri":{ + "shape":"string1To255", + "documentation":"The StreamARN of the destination to deliver profile events to. For example, arn:aws:kinesis:region:account-id:stream/stream-name.
" + }, + "Status":{ + "shape":"EventStreamDestinationStatus", + "documentation":"The status of enabling the Kinesis stream as a destination for export.
" + }, + "UnhealthySince":{ + "shape":"timestamp", + "documentation":"The timestamp when the status last changed to UNHEALTHY.
" + } + }, + "documentation":"Summary information about the Kinesis data stream.
" + }, "DomainList":{ "type":"list", "member":{"shape":"ListDomainItem"} @@ -1822,6 +1983,90 @@ "max":1.0, "min":0.0 }, + "EventStreamDestinationDetails":{ + "type":"structure", + "required":[ + "Uri", + "Status" + ], + "members":{ + "Uri":{ + "shape":"string1To255", + "documentation":"The StreamARN of the destination to deliver profile events to. For example, arn:aws:kinesis:region:account-id:stream/stream-name.
" + }, + "Status":{ + "shape":"EventStreamDestinationStatus", + "documentation":"The status of enabling the Kinesis stream as a destination for export.
" + }, + "UnhealthySince":{ + "shape":"timestamp", + "documentation":"The timestamp when the status last changed to UNHEALTHY.
The human-readable string that corresponds to the error or success while enabling the streaming destination.
" + } + }, + "documentation":"Details of the destination being used for the EventStream.
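`EventStreamDestinationDetails` exposes a two-state `EventStreamDestinationStatus` (`HEALTHY`/`UNHEALTHY`) plus an `UnhealthySince` timestamp. A hypothetical monitoring helper (not part of the SDK) might use those two fields together, e.g. to alert only after a destination has stayed unhealthy past a grace period:

```python
from datetime import datetime, timedelta, timezone

def destination_needs_attention(details, grace=timedelta(minutes=5)):
    """Flag a destination that has been UNHEALTHY longer than `grace`.
    `details` mirrors EventStreamDestinationDetails: a Status string and,
    when unhealthy, an UnhealthySince aware datetime."""
    if details["Status"] != "UNHEALTHY":
        return False
    since = details.get("UnhealthySince")
    return since is not None and datetime.now(timezone.utc) - since > grace
```

The grace period is an assumption of this sketch; the service itself only reports the status and the timestamp.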
" + }, + "EventStreamDestinationStatus":{ + "type":"string", + "enum":[ + "HEALTHY", + "UNHEALTHY" + ] + }, + "EventStreamState":{ + "type":"string", + "enum":[ + "RUNNING", + "STOPPED" + ] + }, + "EventStreamSummary":{ + "type":"structure", + "required":[ + "DomainName", + "EventStreamName", + "EventStreamArn", + "State" + ], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
" + }, + "EventStreamName":{ + "shape":"name", + "documentation":"The name of the event stream.
" + }, + "EventStreamArn":{ + "shape":"string1To255", + "documentation":"A unique identifier for the event stream.
" + }, + "State":{ + "shape":"EventStreamState", + "documentation":"The operational state of destination stream for export.
" + }, + "StoppedSince":{ + "shape":"timestamp", + "documentation":"The timestamp when the State changed to STOPPED.
" + }, + "DestinationSummary":{ + "shape":"DestinationSummary", + "documentation":"Summary information about the Kinesis data stream.
" + }, + "Tags":{ + "shape":"TagMap", + "documentation":"The tags used to organize, track, or control access for this resource.
" + } + }, + "documentation":"An instance of EventStream in a list of EventStreams.
" + }, + "EventStreamSummaryList":{ + "type":"list", + "member":{"shape":"EventStreamSummary"} + }, "ExportingConfig":{ "type":"structure", "members":{ @@ -2240,6 +2485,67 @@ } } }, + "GetEventStreamRequest":{ + "type":"structure", + "required":[ + "DomainName", + "EventStreamName" + ], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
", + "location":"uri", + "locationName":"DomainName" + }, + "EventStreamName":{ + "shape":"name", + "documentation":"The name of the event stream provided during create operations.
", + "location":"uri", + "locationName":"EventStreamName" + } + } + }, + "GetEventStreamResponse":{ + "type":"structure", + "required":[ + "DomainName", + "EventStreamArn", + "CreatedAt", + "State", + "DestinationDetails" + ], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
" + }, + "EventStreamArn":{ + "shape":"string1To255", + "documentation":"A unique identifier for the event stream.
" + }, + "CreatedAt":{ + "shape":"timestamp", + "documentation":"The timestamp of when the export was created.
" + }, + "State":{ + "shape":"EventStreamState", + "documentation":"The operational state of destination stream for export.
" + }, + "StoppedSince":{ + "shape":"timestamp", + "documentation":"The timestamp when the State changed to STOPPED.
" + }, + "DestinationDetails":{ + "shape":"EventStreamDestinationDetails", + "documentation":"Details regarding the Kinesis stream.
" + }, + "Tags":{ + "shape":"TagMap", + "documentation":"The tags used to organize, track, or control access for this resource.
" + } + } + }, "GetIdentityResolutionJobRequest":{ "type":"structure", "required":[ @@ -3042,6 +3348,43 @@ } } }, + "ListEventStreamsRequest":{ + "type":"structure", + "required":["DomainName"], + "members":{ + "DomainName":{ + "shape":"name", + "documentation":"The unique name of the domain.
", + "location":"uri", + "locationName":"DomainName" + }, + "NextToken":{ + "shape":"token", + "documentation":"Identifies the next page of results to return.
", + "location":"querystring", + "locationName":"next-token" + }, + "MaxResults":{ + "shape":"maxSize100", + "documentation":"The maximum number of objects returned per page.
", + "location":"querystring", + "locationName":"max-results" + } + } + }, + "ListEventStreamsResponse":{ + "type":"structure", + "members":{ + "Items":{ + "shape":"EventStreamSummaryList", + "documentation":"Contains summary information about an EventStream.
" + }, + "NextToken":{ + "shape":"token", + "documentation":"Identifies the next page of results to return.
" + } + } + }, "ListIdentityResolutionJobsRequest":{ "type":"structure", "required":["DomainName"], From ab6007fa5831089b972b4a6c4feed9dd4249d2d0 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 7 Jun 2023 18:08:38 +0000 Subject: [PATCH 057/317] AWS CloudFormation Update: AWS CloudFormation StackSets is updating the deployment experience for all stackset operations to skip suspended AWS accounts during deployments. StackSets will skip target AWS accounts that are suspended and set the Detailed Status of the corresponding stack instances as SKIPPED_SUSPENDED_ACCOUNT --- .../feature-AWSCloudFormation-7d1f406.json | 6 +++ .../codegen-resources/service-2.json | 39 ++++++++++--------- 2 files changed, 26 insertions(+), 19 deletions(-) create mode 100644 .changes/next-release/feature-AWSCloudFormation-7d1f406.json diff --git a/.changes/next-release/feature-AWSCloudFormation-7d1f406.json b/.changes/next-release/feature-AWSCloudFormation-7d1f406.json new file mode 100644 index 000000000000..c6aec22ad8fd --- /dev/null +++ b/.changes/next-release/feature-AWSCloudFormation-7d1f406.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS CloudFormation", + "contributor": "", + "description": "AWS CloudFormation StackSets is updating the deployment experience for all stackset operations to skip suspended AWS accounts during deployments. 
StackSets will skip target AWS accounts that are suspended and set the Detailed Status of the corresponding stack instances as SKIPPED_SUSPENDED_ACCOUNT" +} diff --git a/services/cloudformation/src/main/resources/codegen-resources/service-2.json b/services/cloudformation/src/main/resources/codegen-resources/service-2.json index 3a2aff947ad6..72e35e8937e2 100644 --- a/services/cloudformation/src/main/resources/codegen-resources/service-2.json +++ b/services/cloudformation/src/main/resources/codegen-resources/service-2.json @@ -126,7 +126,7 @@ {"shape":"TokenAlreadyExistsException"}, {"shape":"InsufficientCapabilitiesException"} ], - "documentation":"Creates a stack as specified in the template. After the call completes successfully, the stack creation starts. You can check the status of the stack through the DescribeStacksoperation.
" + "documentation":"Creates a stack as specified in the template. After the call completes successfully, the stack creation starts. You can check the status of the stack through the DescribeStacks operation.
" }, "CreateStackInstances":{ "name":"CreateStackInstances", @@ -524,7 +524,7 @@ "errors":[ {"shape":"CFNRegistryException"} ], - "documentation":"Returns information about an extension's registration, including its current status and type and version identifiers.
When you initiate a registration request using RegisterType , you can then use DescribeTypeRegistration to monitor the progress of that registration request.
Once the registration request has completed, use DescribeType to return detailed information about an extension.
Returns information about an extension's registration, including its current status and type and version identifiers.
When you initiate a registration request using RegisterType, you can then use DescribeTypeRegistration to monitor the progress of that registration request.
Once the registration request has completed, use DescribeType to return detailed information about an extension.
", "idempotent":true }, "DetectStackDrift":{ @@ -569,7 +569,7 @@ {"shape":"OperationInProgressException"}, {"shape":"StackSetNotFoundException"} ], - "documentation":"Detect drift on a stack set. When CloudFormation performs drift detection on a stack set, it performs drift detection on the stack associated with each stack instance in the stack set. For more information, see How CloudFormation performs drift detection on a stack set.
DetectStackSetDrift returns the OperationId of the stack set drift detection operation. Use this operation id with DescribeStackSetOperation to monitor the progress of the drift detection operation. The drift detection operation may take some time, depending on the number of stack instances included in the stack set, in addition to the number of resources included in each stack.
Once the operation has completed, use the following actions to return drift information:
Use DescribeStackSet to return detailed information about the stack set, including detailed information about the last completed drift operation performed on the stack set. (Information about drift operations that are in progress isn't included.)
Use ListStackInstances to return a list of stack instances belonging to the stack set, including the drift status and last drift time checked of each instance.
Use DescribeStackInstance to return detailed information about a specific stack instance, including its drift status and last drift time checked.
For more information about performing a drift detection operation on a stack set, see Detecting unmanaged changes in stack sets.
You can only run a single drift detection operation on a given stack set at one time.
To stop a drift detection stack set operation, use StopStackSetOperation .
Detect drift on a stack set. When CloudFormation performs drift detection on a stack set, it performs drift detection on the stack associated with each stack instance in the stack set. For more information, see How CloudFormation performs drift detection on a stack set.
DetectStackSetDrift returns the OperationId of the stack set drift detection operation. Use this operation id with DescribeStackSetOperation to monitor the progress of the drift detection operation. The drift detection operation may take some time, depending on the number of stack instances included in the stack set, in addition to the number of resources included in each stack.
Once the operation has completed, use the following actions to return drift information:
Use DescribeStackSet to return detailed information about the stack set, including detailed information about the last completed drift operation performed on the stack set. (Information about drift operations that are in progress isn't included.)
Use ListStackInstances to return a list of stack instances belonging to the stack set, including the drift status and last drift time checked of each instance.
Use DescribeStackInstance to return detailed information about a specific stack instance, including its drift status and last drift time checked.
For more information about performing a drift detection operation on a stack set, see Detecting unmanaged changes in stack sets.
You can only run a single drift detection operation on a given stack set at one time.
To stop a drift detection stack set operation, use StopStackSetOperation.
" }, "EstimateTemplateCost":{ "name":"EstimateTemplateCost", @@ -694,7 +694,7 @@ "shape":"ListExportsOutput", "resultWrapper":"ListExportsResult" }, - "documentation":"Lists all exported output values in the account and Region in which you call this action. Use this action to see the exported output values that you can import into other stacks. To import values, use the Fn::ImportValue function.
For more information, see CloudFormation export stack output values.
" + "documentation":"Lists all exported output values in the account and Region in which you call this action. Use this action to see the exported output values that you can import into other stacks. To import values, use the Fn::ImportValue function.
For more information, see CloudFormation export stack output values.
" }, "ListImports":{ "name":"ListImports", @@ -707,7 +707,7 @@ "shape":"ListImportsOutput", "resultWrapper":"ListImportsResult" }, - "documentation":"Lists all stacks that are importing an exported output value. To modify or remove an exported output value, first use this action to see which stacks are using it. To see the exported output values in your account, see ListExports.
For more information about importing an exported output value, see the Fn::ImportValue function.
Lists all stacks that are importing an exported output value. To modify or remove an exported output value, first use this action to see which stacks are using it. To see the exported output values in your account, see ListExports.
For more information about importing an exported output value, see the Fn::ImportValue function.
" }, "ListStackInstances":{ "name":"ListStackInstances", @@ -915,7 +915,7 @@ "errors":[ {"shape":"CFNRegistryException"} ], - "documentation":"Registers an extension with the CloudFormation service. Registering an extension makes it available for use in CloudFormation templates in your Amazon Web Services account, and includes:
Validating the extension schema.
Determining which handlers, if any, have been specified for the extension.
Making the extension available for use in your account.
For more information about how to develop extensions and ready them for registration, see Creating Resource Providers in the CloudFormation CLI User Guide.
You can have a maximum of 50 resource extension versions registered at a time. This maximum is per account and per Region. Use DeregisterType to deregister specific extension versions if necessary.
Once you have initiated a registration request using RegisterType , you can use DescribeTypeRegistration to monitor the progress of the registration request.
Once you have registered a private extension in your account and Region, use SetTypeConfiguration to specify configuration properties for the extension. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
", + "documentation":"Registers an extension with the CloudFormation service. Registering an extension makes it available for use in CloudFormation templates in your Amazon Web Services account, and includes:
Validating the extension schema.
Determining which handlers, if any, have been specified for the extension.
Making the extension available for use in your account.
For more information about how to develop extensions and ready them for registration, see Creating Resource Providers in the CloudFormation CLI User Guide.
You can have a maximum of 50 resource extension versions registered at a time. This maximum is per account and per Region. Use DeregisterType to deregister specific extension versions if necessary.
Once you have initiated a registration request using RegisterType, you can use DescribeTypeRegistration to monitor the progress of the registration request.
Once you have registered a private extension in your account and Region, use SetTypeConfiguration to specify configuration properties for the extension. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.
", "idempotent":true }, "RollbackStack":{ @@ -1660,7 +1660,7 @@ }, "ClientRequestToken":{ "shape":"ClientRequestToken", - "documentation":"A unique identifier for this ContinueUpdateRollback request. Specify this token if you plan to retry requests so that CloudFormationknows that you're not attempting to continue the rollback to a stack with the same name. You might retry ContinueUpdateRollback requests to ensure that CloudFormation successfully received them.
A unique identifier for this ContinueUpdateRollback request. Specify this token if you plan to retry requests so that CloudFormation knows that you're not attempting to continue the rollback to a stack with the same name. You might retry ContinueUpdateRollback requests to ensure that CloudFormation successfully received them.
The input for the ContinueUpdateRollback action.
" @@ -1736,7 +1736,7 @@ }, "ChangeSetType":{ "shape":"ChangeSetType", - "documentation":"The type of change set operation. To create a change set for a new stack, specify CREATE. To create a change set for an existing stack, specify UPDATE. To create a change set for an import operation, specify IMPORT.
If you create a change set for a new stack, CloudFormation creates a stack with a unique stack ID, but no template or resources. The stack will be in the REVIEW_IN_PROGRESS state until you execute the change set.
By default, CloudFormation specifies UPDATE. You can't use the UPDATE type to create a change set for a new stack or the CREATE type to create a change set for an existing stack.
The type of change set operation. To create a change set for a new stack, specify CREATE. To create a change set for an existing stack, specify UPDATE. To create a change set for an import operation, specify IMPORT.
If you create a change set for a new stack, CloudFormation creates a stack with a unique stack ID, but no template or resources. The stack will be in the REVIEW_IN_PROGRESS state until you execute the change set.
By default, CloudFormation specifies UPDATE. You can't use the UPDATE type to create a change set for a new stack or the CREATE type to create a change set for an existing stack.
The ID of the default version of the extension. The default version is used when the extension version isn't specified.
This applies only to private extensions you have registered in your account. For public extensions, both those provided by Amazon Web Services and published by third parties, CloudFormation returns null. For more information, see RegisterType.
To set the default version of an extension, use SetTypeDefaultVersion .
The ID of the default version of the extension. The default version is used when the extension version isn't specified.
This applies only to private extensions you have registered in your account. For public extensions, both those provided by Amazon Web Services and published by third parties, CloudFormation returns null. For more information, see RegisterType.
To set the default version of an extension, use SetTypeDefaultVersion.
" }, "IsDefaultVersion":{ "shape":"IsDefaultVersion", @@ -2852,7 +2852,7 @@ "members":{ "RegistrationToken":{ "shape":"RegistrationToken", - "documentation":"The identifier for this registration request.
This registration token is generated by CloudFormation when you initiate a registration request using RegisterType .
The identifier for this registration request.
This registration token is generated by CloudFormation when you initiate a registration request using RegisterType.
" } } }, @@ -2961,7 +2961,7 @@ "members":{ "OperationId":{ "shape":"ClientRequestToken", - "documentation":"The ID of the drift detection stack set operation.
You can use this operation ID with DescribeStackSetOperation to monitor the progress of the drift detection operation.
The ID of the drift detection stack set operation.
You can use this operation ID with DescribeStackSetOperation to monitor the progress of the drift detection operation.
" } } }, @@ -3778,7 +3778,7 @@ "members":{ "RegistrationTokenList":{ "shape":"RegistrationTokenList", - "documentation":"A list of extension registration tokens.
Use DescribeTypeRegistration to return detailed information about a type registration request.
A list of extension registration tokens.
Use DescribeTypeRegistration to return detailed information about a type registration request.
" }, "NextToken":{ "shape":"NextToken", @@ -4143,7 +4143,7 @@ }, "ResolvedValue":{ "shape":"ParameterValue", - "documentation":"Read-only. The value that corresponds to a SSM parameter key. This field is returned only for SSM parameter types in the template.
Read-only. The value that corresponds to a SSM parameter key. This field is returned only for SSM parameter types in the template.
" } }, "documentation":"The Parameter data type.
" @@ -4456,7 +4456,7 @@ "members":{ "RegistrationToken":{ "shape":"RegistrationToken", - "documentation":"The identifier for this registration request.
Use this registration token when calling DescribeTypeRegistration , which returns information about the status and IDs of the extension registration.
The identifier for this registration request.
Use this registration token when calling DescribeTypeRegistration, which returns information about the status and IDs of the extension registration.
" } } }, @@ -5272,7 +5272,7 @@ "members":{ "DetailedStatus":{ "shape":"StackInstanceDetailedStatus", - "documentation":" CANCELLED: The operation in the specified account and Region has been canceled. This is either because a user has stopped the stack set operation, or because the failure tolerance of the stack set operation has been exceeded.
FAILED: The operation in the specified account and Region failed. If the stack set operation fails in enough accounts within a Region, the failure tolerance for the stack set operation as a whole might be exceeded.
INOPERABLE: A DeleteStackInstances operation has failed and left the stack in an unstable state. Stacks in this state are excluded from further UpdateStackSet operations. You might need to perform a DeleteStackInstances operation, with RetainStacks set to true, to delete the stack instance, and then delete the stack manually.
PENDING: The operation in the specified account and Region has yet to start.
RUNNING: The operation in the specified account and Region is currently in progress.
SUCCEEDED: The operation in the specified account and Region completed successfully.
CANCELLED: The operation in the specified account and Region has been canceled. This is either because a user has stopped the stack set operation, or because the failure tolerance of the stack set operation has been exceeded.
FAILED: The operation in the specified account and Region failed. If the stack set operation fails in enough accounts within a Region, the failure tolerance for the stack set operation as a whole might be exceeded.
INOPERABLE: A DeleteStackInstances operation has failed and left the stack in an unstable state. Stacks in this state are excluded from further UpdateStackSet operations. You might need to perform a DeleteStackInstances operation, with RetainStacks set to true, to delete the stack instance, and then delete the stack manually.
PENDING: The operation in the specified account and Region has yet to start.
RUNNING: The operation in the specified account and Region is currently in progress.
SKIPPED_SUSPENDED_ACCOUNT: The operation in the specified account and Region has been skipped because the account was suspended at the time of the operation.
SUCCEEDED: The operation in the specified account and Region completed successfully.
The detailed status of the stack instance.
" @@ -5285,7 +5285,8 @@ "SUCCEEDED", "FAILED", "CANCELLED", - "INOPERABLE" + "INOPERABLE", + "SKIPPED_SUSPENDED_ACCOUNT" ] }, "StackInstanceFilter":{ @@ -5947,7 +5948,7 @@ }, "RegionOrder":{ "shape":"RegionList", - "documentation":"The order of the Regions in where you want to perform the stack operation.
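With the new `SKIPPED_SUSPENDED_ACCOUNT` value added to the detailed-status enum, callers inspecting stack instances may want to distinguish skipped-because-suspended instances from real failures. A hypothetical summarizer (the terminal/non-failure grouping is an assumption of this sketch, not stated by the API):

```python
# Detailed statuses that no longer change, including the new
# SKIPPED_SUSPENDED_ACCOUNT value introduced in this release.
TERMINAL = {"SUCCEEDED", "FAILED", "CANCELLED", "INOPERABLE", "SKIPPED_SUSPENDED_ACCOUNT"}

def summarize_stack_instances(instances):
    """Count stack instances by DetailedStatus and report whether any were
    skipped because the target account was suspended at operation time."""
    counts = {}
    for inst in instances:
        status = inst["StackInstanceStatus"]["DetailedStatus"]
        counts[status] = counts.get(status, 0) + 1
    return counts, counts.get("SKIPPED_SUSPENDED_ACCOUNT", 0) > 0
```

Treating skipped-suspended instances as non-failures keeps automation from retrying deployments into accounts that cannot receive them.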
" + "documentation":"The order of the Regions where you want to perform the stack operation.
" }, "FailureToleranceCount":{ "shape":"FailureToleranceCount", @@ -6598,7 +6599,7 @@ }, "DefaultVersionId":{ "shape":"TypeVersionId", - "documentation":"The ID of the default version of the extension. The default version is used when the extension version isn't specified.
This applies only to private extensions you have registered in your account. For public extensions, both those provided by Amazon and published by third parties, CloudFormation returns null. For more information, see RegisterType.
To set the default version of an extension, use SetTypeDefaultVersion .
The ID of the default version of the extension. The default version is used when the extension version isn't specified.
This applies only to private extensions you have registered in your account. For public extensions, both those provided by Amazon and published by third parties, CloudFormation returns null. For more information, see RegisterType.
To set the default version of an extension, use SetTypeDefaultVersion.
" }, "TypeArn":{ "shape":"TypeArn", @@ -7024,5 +7025,5 @@ ] } }, - "documentation":"CloudFormation allows you to create and manage Amazon Web Services infrastructure deployments predictably and repeatedly. You can use CloudFormation to leverage Amazon Web Services products, such as Amazon Elastic Compute Cloud, Amazon Elastic Block Store, Amazon Simple Notification Service, Elastic Load Balancing, and Auto Scaling to build highly reliable, highly scalable, cost-effective applications without creating or configuring the underlying Amazon Web Services infrastructure.
With CloudFormation, you declare all your resources and dependencies in a template file. The template defines a collection of resources as a single unit called a stack. CloudFormation creates and deletes all member resources of the stack together and manages all dependencies between the resources for you.
For more information about CloudFormation, see the CloudFormation product page.
CloudFormation makes use of other Amazon Web Services products. If you need additional technical information about a specific Amazon Web Services product, you can find the product's technical documentation at docs.aws.amazon.com .
CloudFormation allows you to create and manage Amazon Web Services infrastructure deployments predictably and repeatedly. You can use CloudFormation to leverage Amazon Web Services products, such as Amazon Elastic Compute Cloud, Amazon Elastic Block Store, Amazon Simple Notification Service, Elastic Load Balancing, and Auto Scaling to build highly reliable, highly scalable, cost-effective applications without creating or configuring the underlying Amazon Web Services infrastructure.
With CloudFormation, you declare all your resources and dependencies in a template file. The template defines a collection of resources as a single unit called a stack. CloudFormation creates and deletes all member resources of the stack together and manages all dependencies between the resources for you.
For more information about CloudFormation, see the CloudFormation product page.
CloudFormation makes use of other Amazon Web Services products. If you need additional technical information about a specific Amazon Web Services product, you can find the product's technical documentation at docs.aws.amazon.com.
" } From 746485aa7cecbc72fdaf4a2b1f14df191b5becbc Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 7 Jun 2023 18:08:46 +0000 Subject: [PATCH 058/317] AWS Direct Connect Update: This update corrects the jumbo frames mtu values from 9100 to 8500 for transit virtual interfaces. --- .../next-release/feature-AWSDirectConnect-b05812c.json | 6 ++++++ .../main/resources/codegen-resources/service-2.json | 10 +++++----- 2 files changed, 11 insertions(+), 5 deletions(-) create mode 100644 .changes/next-release/feature-AWSDirectConnect-b05812c.json diff --git a/.changes/next-release/feature-AWSDirectConnect-b05812c.json b/.changes/next-release/feature-AWSDirectConnect-b05812c.json new file mode 100644 index 000000000000..2df35f81601e --- /dev/null +++ b/.changes/next-release/feature-AWSDirectConnect-b05812c.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS Direct Connect", + "contributor": "", + "description": "This update corrects the jumbo frames mtu values from 9100 to 8500 for transit virtual interfaces." +} diff --git a/services/directconnect/src/main/resources/codegen-resources/service-2.json b/services/directconnect/src/main/resources/codegen-resources/service-2.json index a66f40b7ed04..7de7943feef1 100644 --- a/services/directconnect/src/main/resources/codegen-resources/service-2.json +++ b/services/directconnect/src/main/resources/codegen-resources/service-2.json @@ -915,7 +915,7 @@ {"shape":"DirectConnectServerException"}, {"shape":"DirectConnectClientException"} ], - "documentation":"Updates the specified attributes of the specified virtual private interface.
Setting the MTU of a virtual interface to 9001 (jumbo frames) can cause an update to the underlying physical connection if it wasn't updated to support jumbo frames. Updating the connection disrupts network connectivity for all virtual interfaces associated with the connection for up to 30 seconds. To check whether your connection supports jumbo frames, call DescribeConnections. To check whether your virtual q interface supports jumbo frames, call DescribeVirtualInterfaces.
" + "documentation":"Updates the specified attributes of the specified virtual private interface.
Setting the MTU of a virtual interface to 9001 (jumbo frames) can cause an update to the underlying physical connection if it wasn't updated to support jumbo frames. Updating the connection disrupts network connectivity for all virtual interfaces associated with the connection for up to 30 seconds. To check whether your connection supports jumbo frames, call DescribeConnections. To check whether your virtual interface supports jumbo frames, call DescribeVirtualInterfaces.
" } }, "shapes":{ @@ -1472,7 +1472,7 @@ }, "jumboFrameCapable":{ "shape":"JumboFrameCapable", - "documentation":"Indicates whether jumbo frames (9001 MTU) are supported.
" + "documentation":"Indicates whether jumbo frames are supported.
" }, "awsDeviceV2":{ "shape":"AwsDeviceV2", @@ -2685,7 +2685,7 @@ }, "jumboFrameCapable":{ "shape":"JumboFrameCapable", - "documentation":"Indicates whether jumbo frames (9001 MTU) are supported.
" + "documentation":"Indicates whether jumbo frames are supported.
" }, "awsDeviceV2":{ "shape":"AwsDeviceV2", @@ -2799,7 +2799,7 @@ }, "jumboFrameCapable":{ "shape":"JumboFrameCapable", - "documentation":"Indicates whether jumbo frames (9001 MTU) are supported.
" + "documentation":"Indicates whether jumbo frames are supported.
" }, "hasLogicalRedundancy":{ "shape":"HasLogicalRedundancy", @@ -3764,7 +3764,7 @@ }, "jumboFrameCapable":{ "shape":"JumboFrameCapable", - "documentation":"Indicates whether jumbo frames (9001 MTU) are supported.
" + "documentation":"Indicates whether jumbo frames are supported.
" }, "virtualGatewayId":{ "shape":"VirtualGatewayId", From 7e1f1592cd1b1e8625cd8ae94429f840963d2ea3 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 7 Jun 2023 18:10:45 +0000 Subject: [PATCH 059/317] Release 2.20.81. Updated CHANGELOG.md, README.md and all pom.xml. --- .changes/2.20.81.json | 42 +++++++++++++++++++ .../feature-AWSCloudFormation-7d1f406.json | 6 --- .../feature-AWSDirectConnect-b05812c.json | 6 --- ...ature-AWSIoTCoreDeviceAdvisor-c1301c9.json | 6 --- .../feature-AmazonCloudWatchLogs-639f2a0.json | 6 --- ...AmazonConnectCustomerProfiles-88d780f.json | 6 --- .../feature-AmazonEMRContainers-7bd72fe.json | 6 --- CHANGELOG.md | 25 +++++++++++ README.md | 8 ++-- archetypes/archetype-app-quickstart/pom.xml | 2 +- archetypes/archetype-lambda/pom.xml | 2 +- archetypes/archetype-tools/pom.xml | 2 +- archetypes/pom.xml | 2 +- aws-sdk-java/pom.xml | 2 +- bom-internal/pom.xml | 2 +- bom/pom.xml | 2 +- bundle/pom.xml | 2 +- codegen-lite-maven-plugin/pom.xml | 2 +- codegen-lite/pom.xml | 2 +- codegen-maven-plugin/pom.xml | 2 +- codegen/pom.xml | 2 +- core/annotations/pom.xml | 2 +- core/arns/pom.xml | 2 +- core/auth-crt/pom.xml | 2 +- core/auth/pom.xml | 2 +- core/aws-core/pom.xml | 2 +- core/crt-core/pom.xml | 2 +- core/endpoints-spi/pom.xml | 2 +- core/imds/pom.xml | 2 +- core/json-utils/pom.xml | 2 +- core/metrics-spi/pom.xml | 2 +- core/pom.xml | 2 +- core/profiles/pom.xml | 2 +- core/protocols/aws-cbor-protocol/pom.xml | 2 +- core/protocols/aws-json-protocol/pom.xml | 2 +- core/protocols/aws-query-protocol/pom.xml | 2 +- core/protocols/aws-xml-protocol/pom.xml | 2 +- core/protocols/pom.xml | 2 +- core/protocols/protocol-core/pom.xml | 2 +- core/regions/pom.xml | 2 +- core/sdk-core/pom.xml | 2 +- http-client-spi/pom.xml | 2 +- http-clients/apache-client/pom.xml | 2 +- http-clients/aws-crt-client/pom.xml | 2 +- http-clients/netty-nio-client/pom.xml | 2 +- http-clients/pom.xml | 2 +- http-clients/url-connection-client/pom.xml | 2 +- 
.../cloudwatch-metric-publisher/pom.xml | 2 +- metric-publishers/pom.xml | 2 +- pom.xml | 2 +- release-scripts/pom.xml | 2 +- services-custom/dynamodb-enhanced/pom.xml | 2 +- services-custom/pom.xml | 2 +- services-custom/s3-transfer-manager/pom.xml | 2 +- services/accessanalyzer/pom.xml | 2 +- services/account/pom.xml | 2 +- services/acm/pom.xml | 2 +- services/acmpca/pom.xml | 2 +- services/alexaforbusiness/pom.xml | 2 +- services/amp/pom.xml | 2 +- services/amplify/pom.xml | 2 +- services/amplifybackend/pom.xml | 2 +- services/amplifyuibuilder/pom.xml | 2 +- services/apigateway/pom.xml | 2 +- services/apigatewaymanagementapi/pom.xml | 2 +- services/apigatewayv2/pom.xml | 2 +- services/appconfig/pom.xml | 2 +- services/appconfigdata/pom.xml | 2 +- services/appflow/pom.xml | 2 +- services/appintegrations/pom.xml | 2 +- services/applicationautoscaling/pom.xml | 2 +- services/applicationcostprofiler/pom.xml | 2 +- services/applicationdiscovery/pom.xml | 2 +- services/applicationinsights/pom.xml | 2 +- services/appmesh/pom.xml | 2 +- services/apprunner/pom.xml | 2 +- services/appstream/pom.xml | 2 +- services/appsync/pom.xml | 2 +- services/arczonalshift/pom.xml | 2 +- services/athena/pom.xml | 2 +- services/auditmanager/pom.xml | 2 +- services/autoscaling/pom.xml | 2 +- services/autoscalingplans/pom.xml | 2 +- services/backup/pom.xml | 2 +- services/backupgateway/pom.xml | 2 +- services/backupstorage/pom.xml | 2 +- services/batch/pom.xml | 2 +- services/billingconductor/pom.xml | 2 +- services/braket/pom.xml | 2 +- services/budgets/pom.xml | 2 +- services/chime/pom.xml | 2 +- services/chimesdkidentity/pom.xml | 2 +- services/chimesdkmediapipelines/pom.xml | 2 +- services/chimesdkmeetings/pom.xml | 2 +- services/chimesdkmessaging/pom.xml | 2 +- services/chimesdkvoice/pom.xml | 2 +- services/cleanrooms/pom.xml | 2 +- services/cloud9/pom.xml | 2 +- services/cloudcontrol/pom.xml | 2 +- services/clouddirectory/pom.xml | 2 +- services/cloudformation/pom.xml | 2 +- 
services/cloudfront/pom.xml | 2 +- services/cloudhsm/pom.xml | 2 +- services/cloudhsmv2/pom.xml | 2 +- services/cloudsearch/pom.xml | 2 +- services/cloudsearchdomain/pom.xml | 2 +- services/cloudtrail/pom.xml | 2 +- services/cloudtraildata/pom.xml | 2 +- services/cloudwatch/pom.xml | 2 +- services/cloudwatchevents/pom.xml | 2 +- services/cloudwatchlogs/pom.xml | 2 +- services/codeartifact/pom.xml | 2 +- services/codebuild/pom.xml | 2 +- services/codecatalyst/pom.xml | 2 +- services/codecommit/pom.xml | 2 +- services/codedeploy/pom.xml | 2 +- services/codeguruprofiler/pom.xml | 2 +- services/codegurureviewer/pom.xml | 2 +- services/codepipeline/pom.xml | 2 +- services/codestar/pom.xml | 2 +- services/codestarconnections/pom.xml | 2 +- services/codestarnotifications/pom.xml | 2 +- services/cognitoidentity/pom.xml | 2 +- services/cognitoidentityprovider/pom.xml | 2 +- services/cognitosync/pom.xml | 2 +- services/comprehend/pom.xml | 2 +- services/comprehendmedical/pom.xml | 2 +- services/computeoptimizer/pom.xml | 2 +- services/config/pom.xml | 2 +- services/connect/pom.xml | 2 +- services/connectcampaigns/pom.xml | 2 +- services/connectcases/pom.xml | 2 +- services/connectcontactlens/pom.xml | 2 +- services/connectparticipant/pom.xml | 2 +- services/controltower/pom.xml | 2 +- services/costandusagereport/pom.xml | 2 +- services/costexplorer/pom.xml | 2 +- services/customerprofiles/pom.xml | 2 +- services/databasemigration/pom.xml | 2 +- services/databrew/pom.xml | 2 +- services/dataexchange/pom.xml | 2 +- services/datapipeline/pom.xml | 2 +- services/datasync/pom.xml | 2 +- services/dax/pom.xml | 2 +- services/detective/pom.xml | 2 +- services/devicefarm/pom.xml | 2 +- services/devopsguru/pom.xml | 2 +- services/directconnect/pom.xml | 2 +- services/directory/pom.xml | 2 +- services/dlm/pom.xml | 2 +- services/docdb/pom.xml | 2 +- services/docdbelastic/pom.xml | 2 +- services/drs/pom.xml | 2 +- services/dynamodb/pom.xml | 2 +- services/ebs/pom.xml | 2 +- 
services/ec2/pom.xml | 2 +- services/ec2instanceconnect/pom.xml | 2 +- services/ecr/pom.xml | 2 +- services/ecrpublic/pom.xml | 2 +- services/ecs/pom.xml | 2 +- services/efs/pom.xml | 2 +- services/eks/pom.xml | 2 +- services/elasticache/pom.xml | 2 +- services/elasticbeanstalk/pom.xml | 2 +- services/elasticinference/pom.xml | 2 +- services/elasticloadbalancing/pom.xml | 2 +- services/elasticloadbalancingv2/pom.xml | 2 +- services/elasticsearch/pom.xml | 2 +- services/elastictranscoder/pom.xml | 2 +- services/emr/pom.xml | 2 +- services/emrcontainers/pom.xml | 2 +- services/emrserverless/pom.xml | 2 +- services/eventbridge/pom.xml | 2 +- services/evidently/pom.xml | 2 +- services/finspace/pom.xml | 2 +- services/finspacedata/pom.xml | 2 +- services/firehose/pom.xml | 2 +- services/fis/pom.xml | 2 +- services/fms/pom.xml | 2 +- services/forecast/pom.xml | 2 +- services/forecastquery/pom.xml | 2 +- services/frauddetector/pom.xml | 2 +- services/fsx/pom.xml | 2 +- services/gamelift/pom.xml | 2 +- services/gamesparks/pom.xml | 2 +- services/glacier/pom.xml | 2 +- services/globalaccelerator/pom.xml | 2 +- services/glue/pom.xml | 2 +- services/grafana/pom.xml | 2 +- services/greengrass/pom.xml | 2 +- services/greengrassv2/pom.xml | 2 +- services/groundstation/pom.xml | 2 +- services/guardduty/pom.xml | 2 +- services/health/pom.xml | 2 +- services/healthlake/pom.xml | 2 +- services/honeycode/pom.xml | 2 +- services/iam/pom.xml | 2 +- services/identitystore/pom.xml | 2 +- services/imagebuilder/pom.xml | 2 +- services/inspector/pom.xml | 2 +- services/inspector2/pom.xml | 2 +- services/internetmonitor/pom.xml | 2 +- services/iot/pom.xml | 2 +- services/iot1clickdevices/pom.xml | 2 +- services/iot1clickprojects/pom.xml | 2 +- services/iotanalytics/pom.xml | 2 +- services/iotdataplane/pom.xml | 2 +- services/iotdeviceadvisor/pom.xml | 2 +- services/iotevents/pom.xml | 2 +- services/ioteventsdata/pom.xml | 2 +- services/iotfleethub/pom.xml | 2 +- services/iotfleetwise/pom.xml 
| 2 +- services/iotjobsdataplane/pom.xml | 2 +- services/iotroborunner/pom.xml | 2 +- services/iotsecuretunneling/pom.xml | 2 +- services/iotsitewise/pom.xml | 2 +- services/iotthingsgraph/pom.xml | 2 +- services/iottwinmaker/pom.xml | 2 +- services/iotwireless/pom.xml | 2 +- services/ivs/pom.xml | 2 +- services/ivschat/pom.xml | 2 +- services/ivsrealtime/pom.xml | 2 +- services/kafka/pom.xml | 2 +- services/kafkaconnect/pom.xml | 2 +- services/kendra/pom.xml | 2 +- services/kendraranking/pom.xml | 2 +- services/keyspaces/pom.xml | 2 +- services/kinesis/pom.xml | 2 +- services/kinesisanalytics/pom.xml | 2 +- services/kinesisanalyticsv2/pom.xml | 2 +- services/kinesisvideo/pom.xml | 2 +- services/kinesisvideoarchivedmedia/pom.xml | 2 +- services/kinesisvideomedia/pom.xml | 2 +- services/kinesisvideosignaling/pom.xml | 2 +- services/kinesisvideowebrtcstorage/pom.xml | 2 +- services/kms/pom.xml | 2 +- services/lakeformation/pom.xml | 2 +- services/lambda/pom.xml | 2 +- services/lexmodelbuilding/pom.xml | 2 +- services/lexmodelsv2/pom.xml | 2 +- services/lexruntime/pom.xml | 2 +- services/lexruntimev2/pom.xml | 2 +- services/licensemanager/pom.xml | 2 +- .../licensemanagerlinuxsubscriptions/pom.xml | 2 +- .../licensemanagerusersubscriptions/pom.xml | 2 +- services/lightsail/pom.xml | 2 +- services/location/pom.xml | 2 +- services/lookoutequipment/pom.xml | 2 +- services/lookoutmetrics/pom.xml | 2 +- services/lookoutvision/pom.xml | 2 +- services/m2/pom.xml | 2 +- services/machinelearning/pom.xml | 2 +- services/macie/pom.xml | 2 +- services/macie2/pom.xml | 2 +- services/managedblockchain/pom.xml | 2 +- services/marketplacecatalog/pom.xml | 2 +- services/marketplacecommerceanalytics/pom.xml | 2 +- services/marketplaceentitlement/pom.xml | 2 +- services/marketplacemetering/pom.xml | 2 +- services/mediaconnect/pom.xml | 2 +- services/mediaconvert/pom.xml | 2 +- services/medialive/pom.xml | 2 +- services/mediapackage/pom.xml | 2 +- services/mediapackagev2/pom.xml | 2 +- 
services/mediapackagevod/pom.xml | 2 +- services/mediastore/pom.xml | 2 +- services/mediastoredata/pom.xml | 2 +- services/mediatailor/pom.xml | 2 +- services/memorydb/pom.xml | 2 +- services/mgn/pom.xml | 2 +- services/migrationhub/pom.xml | 2 +- services/migrationhubconfig/pom.xml | 2 +- services/migrationhuborchestrator/pom.xml | 2 +- services/migrationhubrefactorspaces/pom.xml | 2 +- services/migrationhubstrategy/pom.xml | 2 +- services/mobile/pom.xml | 2 +- services/mq/pom.xml | 2 +- services/mturk/pom.xml | 2 +- services/mwaa/pom.xml | 2 +- services/neptune/pom.xml | 2 +- services/networkfirewall/pom.xml | 2 +- services/networkmanager/pom.xml | 2 +- services/nimble/pom.xml | 2 +- services/oam/pom.xml | 2 +- services/omics/pom.xml | 2 +- services/opensearch/pom.xml | 2 +- services/opensearchserverless/pom.xml | 2 +- services/opsworks/pom.xml | 2 +- services/opsworkscm/pom.xml | 2 +- services/organizations/pom.xml | 2 +- services/osis/pom.xml | 2 +- services/outposts/pom.xml | 2 +- services/panorama/pom.xml | 2 +- services/personalize/pom.xml | 2 +- services/personalizeevents/pom.xml | 2 +- services/personalizeruntime/pom.xml | 2 +- services/pi/pom.xml | 2 +- services/pinpoint/pom.xml | 2 +- services/pinpointemail/pom.xml | 2 +- services/pinpointsmsvoice/pom.xml | 2 +- services/pinpointsmsvoicev2/pom.xml | 2 +- services/pipes/pom.xml | 2 +- services/polly/pom.xml | 2 +- services/pom.xml | 2 +- services/pricing/pom.xml | 2 +- services/privatenetworks/pom.xml | 2 +- services/proton/pom.xml | 2 +- services/qldb/pom.xml | 2 +- services/qldbsession/pom.xml | 2 +- services/quicksight/pom.xml | 2 +- services/ram/pom.xml | 2 +- services/rbin/pom.xml | 2 +- services/rds/pom.xml | 2 +- services/rdsdata/pom.xml | 2 +- services/redshift/pom.xml | 2 +- services/redshiftdata/pom.xml | 2 +- services/redshiftserverless/pom.xml | 2 +- services/rekognition/pom.xml | 2 +- services/resiliencehub/pom.xml | 2 +- services/resourceexplorer2/pom.xml | 2 +- 
services/resourcegroups/pom.xml | 2 +- services/resourcegroupstaggingapi/pom.xml | 2 +- services/robomaker/pom.xml | 2 +- services/rolesanywhere/pom.xml | 2 +- services/route53/pom.xml | 2 +- services/route53domains/pom.xml | 2 +- services/route53recoverycluster/pom.xml | 2 +- services/route53recoverycontrolconfig/pom.xml | 2 +- services/route53recoveryreadiness/pom.xml | 2 +- services/route53resolver/pom.xml | 2 +- services/rum/pom.xml | 2 +- services/s3/pom.xml | 2 +- services/s3control/pom.xml | 2 +- services/s3outposts/pom.xml | 2 +- services/sagemaker/pom.xml | 2 +- services/sagemakera2iruntime/pom.xml | 2 +- services/sagemakeredge/pom.xml | 2 +- services/sagemakerfeaturestoreruntime/pom.xml | 2 +- services/sagemakergeospatial/pom.xml | 2 +- services/sagemakermetrics/pom.xml | 2 +- services/sagemakerruntime/pom.xml | 2 +- services/savingsplans/pom.xml | 2 +- services/scheduler/pom.xml | 2 +- services/schemas/pom.xml | 2 +- services/secretsmanager/pom.xml | 2 +- services/securityhub/pom.xml | 2 +- services/securitylake/pom.xml | 2 +- .../serverlessapplicationrepository/pom.xml | 2 +- services/servicecatalog/pom.xml | 2 +- services/servicecatalogappregistry/pom.xml | 2 +- services/servicediscovery/pom.xml | 2 +- services/servicequotas/pom.xml | 2 +- services/ses/pom.xml | 2 +- services/sesv2/pom.xml | 2 +- services/sfn/pom.xml | 2 +- services/shield/pom.xml | 2 +- services/signer/pom.xml | 2 +- services/simspaceweaver/pom.xml | 2 +- services/sms/pom.xml | 2 +- services/snowball/pom.xml | 2 +- services/snowdevicemanagement/pom.xml | 2 +- services/sns/pom.xml | 2 +- services/sqs/pom.xml | 2 +- services/ssm/pom.xml | 2 +- services/ssmcontacts/pom.xml | 2 +- services/ssmincidents/pom.xml | 2 +- services/ssmsap/pom.xml | 2 +- services/sso/pom.xml | 2 +- services/ssoadmin/pom.xml | 2 +- services/ssooidc/pom.xml | 2 +- services/storagegateway/pom.xml | 2 +- services/sts/pom.xml | 2 +- services/support/pom.xml | 2 +- services/supportapp/pom.xml | 2 +- 
services/swf/pom.xml | 2 +- services/synthetics/pom.xml | 2 +- services/textract/pom.xml | 2 +- services/timestreamquery/pom.xml | 2 +- services/timestreamwrite/pom.xml | 2 +- services/tnb/pom.xml | 2 +- services/transcribe/pom.xml | 2 +- services/transcribestreaming/pom.xml | 2 +- services/transfer/pom.xml | 2 +- services/translate/pom.xml | 2 +- services/voiceid/pom.xml | 2 +- services/vpclattice/pom.xml | 2 +- services/waf/pom.xml | 2 +- services/wafv2/pom.xml | 2 +- services/wellarchitected/pom.xml | 2 +- services/wisdom/pom.xml | 2 +- services/workdocs/pom.xml | 2 +- services/worklink/pom.xml | 2 +- services/workmail/pom.xml | 2 +- services/workmailmessageflow/pom.xml | 2 +- services/workspaces/pom.xml | 2 +- services/workspacesweb/pom.xml | 2 +- services/xray/pom.xml | 2 +- test/auth-tests/pom.xml | 2 +- test/codegen-generated-classes-test/pom.xml | 2 +- test/http-client-tests/pom.xml | 2 +- test/module-path-tests/pom.xml | 2 +- test/protocol-tests-core/pom.xml | 2 +- test/protocol-tests/pom.xml | 2 +- test/region-testing/pom.xml | 2 +- test/ruleset-testing-core/pom.xml | 2 +- test/s3-benchmarks/pom.xml | 2 +- test/sdk-benchmarks/pom.xml | 2 +- test/sdk-native-image-test/pom.xml | 2 +- test/service-test-utils/pom.xml | 2 +- test/stability-tests/pom.xml | 2 +- test/test-utils/pom.xml | 2 +- test/tests-coverage-reporting/pom.xml | 2 +- third-party/pom.xml | 2 +- third-party/third-party-jackson-core/pom.xml | 2 +- .../pom.xml | 2 +- utils/pom.xml | 2 +- 416 files changed, 478 insertions(+), 447 deletions(-) create mode 100644 .changes/2.20.81.json delete mode 100644 .changes/next-release/feature-AWSCloudFormation-7d1f406.json delete mode 100644 .changes/next-release/feature-AWSDirectConnect-b05812c.json delete mode 100644 .changes/next-release/feature-AWSIoTCoreDeviceAdvisor-c1301c9.json delete mode 100644 .changes/next-release/feature-AmazonCloudWatchLogs-639f2a0.json delete mode 100644 .changes/next-release/feature-AmazonConnectCustomerProfiles-88d780f.json 
delete mode 100644 .changes/next-release/feature-AmazonEMRContainers-7bd72fe.json diff --git a/.changes/2.20.81.json b/.changes/2.20.81.json new file mode 100644 index 000000000000..edbde98b38f4 --- /dev/null +++ b/.changes/2.20.81.json @@ -0,0 +1,42 @@ +{ + "version": "2.20.81", + "date": "2023-06-07", + "entries": [ + { + "type": "feature", + "category": "AWS CloudFormation", + "contributor": "", + "description": "AWS CloudFormation StackSets is updating the deployment experience for all stackset operations to skip suspended AWS accounts during deployments. StackSets will skip target AWS accounts that are suspended and set the Detailed Status of the corresponding stack instances as SKIPPED_SUSPENDED_ACCOUNT" + }, + { + "type": "feature", + "category": "AWS Direct Connect", + "contributor": "", + "description": "This update corrects the jumbo frames mtu values from 9100 to 8500 for transit virtual interfaces." + }, + { + "type": "feature", + "category": "AWS IoT Core Device Advisor", + "contributor": "", + "description": "AWS IoT Core Device Advisor now supports new Qualification Suite test case list. With this update, customers can more easily create new qualification test suite with an empty rootGroup input." + }, + { + "type": "feature", + "category": "Amazon CloudWatch Logs", + "contributor": "", + "description": "This change adds support for account level data protection policies using 3 new APIs, PutAccountPolicy, DeleteAccountPolicy and DescribeAccountPolicy. DescribeLogGroup API has been modified to indicate if account level policy is applied to the LogGroup via \"inheritedProperties\" list in the response." + }, + { + "type": "feature", + "category": "Amazon Connect Customer Profiles", + "contributor": "", + "description": "This release introduces event stream related APIs." 
+ }, + { + "type": "feature", + "category": "Amazon EMR Containers", + "contributor": "", + "description": "EMR on EKS adds support for log rotation of Spark container logs with EMR-6.11.0 onwards, to the StartJobRun API." + } + ] +} \ No newline at end of file diff --git a/.changes/next-release/feature-AWSCloudFormation-7d1f406.json b/.changes/next-release/feature-AWSCloudFormation-7d1f406.json deleted file mode 100644 index c6aec22ad8fd..000000000000 --- a/.changes/next-release/feature-AWSCloudFormation-7d1f406.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS CloudFormation", - "contributor": "", - "description": "AWS CloudFormation StackSets is updating the deployment experience for all stackset operations to skip suspended AWS accounts during deployments. StackSets will skip target AWS accounts that are suspended and set the Detailed Status of the corresponding stack instances as SKIPPED_SUSPENDED_ACCOUNT" -} diff --git a/.changes/next-release/feature-AWSDirectConnect-b05812c.json b/.changes/next-release/feature-AWSDirectConnect-b05812c.json deleted file mode 100644 index 2df35f81601e..000000000000 --- a/.changes/next-release/feature-AWSDirectConnect-b05812c.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Direct Connect", - "contributor": "", - "description": "This update corrects the jumbo frames mtu values from 9100 to 8500 for transit virtual interfaces." -} diff --git a/.changes/next-release/feature-AWSIoTCoreDeviceAdvisor-c1301c9.json b/.changes/next-release/feature-AWSIoTCoreDeviceAdvisor-c1301c9.json deleted file mode 100644 index ef2354c65921..000000000000 --- a/.changes/next-release/feature-AWSIoTCoreDeviceAdvisor-c1301c9.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS IoT Core Device Advisor", - "contributor": "", - "description": "AWS IoT Core Device Advisor now supports new Qualification Suite test case list. 
With this update, customers can more easily create new qualification test suite with an empty rootGroup input." -} diff --git a/.changes/next-release/feature-AmazonCloudWatchLogs-639f2a0.json b/.changes/next-release/feature-AmazonCloudWatchLogs-639f2a0.json deleted file mode 100644 index 7ee2f8dad8be..000000000000 --- a/.changes/next-release/feature-AmazonCloudWatchLogs-639f2a0.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon CloudWatch Logs", - "contributor": "", - "description": "This change adds support for account level data protection policies using 3 new APIs, PutAccountPolicy, DeleteAccountPolicy and DescribeAccountPolicy. DescribeLogGroup API has been modified to indicate if account level policy is applied to the LogGroup via \"inheritedProperties\" list in the response." -} diff --git a/.changes/next-release/feature-AmazonConnectCustomerProfiles-88d780f.json b/.changes/next-release/feature-AmazonConnectCustomerProfiles-88d780f.json deleted file mode 100644 index bbbd0442156d..000000000000 --- a/.changes/next-release/feature-AmazonConnectCustomerProfiles-88d780f.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Connect Customer Profiles", - "contributor": "", - "description": "This release introduces event stream related APIs." -} diff --git a/.changes/next-release/feature-AmazonEMRContainers-7bd72fe.json b/.changes/next-release/feature-AmazonEMRContainers-7bd72fe.json deleted file mode 100644 index 8d57c4d7f172..000000000000 --- a/.changes/next-release/feature-AmazonEMRContainers-7bd72fe.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon EMR Containers", - "contributor": "", - "description": "EMR on EKS adds support for log rotation of Spark container logs with EMR-6.11.0 onwards, to the StartJobRun API." 
-} diff --git a/CHANGELOG.md b/CHANGELOG.md index a70f3b950245..abffde95d11a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,28 @@ +# __2.20.81__ __2023-06-07__ +## __AWS CloudFormation__ + - ### Features + - AWS CloudFormation StackSets is updating the deployment experience for all stackset operations to skip suspended AWS accounts during deployments. StackSets will skip target AWS accounts that are suspended and set the Detailed Status of the corresponding stack instances as SKIPPED_SUSPENDED_ACCOUNT + +## __AWS Direct Connect__ + - ### Features + - This update corrects the jumbo frames mtu values from 9100 to 8500 for transit virtual interfaces. + +## __AWS IoT Core Device Advisor__ + - ### Features + - AWS IoT Core Device Advisor now supports new Qualification Suite test case list. With this update, customers can more easily create new qualification test suite with an empty rootGroup input. + +## __Amazon CloudWatch Logs__ + - ### Features + - This change adds support for account level data protection policies using 3 new APIs, PutAccountPolicy, DeleteAccountPolicy and DescribeAccountPolicy. DescribeLogGroup API has been modified to indicate if account level policy is applied to the LogGroup via "inheritedProperties" list in the response. + +## __Amazon Connect Customer Profiles__ + - ### Features + - This release introduces event stream related APIs. + +## __Amazon EMR Containers__ + - ### Features + - EMR on EKS adds support for log rotation of Spark container logs with EMR-6.11.0 onwards, to the StartJobRun API. + # __2.20.80__ __2023-06-06__ ## __AWS Identity and Access Management__ - ### Features diff --git a/README.md b/README.md index 832ed3c49a1e..7a19156ce2bc 100644 --- a/README.md +++ b/README.md @@ -52,7 +52,7 @@ To automatically manage module versions (currently all modules have the same verThe KMS key that is used to encrypt the user's data stores in Athena.
" } }, - "documentation":"Specifies the KMS key that is used to encrypt the user's data stores in Athena.
" + "documentation":"Specifies the KMS key that is used to encrypt the user's data stores in Athena. This setting does not apply to Athena SQL workgroups.
" }, "DataCatalog":{ "type":"structure", @@ -2019,6 +2019,10 @@ "AdditionalConfigs":{ "shape":"ParametersMap", "documentation":"Contains additional notebook engine MAP<string, string> parameter mappings in the form of key-value pairs. To specify an Athena notebook that the Jupyter server will download and serve, specify a value for the StartSessionRequest$NotebookVersion field, and then add a key named NotebookId to AdditionalConfigs that has the value of the Athena notebook ID.
Specifies custom jar files and Spark properties for use cases like cluster encryption, table formats, and general Spark tuning.
" } }, "documentation":"Contains data processing unit (DPU) configuration settings and parameter mappings for a notebook engine.
" @@ -4783,7 +4787,7 @@ }, "CustomerContentEncryptionConfiguration":{ "shape":"CustomerContentEncryptionConfiguration", - "documentation":"Specifies the KMS key that is used to encrypt the user's data stores in Athena.
" + "documentation":"Specifies the KMS key that is used to encrypt the user's data stores in Athena. This setting does not apply to Athena SQL workgroups.
" }, "EnableMinimumEncryptionConfiguration":{ "shape":"BoxedBoolean", @@ -4825,7 +4829,7 @@ }, "RemoveCustomerContentEncryptionConfiguration":{ "shape":"BoxedBoolean", - "documentation":"Removes content encryption configuration for a workgroup.
" + "documentation":"Removes content encryption configuration from an Apache Spark-enabled Athena workgroup.
" }, "AdditionalConfiguration":{ "shape":"NameString", From f93f9a43d18bcb6b60c999f6d559367c9baa1621 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 8 Jun 2023 18:06:22 +0000 Subject: [PATCH 065/317] Payment Cryptography Control Plane Update: Initial release of AWS Payment Cryptography Control Plane service for creating and managing cryptographic keys used during card payment processing. --- ...ymentCryptographyControlPlane-201dbc1.json | 6 + services/paymentcryptography/pom.xml | 60 + .../codegen-resources/endpoint-rule-set.json | 350 ++++ .../codegen-resources/endpoint-tests.json | 295 +++ .../codegen-resources/paginators-1.json | 22 + .../codegen-resources/service-2.json | 1640 +++++++++++++++++ 6 files changed, 2373 insertions(+) create mode 100644 .changes/next-release/feature-PaymentCryptographyControlPlane-201dbc1.json create mode 100644 services/paymentcryptography/pom.xml create mode 100644 services/paymentcryptography/src/main/resources/codegen-resources/endpoint-rule-set.json create mode 100644 services/paymentcryptography/src/main/resources/codegen-resources/endpoint-tests.json create mode 100644 services/paymentcryptography/src/main/resources/codegen-resources/paginators-1.json create mode 100644 services/paymentcryptography/src/main/resources/codegen-resources/service-2.json diff --git a/.changes/next-release/feature-PaymentCryptographyControlPlane-201dbc1.json b/.changes/next-release/feature-PaymentCryptographyControlPlane-201dbc1.json new file mode 100644 index 000000000000..9e5b26bb2681 --- /dev/null +++ b/.changes/next-release/feature-PaymentCryptographyControlPlane-201dbc1.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Payment Cryptography Control Plane", + "contributor": "", + "description": "Initial release of AWS Payment Cryptography Control Plane service for creating and managing cryptographic keys used during card payment processing." 
+} diff --git a/services/paymentcryptography/pom.xml b/services/paymentcryptography/pom.xml new file mode 100644 index 000000000000..4326a270872f --- /dev/null +++ b/services/paymentcryptography/pom.xml @@ -0,0 +1,60 @@ + + + +Creates an alias, or a friendly name, for an Amazon Web Services Payment Cryptography key. You can use an alias to identify a key in the console and when you call cryptographic operations such as EncryptData or DecryptData.
You can associate the alias with any key in the same Amazon Web Services Region. Each alias is associated with only one key at a time, but a key can have multiple aliases. You can't create an alias without a key. The alias must be unique in the account and Amazon Web Services Region, but you can create another alias with the same name in a different Amazon Web Services Region.
To change the key that's associated with the alias, call UpdateAlias. To delete the alias, call DeleteAlias. These operations don't affect the underlying key. To get the alias that you created, call ListAliases.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "CreateKey":{ + "name":"CreateKey", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateKeyInput"}, + "output":{"shape":"CreateKeyOutput"}, + "errors":[ + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Creates an Amazon Web Services Payment Cryptography key, a logical representation of a cryptographic key, that is unique in your account and Amazon Web Services Region. You use keys for cryptographic functions such as encryption and decryption.
In addition to the key material used in cryptographic operations, an Amazon Web Services Payment Cryptography key includes metadata such as the key ARN, key usage, key origin, creation date, description, and key state.
When you create a key, you specify both immutable and mutable data about the key. The immutable data contains key attributes that defines the scope and cryptographic operations that you can perform using the key, for example key class (example: SYMMETRIC_KEY), key algorithm (example: TDES_2KEY), key usage (example: TR31_P0_PIN_ENCRYPTION_KEY) and key modes of use (example: Encrypt). For information about valid combinations of key attributes, see Understanding key attributes in the Amazon Web Services Payment Cryptography User Guide. The mutable data contained within a key includes usage timestamp and key deletion timestamp and can be modified after creation.
Amazon Web Services Payment Cryptography binds key attributes to keys using key blocks when you store or export them. Amazon Web Services Payment Cryptography stores the key contents wrapped and never stores or transmits them in the clear.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "DeleteAlias":{ + "name":"DeleteAlias", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteAliasInput"}, + "output":{"shape":"DeleteAliasOutput"}, + "errors":[ + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Deletes the alias, but doesn't affect the underlying key.
Each key can have multiple aliases. To get the aliases of all keys, use the ListAliases operation. To change the alias of a key, first use DeleteAlias to delete the current alias and then use CreateAlias to create a new alias. To associate an existing alias with a different key, call UpdateAlias.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "DeleteKey":{ + "name":"DeleteKey", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteKeyInput"}, + "output":{"shape":"DeleteKeyOutput"}, + "errors":[ + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Deletes the key material and all metadata associated with Amazon Web Services Payment Cryptography key.
Key deletion is irreversible. After a key is deleted, you can't perform cryptographic operations using the key. For example, you can't decrypt data that was encrypted by a deleted Amazon Web Services Payment Cryptography key, and the data may become unrecoverable. Because key deletion is destructive, Amazon Web Services Payment Cryptography has a safety mechanism to prevent accidental deletion of a key. When you call this operation, Amazon Web Services Payment Cryptography disables the specified key but doesn't delete it until after a waiting period. The default waiting period is 7 days. To set a different waiting period, set DeleteKeyInDays. During the waiting period, the KeyState is DELETE_PENDING. After the key is deleted, the KeyState is DELETE_COMPLETE.
If you delete key material, you can use ImportKey to reimport the same key material into the Amazon Web Services Payment Cryptography key.
You should delete a key only when you are sure that you don't need to use it anymore and no other parties are utilizing this key. If you aren't sure, consider deactivating it instead by calling StopKeyUsage.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "ExportKey":{ + "name":"ExportKey", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ExportKeyInput"}, + "output":{"shape":"ExportKeyOutput"}, + "errors":[ + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Exports a key from Amazon Web Services Payment Cryptography using either ANSI X9 TR-34 or TR-31 key export standard.
Amazon Web Services Payment Cryptography simplifies the main or root key exchange process by eliminating the need for a paper-based key exchange process. It takes a modern and secure approach based on the ANSI X9 TR-34 key exchange standard.
You can use ExportKey to export main or root keys such as KEK (Key Encryption Key), using an asymmetric key exchange technique that follows the ANSI X9 TR-34 standard. The ANSI X9 TR-34 standard uses asymmetric keys to establish bi-directional trust between the two parties exchanging keys. After that, you can export working keys using the ANSI X9 TR-31 symmetric key exchange standard, as mandated by PCI PIN. Using this operation, you can share your Amazon Web Services Payment Cryptography generated keys with other service partners to perform cryptographic operations outside of Amazon Web Services Payment Cryptography.
TR-34 key export
Amazon Web Services Payment Cryptography uses the TR-34 asymmetric key exchange standard to export main keys such as KEK. In TR-34 terminology, the sending party of the key is called the Key Distribution Host (KDH) and the receiving party of the key is called the Key Receiving Host (KRH). In the key export process, the KDH is Amazon Web Services Payment Cryptography, which initiates the key export, and the KRH is the user receiving the key. Before you initiate TR-34 key export, you must obtain an export token by calling GetParametersForExport. This operation also returns the signing key certificate that the KDH uses to sign the wrapped key to generate a TR-34 wrapped key block. The export token expires after 7 days.
Set the following parameters:
CertificateAuthorityPublicKeyIdentifier: The KeyARN of the certificate chain that will sign the wrapping key certificate. This must exist within Amazon Web Services Payment Cryptography before you initiate TR-34 key export. If it does not exist, you can import it by calling ImportKey for RootCertificatePublicKey.
ExportToken: Obtained from KDH by calling GetParametersForExport.
WrappingKeyCertificate: Amazon Web Services Payment Cryptography uses this to wrap the key under export.
When this operation is successful, Amazon Web Services Payment Cryptography returns the TR-34 wrapped key block.
TR-31 key export
Amazon Web Services Payment Cryptography uses TR-31 symmetric key exchange standard to export working keys. In TR-31, you must use a main key such as KEK to encrypt or wrap the key under export. To establish a KEK, you can use CreateKey or ImportKey. When this operation is successful, Amazon Web Services Payment Cryptography returns a TR-31 wrapped key block.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "GetAlias":{ + "name":"GetAlias", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetAliasInput"}, + "output":{"shape":"GetAliasOutput"}, + "errors":[ + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Gets the Amazon Web Services Payment Cryptography key associated with the alias.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "GetKey":{ + "name":"GetKey", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetKeyInput"}, + "output":{"shape":"GetKeyOutput"}, + "errors":[ + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Gets the key material for an Amazon Web Services Payment Cryptography key, including the immutable and mutable data specified when the key was created.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "GetParametersForExport":{ + "name":"GetParametersForExport", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetParametersForExportInput"}, + "output":{"shape":"GetParametersForExportOutput"}, + "errors":[ + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Gets the export token and the signing key certificate to initiate a TR-34 key export from Amazon Web Services Payment Cryptography.
The signing key certificate signs the wrapped key under export within the TR-34 key payload. The export token and signing key certificate must be in place and operational before calling ExportKey. The export token expires in 7 days. You can use the same export token to export multiple keys from your service account.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "GetParametersForImport":{ + "name":"GetParametersForImport", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetParametersForImportInput"}, + "output":{"shape":"GetParametersForImportOutput"}, + "errors":[ + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Gets the import token and the wrapping key certificate to initiate a TR-34 key import into Amazon Web Services Payment Cryptography.
The wrapping key certificate wraps the key under import within the TR-34 key payload. The import token and wrapping key certificate must be in place and operational before calling ImportKey. The import token expires in 7 days. The same import token can be used to import multiple keys into your service account.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "GetPublicKeyCertificate":{ + "name":"GetPublicKeyCertificate", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetPublicKeyCertificateInput"}, + "output":{"shape":"GetPublicKeyCertificateOutput"}, + "errors":[ + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Gets the public key certificate of the asymmetric key pair that exists within Amazon Web Services Payment Cryptography.
Unlike the private key of an asymmetric key pair, which never leaves Amazon Web Services Payment Cryptography unencrypted, callers with GetPublicKeyCertificate permission can download the public key certificate of the asymmetric key. You can share the public key certificate to allow others to encrypt messages and verify signatures outside of Amazon Web Services Payment Cryptography.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
" + }, + "ImportKey":{ + "name":"ImportKey", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ImportKeyInput"}, + "output":{"shape":"ImportKeyOutput"}, + "errors":[ + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Imports keys and public key certificates into Amazon Web Services Payment Cryptography.
Amazon Web Services Payment Cryptography simplifies the main or root key exchange process by eliminating the need for a paper-based key exchange process. It takes a modern and secure approach based on the ANSI X9 TR-34 key exchange standard.
You can use ImportKey to import main or root keys such as KEK (Key Encryption Key) using an asymmetric key exchange technique that follows the ANSI X9 TR-34 standard. The ANSI X9 TR-34 standard uses asymmetric keys to establish bi-directional trust between the two parties exchanging keys.
After you have imported a main or root key, you can import working keys to perform various cryptographic operations within Amazon Web Services Payment Cryptography using the ANSI X9 TR-31 symmetric key exchange standard as mandated by PCI PIN.
You can also import a root public key certificate, a self-signed certificate used to sign other public key certificates, or a trusted public key certificate under an already established root public key certificate.
To import a public root key certificate
Using this operation, you can import the public component (in PEM certificate format) of your private root key. You can use the imported public root key certificate for digital signatures, for example signing the wrapping key or signing key in TR-34, within your Amazon Web Services Payment Cryptography account.
Set the following parameters:
KeyMaterial: RootCertificatePublicKey
KeyClass: PUBLIC_KEY
KeyModesOfUse: Verify
KeyUsage: TR31_S0_ASYMMETRIC_KEY_FOR_DIGITAL_SIGNATURE
PublicKeyCertificate: The certificate authority used to sign the root public key certificate.
To import a trusted public key certificate
The root public key certificate must be in place and operational before you import a trusted public key certificate. Set the following parameters:
KeyMaterial: TrustedCertificatePublicKey
CertificateAuthorityPublicKeyIdentifier: KeyArn of the RootCertificatePublicKey.
KeyModesOfUse and KeyUsage: Corresponding to the cryptographic operations such as wrap, sign, or encrypt that you will allow the trusted public key certificate to perform.
PublicKeyCertificate: The certificate authority used to sign the trusted public key certificate.
Import main keys
Amazon Web Services Payment Cryptography uses the TR-34 asymmetric key exchange standard to import main keys such as KEK. In TR-34 terminology, the sending party of the key is called the Key Distribution Host (KDH) and the receiving party of the key is called the Key Receiving Host (KRH). During the key import process, the KDH is the user who initiates the key import and the KRH is Amazon Web Services Payment Cryptography, which receives the key. Before initiating TR-34 key import, you must obtain an import token by calling GetParametersForImport. This operation also returns the wrapping key certificate that the KDH uses to wrap the key under import to generate a TR-34 wrapped key block. The import token expires after 7 days.
Set the following parameters:
CertificateAuthorityPublicKeyIdentifier: The KeyArn of the certificate chain that will sign the signing key certificate and should exist within Amazon Web Services Payment Cryptography before initiating TR-34 key import. If it does not exist, you can import it by calling ImportKey for RootCertificatePublicKey.
ImportToken: Obtained from KRH by calling GetParametersForImport.
WrappedKeyBlock: The TR-34 wrapped key block from KDH. It contains the KDH key under import, wrapped with KRH provided wrapping key certificate and signed by the KDH private signing key. This TR-34 key block is generated by the KDH Hardware Security Module (HSM) outside of Amazon Web Services Payment Cryptography.
SigningKeyCertificate: The public component of the private key that signed the KDH TR-34 wrapped key block. In PEM certificate format.
TR-34 is intended primarily to exchange 3DES keys. Your ability to export AES-128 and larger AES keys may be dependent on your source system.
Import working keys
Amazon Web Services Payment Cryptography uses TR-31 symmetric key exchange standard to import working keys. A KEK must be established within Amazon Web Services Payment Cryptography by using TR-34 key import. To initiate a TR-31 key import, set the following parameters:
WrappedKeyBlock: The key under import and encrypted using KEK. The TR-31 key block generated by your HSM outside of Amazon Web Services Payment Cryptography.
WrappingKeyIdentifier: The KeyArn of the KEK that Amazon Web Services Payment Cryptography uses to decrypt or unwrap the key under import.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "ListAliases":{ + "name":"ListAliases", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListAliasesInput"}, + "output":{"shape":"ListAliasesOutput"}, + "errors":[ + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Lists the aliases for all keys in the caller's Amazon Web Services account and Amazon Web Services Region. You can filter the list of aliases. For more information, see Using aliases in the Amazon Web Services Payment Cryptography User Guide.
This is a paginated operation, which means that each response might contain only a subset of all the aliases. When the response contains only a subset of aliases, it includes a NextToken value. Use this value in a subsequent ListAliases request to get more aliases. When you receive a response with no NextToken (or an empty or null value), that means there are no more aliases to get.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "ListKeys":{ + "name":"ListKeys", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListKeysInput"}, + "output":{"shape":"ListKeysOutput"}, + "errors":[ + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Lists the keys in the caller's Amazon Web Services account and Amazon Web Services Region. You can filter the list of keys.
This is a paginated operation, which means that each response might contain only a subset of all the keys. When the response contains only a subset of keys, it includes a NextToken value. Use this value in a subsequent ListKeys request to get more keys. When you receive a response with no NextToken (or an empty or null value), that means there are no more keys to get.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "ListTagsForResource":{ + "name":"ListTagsForResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTagsForResourceInput"}, + "output":{"shape":"ListTagsForResourceOutput"}, + "errors":[ + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Lists the tags for an Amazon Web Services resource.
This is a paginated operation, which means that each response might contain only a subset of all the tags. When the response contains only a subset of tags, it includes a NextToken value. Use this value in a subsequent ListTagsForResource request to get more tags. When you receive a response with no NextToken (or an empty or null value), that means there are no more tags to get.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "RestoreKey":{ + "name":"RestoreKey", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"RestoreKeyInput"}, + "output":{"shape":"RestoreKeyOutput"}, + "errors":[ + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Cancels a scheduled key deletion during the waiting period. Use this operation to restore a Key that is scheduled for deletion.
During the waiting period, the KeyState is DELETE_PENDING and deletePendingTimestamp contains the date and time after which the key will be deleted. After the key is restored, the KeyState is CREATE_COMPLETE, and the value for deletePendingTimestamp is removed.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "StartKeyUsage":{ + "name":"StartKeyUsage", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StartKeyUsageInput"}, + "output":{"shape":"StartKeyUsageOutput"}, + "errors":[ + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Enables an Amazon Web Services Payment Cryptography key, which makes it active for cryptographic operations within Amazon Web Services Payment Cryptography
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "StopKeyUsage":{ + "name":"StopKeyUsage", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StopKeyUsageInput"}, + "output":{"shape":"StopKeyUsageOutput"}, + "errors":[ + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Disables an Amazon Web Services Payment Cryptography key, which makes it inactive within Amazon Web Services Payment Cryptography.
You can use this operation instead of DeleteKey to deactivate a key. You can enable the key in the future by calling StartKeyUsage.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "TagResource":{ + "name":"TagResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"TagResourceInput"}, + "output":{"shape":"TagResourceOutput"}, + "errors":[ + {"shape":"ServiceQuotaExceededException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Adds or edits tags on an Amazon Web Services Payment Cryptography key.
Tagging or untagging an Amazon Web Services Payment Cryptography key can allow or deny permission to the key.
Each tag consists of a tag key and a tag value, both of which are case-sensitive strings. The tag value can be an empty (null) string. To add a tag, specify a new tag key and a tag value. To edit a tag, specify an existing tag key and a new tag value. You can also add tags to an Amazon Web Services Payment Cryptography key when you create it with CreateKey.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "UntagResource":{ + "name":"UntagResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UntagResourceInput"}, + "output":{"shape":"UntagResourceOutput"}, + "errors":[ + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Deletes a tag from an Amazon Web Services Payment Cryptography key.
Tagging or untagging an Amazon Web Services Payment Cryptography key can allow or deny permission to the key.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "UpdateAlias":{ + "name":"UpdateAlias", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateAliasInput"}, + "output":{"shape":"UpdateAliasOutput"}, + "errors":[ + {"shape":"ServiceUnavailableException"}, + {"shape":"ValidationException"}, + {"shape":"ConflictException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Associates an existing Amazon Web Services Payment Cryptography alias with a different key. Each alias is associated with only one Amazon Web Services Payment Cryptography key at a time, although a key can have multiple aliases. The alias and the Amazon Web Services Payment Cryptography key must be in the same Amazon Web Services account and Amazon Web Services Region
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + } + }, + "shapes":{ + "AccessDeniedException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"You do not have sufficient access to perform this action.
", + "exception":true + }, + "Alias":{ + "type":"structure", + "required":["AliasName"], + "members":{ + "AliasName":{ + "shape":"AliasName", + "documentation":"A friendly name that you can use to refer to a key. The value must begin with alias/.
Do not include confidential or sensitive information in this field. This field may be displayed in plaintext in CloudTrail logs and other output.
The KeyARN of the key associated with the alias.
Contains information about an alias.
" + }, + "AliasName":{ + "type":"string", + "max":256, + "min":7, + "pattern":"^alias/[a-zA-Z0-9/_-]+$" + }, + "Aliases":{ + "type":"list", + "member":{"shape":"Alias"} + }, + "Boolean":{ + "type":"boolean", + "box":true + }, + "CertificateType":{ + "type":"string", + "max":32768, + "min":1, + "pattern":"^[^\\[;\\]<>]+$", + "sensitive":true + }, + "ConflictException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"This request can cause an inconsistent state for the resource.
", + "exception":true + }, + "CreateAliasInput":{ + "type":"structure", + "required":["AliasName"], + "members":{ + "AliasName":{ + "shape":"AliasName", + "documentation":"A friendly name that you can use to refer a key. An alias must begin with alias/ followed by a name, for example alias/ExampleAlias. It can contain only alphanumeric characters, forward slashes (/), underscores (_), and dashes (-).
Don't include confidential or sensitive information in this field. This field may be displayed in plaintext in CloudTrail logs and other output.
The KeyARN of the key to associate with the alias.
The alias for the key.
" + } + } + }, + "CreateKeyInput":{ + "type":"structure", + "required":[ + "Exportable", + "KeyAttributes" + ], + "members":{ + "Enabled":{ + "shape":"Boolean", + "documentation":"Specifies whether to enable the key. If the key is enabled, it is activated for use within the service. If the key not enabled, then it is created but not activated. The default value is enabled.
" + }, + "Exportable":{ + "shape":"Boolean", + "documentation":"Specifies whether the key is exportable from the service.
" + }, + "KeyAttributes":{ + "shape":"KeyAttributes", + "documentation":"The role of the key, the algorithm it supports, and the cryptographic operations allowed with the key. This data is immutable after the key is created.
" + }, + "KeyCheckValueAlgorithm":{ + "shape":"KeyCheckValueAlgorithm", + "documentation":"The algorithm that Amazon Web Services Payment Cryptography uses to calculate the key check value (KCV) for DES and AES keys.
For DES key, the KCV is computed by encrypting 8 bytes, each with value '00', with the key to be checked and retaining the 3 highest order bytes of the encrypted result. For AES key, the KCV is computed by encrypting 8 bytes, each with value '01', with the key to be checked and retaining the 3 highest order bytes of the encrypted result.
" + }, + "Tags":{ + "shape":"Tags", + "documentation":"The tags to attach to the key. Each tag consists of a tag key and a tag value. Both the tag key and the tag value are required, but the tag value can be an empty (null) string. You can't have more than one tag on an Amazon Web Services Payment Cryptography key with the same tag key.
To use this parameter, you must have TagResource permission.
Don't include confidential or sensitive information in this field. This field may be displayed in plaintext in CloudTrail logs and other output.
Tagging or untagging an Amazon Web Services Payment Cryptography key can allow or deny permission to the key.
The key material that contains all the key attributes.
" + } + } + }, + "DeleteAliasInput":{ + "type":"structure", + "required":["AliasName"], + "members":{ + "AliasName":{ + "shape":"AliasName", + "documentation":"A friendly name that you can use to refer Amazon Web Services Payment Cryptography key. This value must begin with alias/ followed by a name, such as alias/ExampleAlias.
The waiting period for key deletion. The default value is seven days.
" + }, + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The KeyARN of the key that is scheduled for deletion.
The KeyARN of the key that is scheduled for deletion.
The KeyARN of the key under export from Amazon Web Services Payment Cryptography.
The key block format type, for example, TR-34 or TR-31, to use during key material export.
" + } + } + }, + "ExportKeyMaterial":{ + "type":"structure", + "members":{ + "Tr31KeyBlock":{ + "shape":"ExportTr31KeyBlock", + "documentation":"Parameter information for key material export using TR-31 standard.
" + }, + "Tr34KeyBlock":{ + "shape":"ExportTr34KeyBlock", + "documentation":"Parameter information for key material export using TR-34 standard.
" + } + }, + "documentation":"Parameter information for key material export from Amazon Web Services Payment Cryptography.
", + "union":true + }, + "ExportKeyOutput":{ + "type":"structure", + "members":{ + "WrappedKey":{ + "shape":"WrappedKey", + "documentation":"The key material under export as a TR-34 or TR-31 wrapped key block.
" + } + } + }, + "ExportTokenId":{ + "type":"string", + "pattern":"^export-token-[0-9a-zA-Z]{16,64}$" + }, + "ExportTr31KeyBlock":{ + "type":"structure", + "required":["WrappingKeyIdentifier"], + "members":{ + "WrappingKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The KeyARN of the the wrapping key. This key encrypts or wraps the key under export for TR-31 key block generation.
Parameter information for key material export using TR-31 standard.
" + }, + "ExportTr34KeyBlock":{ + "type":"structure", + "required":[ + "CertificateAuthorityPublicKeyIdentifier", + "ExportToken", + "KeyBlockFormat", + "WrappingKeyCertificate" + ], + "members":{ + "CertificateAuthorityPublicKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The KeyARN of the certificate chain that signs the wrapping key certificate during TR-34 key export.
The export token to initiate key export from Amazon Web Services Payment Cryptography. It also contains the signing key certificate that will sign the wrapped key during TR-34 key block generation. Call GetParametersForExport to receive an export token. It expires after 7 days. You can use the same export token to export multiple keys from the same service account.
" + }, + "KeyBlockFormat":{ + "shape":"Tr34KeyBlockFormat", + "documentation":"The format of key block that Amazon Web Services Payment Cryptography will use during key export.
" + }, + "RandomNonce":{ + "shape":"HexLength16", + "documentation":"A random number value that is unique to the TR-34 key block generated using 2 pass. The operation will fail, if a random nonce value is not provided for a TR-34 key block generated using 2 pass.
" + }, + "WrappingKeyCertificate":{ + "shape":"CertificateType", + "documentation":"The KeyARN of the wrapping key certificate. Amazon Web Services Payment Cryptography uses this certificate to wrap the key under export.
Parameter information for key material export using TR-34 standard.
" + }, + "GetAliasInput":{ + "type":"structure", + "required":["AliasName"], + "members":{ + "AliasName":{ + "shape":"AliasName", + "documentation":"The alias of the Amazon Web Services Payment Cryptography key.
" + } + } + }, + "GetAliasOutput":{ + "type":"structure", + "required":["Alias"], + "members":{ + "Alias":{ + "shape":"Alias", + "documentation":"The alias of the Amazon Web Services Payment Cryptography key.
" + } + } + }, + "GetKeyInput":{ + "type":"structure", + "required":["KeyIdentifier"], + "members":{ + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The KeyARN of the Amazon Web Services Payment Cryptography key.
The key material, including the immutable and mutable data for the key.
" + } + } + }, + "GetParametersForExportInput":{ + "type":"structure", + "required":[ + "KeyMaterialType", + "SigningKeyAlgorithm" + ], + "members":{ + "KeyMaterialType":{ + "shape":"KeyMaterialType", + "documentation":"The key block format type (for example, TR-34 or TR-31) to use during key material export. Export token is only required for a TR-34 key export, TR34_KEY_BLOCK. Export token is not required for TR-31 key export.
The signing key algorithm to generate a signing key certificate. This certificate signs the wrapped key under export within the TR-34 key block cryptogram. RSA_2048 is the only signing key algorithm allowed.
The export token to initiate key export from Amazon Web Services Payment Cryptography. The export token expires after 7 days. You can use the same export token to export multiple keys from the same service account.
" + }, + "ParametersValidUntilTimestamp":{ + "shape":"Timestamp", + "documentation":"The validity period of the export token.
" + }, + "SigningKeyAlgorithm":{ + "shape":"KeyAlgorithm", + "documentation":"The algorithm of the signing key certificate for use in TR-34 key block generation. RSA_2048 is the only signing key algorithm allowed.
The signing key certificate of the public key for signature within the TR-34 key block cryptogram. The certificate expires after 7 days.
" + }, + "SigningKeyCertificateChain":{ + "shape":"CertificateType", + "documentation":"The certificate chain that signed the signing key certificate. This is the root certificate authority (CA) within your service account.
" + } + } + }, + "GetParametersForImportInput":{ + "type":"structure", + "required":[ + "KeyMaterialType", + "WrappingKeyAlgorithm" + ], + "members":{ + "KeyMaterialType":{ + "shape":"KeyMaterialType", + "documentation":"The key block format type such as TR-34 or TR-31 to use during key material import. Import token is only required for TR-34 key import TR34_KEY_BLOCK. Import token is not required for TR-31 key import.
The wrapping key algorithm to generate a wrapping key certificate. This certificate wraps the key under import within the TR-34 key block cryptogram. RSA_2048 is the only wrapping key algorithm allowed.
The import token to initiate key import into Amazon Web Services Payment Cryptography. The import token expires after 7 days. You can use the same import token to import multiple keys to the same service account.
" + }, + "ParametersValidUntilTimestamp":{ + "shape":"Timestamp", + "documentation":"The validity period of the import token.
" + }, + "WrappingKeyAlgorithm":{ + "shape":"KeyAlgorithm", + "documentation":"The algorithm of the wrapping key for use within TR-34 key block. RSA_2048 is the only wrapping key algorithm allowed.
The wrapping key certificate of the wrapping key for use within the TR-34 key block. The certificate expires in 7 days.
" + }, + "WrappingKeyCertificateChain":{ + "shape":"CertificateType", + "documentation":"The Amazon Web Services Payment Cryptography certificate chain that signed the wrapping key certificate. This is the root certificate authority (CA) within your service account.
" + } + } + }, + "GetPublicKeyCertificateInput":{ + "type":"structure", + "required":["KeyIdentifier"], + "members":{ + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The KeyARN of the asymmetric key pair.
The public key component of the asymmetric key pair in a certificate (PEM) format. It is signed by the root certificate authority (CA) within your service account. The certificate expires in 90 days.
" + }, + "KeyCertificateChain":{ + "shape":"CertificateType", + "documentation":"The certificate chain that signed the public key certificate of the asymmetric key pair. This is the root certificate authority (CA) within your service account.
" + } + } + }, + "HexLength16":{ + "type":"string", + "max":16, + "min":16, + "pattern":"^[0-9A-F]+$" + }, + "ImportKeyInput":{ + "type":"structure", + "required":["KeyMaterial"], + "members":{ + "Enabled":{ + "shape":"Boolean", + "documentation":"Specifies whether import key is enabled.
" + }, + "KeyCheckValueAlgorithm":{ + "shape":"KeyCheckValueAlgorithm", + "documentation":"The algorithm that Amazon Web Services Payment Cryptography uses to calculate the key check value (KCV) for DES and AES keys.
For DES key, the KCV is computed by encrypting 8 bytes, each with value '00', with the key to be checked and retaining the 3 highest order bytes of the encrypted result. For AES key, the KCV is computed by encrypting 8 bytes, each with value '01', with the key to be checked and retaining the 3 highest order bytes of the encrypted result.
" + }, + "KeyMaterial":{ + "shape":"ImportKeyMaterial", + "documentation":"The key or public key certificate type to use during key material import, for example TR-34 or RootCertificatePublicKey.
" + }, + "Tags":{ + "shape":"Tags", + "documentation":"The tags to attach to the key. Each tag consists of a tag key and a tag value. Both the tag key and the tag value are required, but the tag value can be an empty (null) string. You can't have more than one tag on an Amazon Web Services Payment Cryptography key with the same tag key.
You can't have more than one tag on an Amazon Web Services Payment Cryptography key with the same tag key. If you specify an existing tag key with a different tag value, Amazon Web Services Payment Cryptography replaces the current tag value with the specified one.
To use this parameter, you must have TagResource permission.
Don't include confidential or sensitive information in this field. This field may be displayed in plaintext in CloudTrail logs and other output.
Tagging or untagging an Amazon Web Services Payment Cryptography key can allow or deny permission to the key.
Parameter information for root public key certificate import.
" + }, + "Tr31KeyBlock":{ + "shape":"ImportTr31KeyBlock", + "documentation":"Parameter information for key material import using TR-31 standard.
" + }, + "Tr34KeyBlock":{ + "shape":"ImportTr34KeyBlock", + "documentation":"Parameter information for key material import using TR-34 standard.
" + }, + "TrustedCertificatePublicKey":{ + "shape":"TrustedCertificatePublicKey", + "documentation":"Parameter information for trusted public key certificate import.
" + } + }, + "documentation":"Parameter information for key material import.
", + "union":true + }, + "ImportKeyOutput":{ + "type":"structure", + "required":["Key"], + "members":{ + "Key":{ + "shape":"Key", + "documentation":"The KeyARN of the key material imported within Amazon Web Services Payment Cryptography.
The TR-34 wrapped key block to import.
" + }, + "WrappingKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The KeyARN of the key that will decrypt or unwrap a TR-31 key block during import.
Parameter information for key material import using TR-31 standard.
" + }, + "ImportTr34KeyBlock":{ + "type":"structure", + "required":[ + "CertificateAuthorityPublicKeyIdentifier", + "ImportToken", + "KeyBlockFormat", + "SigningKeyCertificate", + "WrappedKeyBlock" + ], + "members":{ + "CertificateAuthorityPublicKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The KeyARN of the certificate chain that signs the signing key certificate during TR-34 key import.
The import token that initiates key import into Amazon Web Services Payment Cryptography. It expires after 7 days. You can use the same import token to import multiple keys to the same service account.
" + }, + "KeyBlockFormat":{ + "shape":"Tr34KeyBlockFormat", + "documentation":"The key block format to use during key import. The only value allowed is X9_TR34_2012.
A random number value that is unique to the TR-34 key block generated using 2 pass. The operation will fail if a random nonce value is not provided for a TR-34 key block generated using 2 pass.
" + }, + "SigningKeyCertificate":{ + "shape":"CertificateType", + "documentation":"The public key component in PEM certificate format of the private key that signs the KDH TR-34 wrapped key block.
" + }, + "WrappedKeyBlock":{ + "shape":"Tr34WrappedKeyBlock", + "documentation":"The TR-34 wrapped key block to import.
" + } + }, + "documentation":"Parameter information for key material import using TR-34 standard.
" + }, + "InternalServerException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"The request processing has failed because of an unknown error, exception, or failure.
", + "exception":true, + "fault":true + }, + "Key":{ + "type":"structure", + "required":[ + "CreateTimestamp", + "Enabled", + "Exportable", + "KeyArn", + "KeyAttributes", + "KeyCheckValue", + "KeyCheckValueAlgorithm", + "KeyOrigin", + "KeyState" + ], + "members":{ + "CreateTimestamp":{ + "shape":"Timestamp", + "documentation":"The date and time when the key was created.
" + }, + "DeletePendingTimestamp":{ + "shape":"Timestamp", + "documentation":"The date and time after which Amazon Web Services Payment Cryptography will delete the key. This value is present only when KeyState is DELETE_PENDING and the key is scheduled for deletion.
The date and time after which Amazon Web Services Payment Cryptography will delete the key. This value is present only when the KeyState is DELETE_COMPLETE and the Amazon Web Services Payment Cryptography key is deleted.
Specifies whether the key is enabled.
" + }, + "Exportable":{ + "shape":"Boolean", + "documentation":"Specifies whether the key is exportable. This data is immutable after the key is created.
" + }, + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The Amazon Resource Name (ARN) of the key.
" + }, + "KeyAttributes":{ + "shape":"KeyAttributes", + "documentation":"The role of the key, the algorithm it supports, and the cryptographic operations allowed with the key. This data is immutable after the key is created.
" + }, + "KeyCheckValue":{ + "shape":"KeyCheckValue", + "documentation":"The key check value (KCV) is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes or \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + }, + "KeyCheckValueAlgorithm":{ + "shape":"KeyCheckValueAlgorithm", + "documentation":"The algorithm used for calculating key check value (KCV) for DES and AES keys. For a DES key, Amazon Web Services Payment Cryptography computes the KCV by encrypting 8 bytes, each with value '00', with the key to be checked and retaining the 3 highest order bytes of the encrypted result. For an AES key, Amazon Web Services Payment Cryptography computes the KCV by encrypting 8 bytes, each with value '01', with the key to be checked and retaining the 3 highest order bytes of the encrypted result.
" + }, + "KeyOrigin":{ + "shape":"KeyOrigin", + "documentation":"The source of the key material. For keys created within Amazon Web Services Payment Cryptography, the value is AWS_PAYMENT_CRYPTOGRAPHY. For keys imported into Amazon Web Services Payment Cryptography, the value is EXTERNAL.
The state of the key that is being created or deleted.
" + }, + "UsageStartTimestamp":{ + "shape":"Timestamp", + "documentation":"The date and time after which Amazon Web Services Payment Cryptography will start using the key material for cryptographic operations.
" + }, + "UsageStopTimestamp":{ + "shape":"Timestamp", + "documentation":"The date and time after which Amazon Web Services Payment Cryptography will stop using the key material for cryptographic operations.
" + } + }, + "documentation":"Metadata about an Amazon Web Services Payment Cryptography key.
" + }, + "KeyAlgorithm":{ + "type":"string", + "enum":[ + "TDES_2KEY", + "TDES_3KEY", + "AES_128", + "AES_192", + "AES_256", + "RSA_2048", + "RSA_3072", + "RSA_4096" + ] + }, + "KeyArn":{ + "type":"string", + "max":150, + "min":70, + "pattern":"^arn:aws:payment-cryptography:[a-z]{2}-[a-z]{1,16}-[0-9]+:[0-9]{12}:key/[0-9a-zA-Z]{16,64}$" + }, + "KeyArnOrKeyAliasType":{ + "type":"string", + "max":322, + "min":7, + "pattern":"^arn:aws:payment-cryptography:[a-z]{2}-[a-z]{1,16}-[0-9]+:[0-9]{12}:(key/[0-9a-zA-Z]{16,64}|alias/[a-zA-Z0-9/_-]+)$|^alias/[a-zA-Z0-9/_-]+$" + }, + "KeyAttributes":{ + "type":"structure", + "required":[ + "KeyAlgorithm", + "KeyClass", + "KeyModesOfUse", + "KeyUsage" + ], + "members":{ + "KeyAlgorithm":{ + "shape":"KeyAlgorithm", + "documentation":"The key algorithm to be use during creation of an Amazon Web Services Payment Cryptography key.
For symmetric keys, Amazon Web Services Payment Cryptography supports AES and TDES algorithms. For asymmetric keys, Amazon Web Services Payment Cryptography supports RSA and ECC_NIST algorithms.
The type of Amazon Web Services Payment Cryptography key to create, which determines the classification of the cryptographic method and whether Amazon Web Services Payment Cryptography key contains a symmetric key or an asymmetric key pair.
" + }, + "KeyModesOfUse":{ + "shape":"KeyModesOfUse", + "documentation":"The list of cryptographic operations that you can perform using the key.
" + }, + "KeyUsage":{ + "shape":"KeyUsage", + "documentation":"The cryptographic usage of an Amazon Web Services Payment Cryptography key as defined in section A.5.2 of the TR-31 spec.
" + } + }, + "documentation":"The role of the key, the algorithm it supports, and the cryptographic operations allowed with the key. This data is immutable after the key is created.
" + }, + "KeyCheckValue":{ + "type":"string", + "max":16, + "min":4, + "pattern":"^[0-9a-fA-F]+$" + }, + "KeyCheckValueAlgorithm":{ + "type":"string", + "enum":[ + "CMAC", + "ANSI_X9_24" + ] + }, + "KeyClass":{ + "type":"string", + "enum":[ + "SYMMETRIC_KEY", + "ASYMMETRIC_KEY_PAIR", + "PRIVATE_KEY", + "PUBLIC_KEY" + ] + }, + "KeyMaterial":{ + "type":"string", + "max":16384, + "min":48, + "sensitive":true + }, + "KeyMaterialType":{ + "type":"string", + "enum":[ + "TR34_KEY_BLOCK", + "TR31_KEY_BLOCK", + "ROOT_PUBLIC_KEY_CERTIFICATE", + "TRUSTED_PUBLIC_KEY_CERTIFICATE" + ] + }, + "KeyModesOfUse":{ + "type":"structure", + "members":{ + "Decrypt":{ + "shape":"PrimitiveBoolean", + "documentation":"Specifies whether an Amazon Web Services Payment Cryptography key can be used to decrypt data.
" + }, + "DeriveKey":{ + "shape":"PrimitiveBoolean", + "documentation":"Specifies whether an Amazon Web Services Payment Cryptography key can be used to derive new keys.
" + }, + "Encrypt":{ + "shape":"PrimitiveBoolean", + "documentation":"Specifies whether an Amazon Web Services Payment Cryptography key can be used to encrypt data.
" + }, + "Generate":{ + "shape":"PrimitiveBoolean", + "documentation":"Specifies whether an Amazon Web Services Payment Cryptography key can be used to generate and verify other card and PIN verification keys.
" + }, + "NoRestrictions":{ + "shape":"PrimitiveBoolean", + "documentation":"Specifies whether an Amazon Web Services Payment Cryptography key has no special restrictions other than the restrictions implied by KeyUsage.
Specifies whether an Amazon Web Services Payment Cryptography key can be used for signing.
" + }, + "Unwrap":{ + "shape":"PrimitiveBoolean", + "documentation":"Specifies whether an Amazon Web Services Payment Cryptography key can be used to unwrap other keys.
" + }, + "Verify":{ + "shape":"PrimitiveBoolean", + "documentation":"Specifies whether an Amazon Web Services Payment Cryptography key can be used to verify signatures.
" + }, + "Wrap":{ + "shape":"PrimitiveBoolean", + "documentation":"Specifies whether an Amazon Web Services Payment Cryptography key can be used to wrap other keys.
" + } + }, + "documentation":"The list of cryptographic operations that you can perform using the key. The modes of use are defined in section A.5.3 of the TR-31 spec.
" + }, + "KeyOrigin":{ + "type":"string", + "documentation":"Defines the source of a key
", + "enum":[ + "EXTERNAL", + "AWS_PAYMENT_CRYPTOGRAPHY" + ] + }, + "KeyState":{ + "type":"string", + "documentation":"Defines the state of a key
", + "enum":[ + "CREATE_IN_PROGRESS", + "CREATE_COMPLETE", + "DELETE_PENDING", + "DELETE_COMPLETE" + ] + }, + "KeySummary":{ + "type":"structure", + "required":[ + "Enabled", + "Exportable", + "KeyArn", + "KeyAttributes", + "KeyCheckValue", + "KeyState" + ], + "members":{ + "Enabled":{ + "shape":"Boolean", + "documentation":"Specifies whether the key is enabled.
" + }, + "Exportable":{ + "shape":"Boolean", + "documentation":"Specifies whether the key is exportable. This data is immutable after the key is created.
" + }, + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The Amazon Resource Name (ARN) of the key.
" + }, + "KeyAttributes":{ + "shape":"KeyAttributes", + "documentation":"The role of the key, the algorithm it supports, and the cryptographic operations allowed with the key. This data is immutable after the key is created.
" + }, + "KeyCheckValue":{ + "shape":"KeyCheckValue", + "documentation":"The key check value (KCV) is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes or \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + }, + "KeyState":{ + "shape":"KeyState", + "documentation":"The state of an Amazon Web Services Payment Cryptography that is being created or deleted.
" + } + }, + "documentation":"Metadata about an Amazon Web Services Payment Cryptography key.
" + }, + "KeySummaryList":{ + "type":"list", + "member":{"shape":"KeySummary"} + }, + "KeyUsage":{ + "type":"string", + "enum":[ + "TR31_B0_BASE_DERIVATION_KEY", + "TR31_C0_CARD_VERIFICATION_KEY", + "TR31_D0_SYMMETRIC_DATA_ENCRYPTION_KEY", + "TR31_D1_ASYMMETRIC_KEY_FOR_DATA_ENCRYPTION", + "TR31_E0_EMV_MKEY_APP_CRYPTOGRAMS", + "TR31_E1_EMV_MKEY_CONFIDENTIALITY", + "TR31_E2_EMV_MKEY_INTEGRITY", + "TR31_E4_EMV_MKEY_DYNAMIC_NUMBERS", + "TR31_E5_EMV_MKEY_CARD_PERSONALIZATION", + "TR31_E6_EMV_MKEY_OTHER", + "TR31_K0_KEY_ENCRYPTION_KEY", + "TR31_K1_KEY_BLOCK_PROTECTION_KEY", + "TR31_K3_ASYMMETRIC_KEY_FOR_KEY_AGREEMENT", + "TR31_M3_ISO_9797_3_MAC_KEY", + "TR31_M6_ISO_9797_5_CMAC_KEY", + "TR31_M7_HMAC_KEY", + "TR31_P0_PIN_ENCRYPTION_KEY", + "TR31_P1_PIN_GENERATION_KEY", + "TR31_S0_ASYMMETRIC_KEY_FOR_DIGITAL_SIGNATURE", + "TR31_V1_IBM3624_PIN_VERIFICATION_KEY", + "TR31_V2_VISA_PIN_VERIFICATION_KEY", + "TR31_K2_TR34_ASYMMETRIC_KEY" + ] + }, + "ListAliasesInput":{ + "type":"structure", + "members":{ + "MaxResults":{ + "shape":"MaxResults", + "documentation":"Use this parameter to specify the maximum number of items to return. When this value is present, Amazon Web Services Payment Cryptography does not return more than the specified number of items, but it might return fewer.
This value is optional. If you include a value, it must be between 1 and 100, inclusive. If you do not include a value, it defaults to 50.
" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"Use this parameter in a subsequent request after you receive a response with truncated results. Set it to the value of NextToken from the truncated response you just received.
The list of aliases. Each alias describes the KeyArn contained within.
The token for the next set of results, or an empty or null value if there are no more results.
" + } + } + }, + "ListKeysInput":{ + "type":"structure", + "members":{ + "KeyState":{ + "shape":"KeyState", + "documentation":"The key state of the keys you want to list.
" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"Use this parameter to specify the maximum number of items to return. When this value is present, Amazon Web Services Payment Cryptography does not return more than the specified number of items, but it might return fewer.
" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"Use this parameter in a subsequent request after you receive a response with truncated results. Set it to the value of NextToken from the truncated response you just received.
The list of keys created within the caller's Amazon Web Services account and Amazon Web Services Region.
" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"The token for the next set of results, or an empty or null value if there are no more results.
" + } + } + }, + "ListTagsForResourceInput":{ + "type":"structure", + "required":["ResourceArn"], + "members":{ + "MaxResults":{ + "shape":"MaxResults", + "documentation":"Use this parameter to specify the maximum number of items to return. When this value is present, Amazon Web Services Payment Cryptography does not return more than the specified number of items, but it might return fewer.
" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"Use this parameter in a subsequent request after you receive a response with truncated results. Set it to the value of NextToken from the truncated response you just received.
The KeyARN of the key whose tags you are getting.
The token for the next set of results, or an empty or null value if there are no more results.
" + }, + "Tags":{ + "shape":"Tags", + "documentation":"The list of tags associated with a ResourceArn. Each tag will list the key-value pair contained within that tag.
The string for the exception.
" + } + }, + "documentation":"The request was denied due to an invalid resource error.
", + "exception":true + }, + "RestoreKeyInput":{ + "type":"structure", + "required":["KeyIdentifier"], + "members":{ + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The KeyARN of the key to be restored within Amazon Web Services Payment Cryptography.
The key material of the restored key. The KeyState will change to CREATE_COMPLETE and value for DeletePendingTimestamp gets removed.
The role of the key, the algorithm it supports, and the cryptographic operations allowed with the key. This data is immutable after the root public key is imported.
" + }, + "PublicKeyCertificate":{ + "shape":"CertificateType", + "documentation":"Parameter information for root public key certificate import.
" + } + }, + "documentation":"Parameter information for root public key certificate import.
" + }, + "ServiceQuotaExceededException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"This request would cause a service quota to be exceeded.
", + "exception":true + }, + "ServiceUnavailableException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"The service cannot complete the request.
", + "exception":true, + "fault":true + }, + "StartKeyUsageInput":{ + "type":"structure", + "required":["KeyIdentifier"], + "members":{ + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The KeyArn of the key.
The KeyARN of the Amazon Web Services Payment Cryptography key activated for use.
The KeyArn of the key.
The KeyARN of the key.
The key of the tag.
" + }, + "Value":{ + "shape":"TagValue", + "documentation":"The value of the tag.
" + } + }, + "documentation":"A structure that contains information about a tag.
" + }, + "TagKey":{ + "type":"string", + "max":128, + "min":1 + }, + "TagKeys":{ + "type":"list", + "member":{"shape":"TagKey"}, + "max":200, + "min":0 + }, + "TagResourceInput":{ + "type":"structure", + "required":[ + "ResourceArn", + "Tags" + ], + "members":{ + "ResourceArn":{ + "shape":"ResourceArn", + "documentation":"The KeyARN of the key whose tags are being updated.
One or more tags. Each tag consists of a tag key and a tag value. The tag value can be an empty (null) string. You can't have more than one tag on an Amazon Web Services Payment Cryptography key with the same tag key. If you specify an existing tag key with a different tag value, Amazon Web Services Payment Cryptography replaces the current tag value with the new one.
Don't include confidential or sensitive information in this field. This field may be displayed in plaintext in CloudTrail logs and other output.
To use this parameter, you must have TagResource permission in an IAM policy.
Don't include confidential or sensitive information in this field. This field may be displayed in plaintext in CloudTrail logs and other output.
The request was denied due to request throttling.
", + "exception":true + }, + "Timestamp":{"type":"timestamp"}, + "Tr31WrappedKeyBlock":{ + "type":"string", + "max":9984, + "min":56, + "pattern":"^[0-9A-Z]+$" + }, + "Tr34KeyBlockFormat":{ + "type":"string", + "enum":["X9_TR34_2012"] + }, + "Tr34WrappedKeyBlock":{ + "type":"string", + "max":4096, + "min":2, + "pattern":"^[0-9A-F]+$" + }, + "TrustedCertificatePublicKey":{ + "type":"structure", + "required":[ + "CertificateAuthorityPublicKeyIdentifier", + "KeyAttributes", + "PublicKeyCertificate" + ], + "members":{ + "CertificateAuthorityPublicKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The KeyARN of the root public key certificate or certificate chain that signs the trusted public key certificate import.
The role of the key, the algorithm it supports, and the cryptographic operations allowed with the key. This data is immutable after a trusted public key is imported.
" + }, + "PublicKeyCertificate":{ + "shape":"CertificateType", + "documentation":"Parameter information for trusted public key certificate import.
" + } + }, + "documentation":"Parameter information for trusted public key certificate import.
" + }, + "UntagResourceInput":{ + "type":"structure", + "required":[ + "ResourceArn", + "TagKeys" + ], + "members":{ + "ResourceArn":{ + "shape":"ResourceArn", + "documentation":"The KeyARN of the key whose tags are being removed.
One or more tag keys. Don't include the tag values.
If the Amazon Web Services Payment Cryptography key doesn't have the specified tag key, Amazon Web Services Payment Cryptography doesn't throw an exception or return a response. To confirm that the operation succeeded, use the ListTagsForResource operation.
" + } + } + }, + "UntagResourceOutput":{ + "type":"structure", + "members":{ + } + }, + "UpdateAliasInput":{ + "type":"structure", + "required":["AliasName"], + "members":{ + "AliasName":{ + "shape":"AliasName", + "documentation":"The alias whose associated key is changing.
" + }, + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The KeyARN for the key that you are updating or removing from the alias.
The alias name.
" + } + } + }, + "ValidationException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"The request was denied due to an invalid request error.
", + "exception":true + }, + "WrappedKey":{ + "type":"structure", + "required":[ + "KeyMaterial", + "WrappedKeyMaterialFormat", + "WrappingKeyArn" + ], + "members":{ + "KeyMaterial":{ + "shape":"KeyMaterial", + "documentation":"Parameter information for generating a wrapped key using TR-31 or TR-34 standard.
" + }, + "WrappedKeyMaterialFormat":{ + "shape":"WrappedKeyMaterialFormat", + "documentation":"The key block format of a wrapped key.
" + }, + "WrappingKeyArn":{ + "shape":"KeyArn", + "documentation":"The KeyARN of the wrapped key.
Parameter information for generating a wrapped key using TR-31 or TR-34 standard.
" + }, + "WrappedKeyMaterialFormat":{ + "type":"string", + "enum":[ + "KEY_CRYPTOGRAM", + "TR31_KEY_BLOCK", + "TR34_KEY_BLOCK" + ] + } + }, + "documentation":"You use the Amazon Web Services Payment Cryptography Control Plane to manage the encryption keys you use for payment-related cryptographic operations. You can create, import, export, share, manage, and delete keys. You can also manage Identity and Access Management (IAM) policies for keys. For more information, see Identity and access management in the Amazon Web Services Payment Cryptography User Guide.
To use encryption keys for payment-related transaction processing and associated cryptographic operations, you use the Amazon Web Services Payment Cryptography Data Plane. You can encrypt, decrypt, generate, verify, and translate payment-related cryptographic operations.
All Amazon Web Services Payment Cryptography API calls must be signed and transmitted using Transport Layer Security (TLS). We recommend you always use the latest supported TLS version for logging API requests.
Amazon Web Services Payment Cryptography supports CloudTrail, a service that logs Amazon Web Services API calls and related events for your Amazon Web Services account and delivers them to an Amazon S3 bucket that you specify. By using the information collected by CloudTrail, you can determine what requests were made to Amazon Web Services Payment Cryptography, who made the request, when it was made, and so on. If you don't configure a trail, you can still view the most recent events in the CloudTrail console. For more information, see the CloudTrail User Guide.
" +} From 04e9c366f4da6cc85d4f9827e218959fe77d32f4 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 8 Jun 2023 18:06:23 +0000 Subject: [PATCH 066/317] AWS Service Catalog Update: New parameter added in ServiceCatalog DescribeProvisioningArtifact api - IncludeProvisioningArtifactParameters. This parameter can be used to return information about the parameters used to provision the product --- .../feature-AWSServiceCatalog-aa246b4.json | 6 ++++++ .../main/resources/codegen-resources/service-2.json | 12 ++++++++++-- 2 files changed, 16 insertions(+), 2 deletions(-) create mode 100644 .changes/next-release/feature-AWSServiceCatalog-aa246b4.json diff --git a/.changes/next-release/feature-AWSServiceCatalog-aa246b4.json b/.changes/next-release/feature-AWSServiceCatalog-aa246b4.json new file mode 100644 index 000000000000..1b590a6bc19a --- /dev/null +++ b/.changes/next-release/feature-AWSServiceCatalog-aa246b4.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS Service Catalog", + "contributor": "", + "description": "New parameter added in ServiceCatalog DescribeProvisioningArtifact api - IncludeProvisioningArtifactParameters. This parameter can be used to return information about the parameters used to provision the product" +} diff --git a/services/servicecatalog/src/main/resources/codegen-resources/service-2.json b/services/servicecatalog/src/main/resources/codegen-resources/service-2.json index d727481d3a1c..c228fbebef61 100644 --- a/services/servicecatalog/src/main/resources/codegen-resources/service-2.json +++ b/services/servicecatalog/src/main/resources/codegen-resources/service-2.json @@ -1421,11 +1421,11 @@ }, "PrincipalARN":{ "shape":"PrincipalARN", - "documentation":"The ARN of the principal (user, role, or group). The supported value is a fully defined IAM ARN if the PrincipalType is IAM. If the PrincipalType is IAM_PATTERN, the supported value is an IAM ARN without an AccountID in the following format:
arn:partition:iam:::resource-type/resource-id
The resource-id can be either of the following:
Fully formed, for example arn:aws:iam:::role/resource-name or arn:aws:iam:::role/resource-path/resource-name
A wildcard ARN. The wildcard ARN accepts IAM_PATTERN values with a \"*\" or \"?\" in the resource-id segment of the ARN, for example arn:partition:service:::resource-type/resource-path/resource-name. The new symbols are exclusive to the resource-path and resource-name and cannot be used to replace the resource-type or other ARN values.
Examples of an acceptable wildcard ARN:
arn:aws:iam:::role/ResourceName_*
arn:aws:iam:::role/*/ResourceName_?
Examples of an unacceptable wildcard ARN:
arn:aws:iam:::*/ResourceName
You can associate multiple IAM_PATTERNs even if the account has no principal with that name.
The ARN path and principal name allow unlimited wildcard characters.
The \"?\" wildcard character matches zero or one of any character. This is similar to \".?\" in regular regex context.
The \"*\" wildcard character matches any number of any characters. This is similar to \".*\" in regular regex context.
In the IAM Principal ARNs format (arn:partition:iam:::resource-type/resource-path/resource-name), valid resource-type values include user/, group/, or role/. The \"?\" and \"*\" are allowed only after the resource-type, in the resource-id segment. You can use special characters anywhere within the resource-id.
The \"*\" also matches the \"/\" character, allowing paths to be formed within the resource-id. For example, arn:aws:iam:::role/*/ResourceName_? matches both arn:aws:iam:::role/pathA/pathB/ResourceName_1 and arn:aws:iam:::role/pathA/ResourceName_1.
The ARN of the principal (user, role, or group). If the PrincipalType is IAM, the supported value is a fully defined IAM Amazon Resource Name (ARN). If the PrincipalType is IAM_PATTERN, the supported value is an IAM ARN without an AccountID in the following format:
arn:partition:iam:::resource-type/resource-id
The ARN resource-id can be either:
A fully formed resource-id. For example, arn:aws:iam:::role/resource-name or arn:aws:iam:::role/resource-path/resource-name
A wildcard ARN. The wildcard ARN accepts IAM_PATTERN values with a \"*\" or \"?\" in the resource-id segment of the ARN. For example arn:partition:service:::resource-type/resource-path/resource-name. The new symbols are exclusive to the resource-path and resource-name and cannot replace the resource-type or other ARN values.
The ARN path and principal name allow unlimited wildcard characters.
Examples of an acceptable wildcard ARN:
arn:aws:iam:::role/ResourceName_*
arn:aws:iam:::role/*/ResourceName_?
Examples of an unacceptable wildcard ARN:
arn:aws:iam:::*/ResourceName
You can associate multiple IAM_PATTERNs even if the account has no principal with that name.
The \"?\" wildcard character matches zero or one of any character. This is similar to \".?\" in regular regex context. The \"*\" wildcard character matches any number of any characters. This is similar to \".*\" in regular regex context.
In the IAM Principal ARN format (arn:partition:iam:::resource-type/resource-path/resource-name), valid resource-type values include user/, group/, or role/. The \"?\" and \"*\" characters are allowed only after the resource-type in the resource-id segment. You can use special characters anywhere within the resource-id.
The \"*\" character also matches the \"/\" character, allowing paths to be formed within the resource-id. For example, arn:aws:iam:::role/*/ResourceName_? matches both arn:aws:iam:::role/pathA/pathB/ResourceName_1 and arn:aws:iam:::role/pathA/ResourceName_1.
" }, "PrincipalType":{ "shape":"PrincipalType", - "documentation":"The principal type. The supported value is IAM if you use a fully defined ARN, or IAM_PATTERN if you use an ARN with no accountID, with or without wildcard characters.
The principal type. The supported value is IAM if you use a fully defined Amazon Resource Name (ARN), or IAM_PATTERN if you use an ARN with no accountID, with or without wildcard characters.
Indicates whether a verbose level of detail is enabled.
" + }, + "IncludeProvisioningArtifactParameters":{ + "shape":"Boolean", + "documentation":"Indicates whether the API call response includes additional details about the provisioning parameters.
" } } }, @@ -2768,6 +2772,10 @@ "Status":{ "shape":"Status", "documentation":"The status of the current request.
" + }, + "ProvisioningArtifactParameters":{ + "shape":"ProvisioningArtifactParameters", + "documentation":"Information about the parameters used to provision the product.
" } } }, From bffd03abc0f905420d6feeaaa51e43fceba43d68 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 8 Jun 2023 18:06:22 +0000 Subject: [PATCH 067/317] Payment Cryptography Data Plane Update: Initial release of AWS Payment Cryptography Data Plane service for performing cryptographic operations typically used during card payment processing. --- ...-PaymentCryptographyDataPlane-bac9166.json | 6 + services/paymentcryptographydata/pom.xml | 60 + .../codegen-resources/endpoint-rule-set.json | 350 +++ .../codegen-resources/endpoint-tests.json | 295 +++ .../codegen-resources/paginators-1.json | 4 + .../codegen-resources/service-2.json | 2108 +++++++++++++++++ 6 files changed, 2823 insertions(+) create mode 100644 .changes/next-release/feature-PaymentCryptographyDataPlane-bac9166.json create mode 100644 services/paymentcryptographydata/pom.xml create mode 100644 services/paymentcryptographydata/src/main/resources/codegen-resources/endpoint-rule-set.json create mode 100644 services/paymentcryptographydata/src/main/resources/codegen-resources/endpoint-tests.json create mode 100644 services/paymentcryptographydata/src/main/resources/codegen-resources/paginators-1.json create mode 100644 services/paymentcryptographydata/src/main/resources/codegen-resources/service-2.json diff --git a/.changes/next-release/feature-PaymentCryptographyDataPlane-bac9166.json b/.changes/next-release/feature-PaymentCryptographyDataPlane-bac9166.json new file mode 100644 index 000000000000..75513b30df3f --- /dev/null +++ b/.changes/next-release/feature-PaymentCryptographyDataPlane-bac9166.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Payment Cryptography Data Plane", + "contributor": "", + "description": "Initial release of AWS Payment Cryptography Data Plane service for performing cryptographic operations typically used during card payment processing." 
+} diff --git a/services/paymentcryptographydata/pom.xml b/services/paymentcryptographydata/pom.xml new file mode 100644 index 000000000000..f1c90cc7997c --- /dev/null +++ b/services/paymentcryptographydata/pom.xml @@ -0,0 +1,60 @@ + + + +Decrypts ciphertext data to plaintext using symmetric, asymmetric, or DUKPT data encryption key. For more information, see Decrypt data in the Amazon Web Services Payment Cryptography User Guide.
You can use an encryption key generated within Amazon Web Services Payment Cryptography, or you can import your own encryption key by calling ImportKey. For this operation, the key must have KeyModesOfUse set to Decrypt. In asymmetric decryption, Amazon Web Services Payment Cryptography decrypts the ciphertext using the private component of the asymmetric encryption key pair. For data encryption outside of Amazon Web Services Payment Cryptography, you can export the public component of the asymmetric key pair by calling GetPublicCertificate.
For symmetric and DUKPT decryption, Amazon Web Services Payment Cryptography supports TDES and AES algorithms. For asymmetric decryption, Amazon Web Services Payment Cryptography supports RSA. When you use DUKPT, for TDES algorithm, the ciphertext data length must be a multiple of 16 bytes. For AES algorithm, the ciphertext data length must be a multiple of 32 bytes.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "EncryptData":{ + "name":"EncryptData", + "http":{ + "method":"POST", + "requestUri":"/keys/{KeyIdentifier}/encrypt", + "responseCode":200 + }, + "input":{"shape":"EncryptDataInput"}, + "output":{"shape":"EncryptDataOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Encrypts plaintext data to ciphertext using symmetric, asymmetric, or DUKPT data encryption key. For more information, see Encrypt data in the Amazon Web Services Payment Cryptography User Guide.
You can generate an encryption key within Amazon Web Services Payment Cryptography by calling CreateKey. You can import your own encryption key by calling ImportKey. For this operation, the key must have KeyModesOfUse set to Encrypt. In asymmetric encryption, plaintext is encrypted using the public component. You can import the public component of an asymmetric key pair created outside Amazon Web Services Payment Cryptography by calling ImportKey.
For symmetric and DUKPT encryption, Amazon Web Services Payment Cryptography supports TDES and AES algorithms. For asymmetric encryption, Amazon Web Services Payment Cryptography supports RSA. To encrypt using DUKPT, you must already have a DUKPT key in your account with KeyModesOfUse set to DeriveKey, or you can generate a new DUKPT key by calling CreateKey.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "GenerateCardValidationData":{ + "name":"GenerateCardValidationData", + "http":{ + "method":"POST", + "requestUri":"/cardvalidationdata/generate", + "responseCode":200 + }, + "input":{"shape":"GenerateCardValidationDataInput"}, + "output":{"shape":"GenerateCardValidationDataOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Generates card-related validation data using algorithms such as Card Verification Values (CVV/CVV2), Dynamic Card Verification Values (dCVV/dCVV2), or Card Security Codes (CSC). For more information, see Generate card data in the Amazon Web Services Payment Cryptography User Guide.
This operation generates a CVV or CSC value that is printed on a payment credit or debit card during card production. The CVV or CSC, PAN (Primary Account Number) and expiration date of the card are required to check its validity during transaction processing. To begin this operation, a CVK (Card Verification Key) encryption key is required. You can use CreateKey or ImportKey to establish a CVK within Amazon Web Services Payment Cryptography. The KeyModesOfUse should be set to Generate and Verify for a CVK encryption key.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "GenerateMac":{ + "name":"GenerateMac", + "http":{ + "method":"POST", + "requestUri":"/mac/generate", + "responseCode":200 + }, + "input":{"shape":"GenerateMacInput"}, + "output":{"shape":"GenerateMacOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Generates a Message Authentication Code (MAC) cryptogram within Amazon Web Services Payment Cryptography.
You can use this operation when keys won't be shared but mutual data is present on both ends for validation. In this case, known data values are used to generate a MAC on both ends for comparison without sending or receiving data in ciphertext or plaintext. You can use this operation to generate a DUKPT, HMAC or EMV MAC by setting generation attributes and algorithm to the associated values. The MAC generation encryption key must have valid values for KeyUsage such as TR31_M7_HMAC_KEY for HMAC generation, and the key must have KeyModesOfUse set to Generate and Verify.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "GeneratePinData":{ + "name":"GeneratePinData", + "http":{ + "method":"POST", + "requestUri":"/pindata/generate", + "responseCode":200 + }, + "input":{"shape":"GeneratePinDataInput"}, + "output":{"shape":"GeneratePinDataOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Generates pin-related data such as PIN, PIN Verification Value (PVV), PIN Block, and PIN Offset during new card issuance or reissuance. For more information, see Generate PIN data in the Amazon Web Services Payment Cryptography User Guide.
PIN data is never transmitted in the clear to or from Amazon Web Services Payment Cryptography. This operation generates PIN, PVV, or PIN Offset and then encrypts it using Pin Encryption Key (PEK) to create an EncryptedPinBlock for transmission from Amazon Web Services Payment Cryptography. This operation uses a separate Pin Verification Key (PVK) for VISA PVV generation.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "ReEncryptData":{ + "name":"ReEncryptData", + "http":{ + "method":"POST", + "requestUri":"/keys/{IncomingKeyIdentifier}/reencrypt", + "responseCode":200 + }, + "input":{"shape":"ReEncryptDataInput"}, + "output":{"shape":"ReEncryptDataOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Re-encrypt ciphertext using DUKPT, Symmetric and Asymmetric Data Encryption Keys.
You can either generate an encryption key within Amazon Web Services Payment Cryptography by calling CreateKey or import your own encryption key by calling ImportKey. The KeyArn for use with this operation must be in a compatible key state with KeyModesOfUse set to Encrypt. In asymmetric encryption, ciphertext is encrypted using the public component (imported by calling ImportKey) of the asymmetric key pair created outside of Amazon Web Services Payment Cryptography.
For symmetric and DUKPT encryption, Amazon Web Services Payment Cryptography supports TDES and AES algorithms. For asymmetric encryption, Amazon Web Services Payment Cryptography supports RSA. To encrypt using DUKPT, a DUKPT key must already exist within your account with KeyModesOfUse set to DeriveKey or a new DUKPT can be generated by calling CreateKey.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "TranslatePinData":{ + "name":"TranslatePinData", + "http":{ + "method":"POST", + "requestUri":"/pindata/translate", + "responseCode":200 + }, + "input":{"shape":"TranslatePinDataInput"}, + "output":{"shape":"TranslatePinDataOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Translates encrypted PIN block from and to ISO 9564 formats 0,1,3,4. For more information, see Translate PIN data in the Amazon Web Services Payment Cryptography User Guide.
PIN block translation involves changing the encryption of the PIN block from one encryption key to another encryption key and changing the PIN block format from one to another without PIN block data leaving Amazon Web Services Payment Cryptography. The encryption key transformation can be from PEK (Pin Encryption Key) to BDK (Base Derivation Key) for DUKPT or from BDK for DUKPT to PEK. Amazon Web Services Payment Cryptography supports TDES and AES key derivation types for DUKPT translations. You can use this operation for P2PE (Point to Point Encryption) use cases where the encryption keys should change but the processing system either does not need to, or is not permitted to, decrypt the data.
The allowed combinations of PIN block format translations are guided by PCI. It is important to note that not all encrypted PIN block formats (for example, format 1) require PAN (Primary Account Number) as input. As such, a PIN block format that requires PAN (for example, formats 0, 3, and 4) cannot be translated to a format (format 1) that does not require a PAN for generation.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
At this time, Amazon Web Services Payment Cryptography does not support translations to PIN format 4.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "VerifyAuthRequestCryptogram":{ + "name":"VerifyAuthRequestCryptogram", + "http":{ + "method":"POST", + "requestUri":"/cryptogram/verify", + "responseCode":200 + }, + "input":{"shape":"VerifyAuthRequestCryptogramInput"}, + "output":{"shape":"VerifyAuthRequestCryptogramOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"VerificationFailedException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Verifies Authorization Request Cryptogram (ARQC) for an EMV chip payment card authorization. For more information, see Verify auth request cryptogram in the Amazon Web Services Payment Cryptography User Guide.
ARQC generation is done outside of Amazon Web Services Payment Cryptography and is typically generated on a point of sale terminal for an EMV chip card to obtain payment authorization during transaction time. For ARQC verification, you must first import the ARQC generated outside of Amazon Web Services Payment Cryptography by calling ImportKey. This operation uses the imported ARQC and a major encryption key (DUKPT) created by calling CreateKey to either provide a boolean ARQC verification result or provide an ARPC (Authorization Response Cryptogram) response using Method 1 or Method 2. The ARPC_METHOD_1 uses AuthResponseCode to generate ARPC and ARPC_METHOD_2 uses CardStatusUpdate to generate ARPC.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "VerifyCardValidationData":{ + "name":"VerifyCardValidationData", + "http":{ + "method":"POST", + "requestUri":"/cardvalidationdata/verify", + "responseCode":200 + }, + "input":{"shape":"VerifyCardValidationDataInput"}, + "output":{"shape":"VerifyCardValidationDataOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"VerificationFailedException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Verifies card-related validation data using algorithms such as Card Verification Values (CVV/CVV2), Dynamic Card Verification Values (dCVV/dCVV2) and Card Security Codes (CSC). For more information, see Verify card data in the Amazon Web Services Payment Cryptography User Guide.
This operation validates the CVV or CSC codes that are printed on a payment credit or debit card during a card payment transaction. The input values are typically provided as part of an inbound transaction to an issuer or supporting platform partner. Amazon Web Services Payment Cryptography uses CVV or CSC, PAN (Primary Account Number) and expiration date of the card to check its validity during transaction processing. In this operation, the CVK (Card Verification Key) encryption key for use with card data verification is the same as the one used for GenerateCardValidationData.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "VerifyMac":{ + "name":"VerifyMac", + "http":{ + "method":"POST", + "requestUri":"/mac/verify", + "responseCode":200 + }, + "input":{"shape":"VerifyMacInput"}, + "output":{"shape":"VerifyMacOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"VerificationFailedException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Verifies a Message Authentication Code (MAC).
You can use this operation when keys won't be shared but mutual data is present on both ends for validation. In this case, known data values are used to generate a MAC on both ends for verification without sending or receiving data in ciphertext or plaintext. You can use this operation to verify a DUKPT, HMAC or EMV MAC by setting generation attributes and algorithm to the associated values. Use the same encryption key for MAC verification as you use for GenerateMac.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + }, + "VerifyPinData":{ + "name":"VerifyPinData", + "http":{ + "method":"POST", + "requestUri":"/pindata/verify", + "responseCode":200 + }, + "input":{"shape":"VerifyPinDataInput"}, + "output":{"shape":"VerifyPinDataOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"VerificationFailedException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ThrottlingException"}, + {"shape":"InternalServerException"} + ], + "documentation":"Verifies pin-related data such as PIN and PIN Offset using algorithms including VISA PVV and IBM3624. For more information, see Verify PIN data in the Amazon Web Services Payment Cryptography User Guide.
This operation verifies PIN data for a user payment card. Cardholder PIN data is never transmitted in the clear to or from Amazon Web Services Payment Cryptography. This operation uses PIN Verification Key (PVK) for PIN or PIN Offset generation and then encrypts it using PIN Encryption Key (PEK) to create an EncryptedPinBlock for transmission from Amazon Web Services Payment Cryptography.
For information about valid keys for this operation, see Understanding key attributes and Key types for specific data operations in the Amazon Web Services Payment Cryptography User Guide.
Cross-account use: This operation can't be used across different Amazon Web Services accounts.
Related operations:
" + } + }, + "shapes":{ + "AccessDeniedException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"You do not have sufficient access to perform this action.
", + "error":{ + "httpStatusCode":403, + "senderFault":true + }, + "exception":true + }, + "AmexCardSecurityCodeVersion1":{ + "type":"structure", + "required":["CardExpiryDate"], + "members":{ + "CardExpiryDate":{ + "shape":"NumberLengthEquals4", + "documentation":"The expiry date of a payment card.
" + } + }, + "documentation":"Card data parameters that are required to generate a Card Security Code (CSC2) for an AMEX payment card.
" + }, + "AmexCardSecurityCodeVersion2":{ + "type":"structure", + "required":[ + "CardExpiryDate", + "ServiceCode" + ], + "members":{ + "CardExpiryDate":{ + "shape":"NumberLengthEquals4", + "documentation":"The expiry date of a payment card.
" + }, + "ServiceCode":{ + "shape":"NumberLengthEquals3", + "documentation":"The service code of the AMEX payment card. This is different from the Card Security Code (CSC).
" + } + }, + "documentation":"Card data parameters that are required to generate a Card Security Code (CSC2) for an AMEX payment card.
" + }, + "AsymmetricEncryptionAttributes":{ + "type":"structure", + "members":{ + "PaddingType":{ + "shape":"PaddingType", + "documentation":"The padding to be included with the data.
" + } + }, + "documentation":"Parameters for plaintext encryption using asymmetric keys.
" + }, + "CardGenerationAttributes":{ + "type":"structure", + "members":{ + "AmexCardSecurityCodeVersion1":{"shape":"AmexCardSecurityCodeVersion1"}, + "AmexCardSecurityCodeVersion2":{ + "shape":"AmexCardSecurityCodeVersion2", + "documentation":"Card data parameters that are required to generate a Card Security Code (CSC2) for an AMEX payment card.
" + }, + "CardHolderVerificationValue":{ + "shape":"CardHolderVerificationValue", + "documentation":"Card data parameters that are required to generate a cardholder verification value for the payment card.
" + }, + "CardVerificationValue1":{ + "shape":"CardVerificationValue1", + "documentation":"Card data parameters that are required to generate Card Verification Value (CVV) for the payment card.
" + }, + "CardVerificationValue2":{ + "shape":"CardVerificationValue2", + "documentation":"Card data parameters that are required to generate Card Verification Value (CVV2) for the payment card.
" + }, + "DynamicCardVerificationCode":{ + "shape":"DynamicCardVerificationCode", + "documentation":"Card data parameters that are required to generate Dynamic Card Verification Code (dCVC) for the payment card.
" + }, + "DynamicCardVerificationValue":{ + "shape":"DynamicCardVerificationValue", + "documentation":"Card data parameters that are required to generate Dynamic Card Verification Value (dCVV) for the payment card.
" + } + }, + "documentation":"Card data parameters that are required to generate Card Verification Values (CVV/CVV2), Dynamic Card Verification Values (dCVV/dCVV2), or Card Security Codes (CSC).
", + "union":true + }, + "CardHolderVerificationValue":{ + "type":"structure", + "required":[ + "ApplicationTransactionCounter", + "PanSequenceNumber", + "UnpredictableNumber" + ], + "members":{ + "ApplicationTransactionCounter":{ + "shape":"HexLengthBetween2And4", + "documentation":"The transaction counter value that comes from a point of sale terminal.
" + }, + "PanSequenceNumber":{ + "shape":"HexLengthEquals2", + "documentation":"A number that identifies and differentiates payment cards with the same Primary Account Number (PAN).
" + }, + "UnpredictableNumber":{ + "shape":"HexLengthBetween2And8", + "documentation":"A random number generated by the issuer.
" + } + }, + "documentation":"Card data parameters that are required to generate a cardholder verification value for the payment card.
" + }, + "CardVerificationAttributes":{ + "type":"structure", + "members":{ + "AmexCardSecurityCodeVersion1":{"shape":"AmexCardSecurityCodeVersion1"}, + "AmexCardSecurityCodeVersion2":{ + "shape":"AmexCardSecurityCodeVersion2", + "documentation":"Card data parameters that are required to verify a Card Security Code (CSC2) for an AMEX payment card.
" + }, + "CardHolderVerificationValue":{ + "shape":"CardHolderVerificationValue", + "documentation":"Card data parameters that are required to verify a cardholder verification value for the payment card.
" + }, + "CardVerificationValue1":{ + "shape":"CardVerificationValue1", + "documentation":"Card data parameters that are required to verify Card Verification Value (CVV) for the payment card.
" + }, + "CardVerificationValue2":{ + "shape":"CardVerificationValue2", + "documentation":"Card data parameters that are required to verify Card Verification Value (CVV2) for the payment card.
" + }, + "DiscoverDynamicCardVerificationCode":{ + "shape":"DiscoverDynamicCardVerificationCode", + "documentation":"Card data parameters that are required to verify Dynamic Card Verification Code (dCVC) for the payment card.
" + }, + "DynamicCardVerificationCode":{ + "shape":"DynamicCardVerificationCode", + "documentation":"Card data parameters that are required to verify Dynamic Card Verification Code (dCVC) for the payment card.
" + }, + "DynamicCardVerificationValue":{ + "shape":"DynamicCardVerificationValue", + "documentation":"Card data parameters that are required to verify Dynamic Card Verification Value (dCVV) for the payment card.
" + } + }, + "documentation":"Card data parameters that are required to verify Card Verification Values (CVV/CVV2), Dynamic Card Verification Values (dCVV/dCVV2), or Card Security Codes (CSC).
", + "union":true + }, + "CardVerificationValue1":{ + "type":"structure", + "required":[ + "CardExpiryDate", + "ServiceCode" + ], + "members":{ + "CardExpiryDate":{ + "shape":"NumberLengthEquals4", + "documentation":"The expiry date of a payment card.
" + }, + "ServiceCode":{ + "shape":"NumberLengthEquals3", + "documentation":"The service code of the payment card. This is different from Card Security Code (CSC).
" + } + }, + "documentation":"Card data parameters that are required to verify CVV (Card Verification Value) for the payment card.
" + }, + "CardVerificationValue2":{ + "type":"structure", + "required":["CardExpiryDate"], + "members":{ + "CardExpiryDate":{ + "shape":"NumberLengthEquals4", + "documentation":"The expiry date of a payment card.
" + } + }, + "documentation":"Card data parameters that are required to verify Card Verification Value (CVV2) for the payment card.
" + }, + "CryptogramAuthResponse":{ + "type":"structure", + "members":{ + "ArpcMethod1":{ + "shape":"CryptogramVerificationArpcMethod1", + "documentation":"Parameters that are required for ARPC response generation using method1 after ARQC verification is successful.
" + }, + "ArpcMethod2":{ + "shape":"CryptogramVerificationArpcMethod2", + "documentation":"Parameters that are required for ARPC response generation using method2 after ARQC verification is successful.
" + } + }, + "documentation":"Parameters that are required for Authorization Response Cryptogram (ARPC) generation after Authorization Request Cryptogram (ARQC) verification is successful.
", + "union":true + }, + "CryptogramVerificationArpcMethod1":{ + "type":"structure", + "required":["AuthResponseCode"], + "members":{ + "AuthResponseCode":{ + "shape":"HexLengthEquals4", + "documentation":"The auth code used to calculate ARPC after ARQC verification is successful. This is the same auth code used for ARQC generation outside of Amazon Web Services Payment Cryptography.
" + } + }, + "documentation":"Parameters that are required for ARPC response generation using method1 after ARQC verification is successful.
" + }, + "CryptogramVerificationArpcMethod2":{ + "type":"structure", + "required":["CardStatusUpdate"], + "members":{ + "CardStatusUpdate":{ + "shape":"HexLengthEquals8", + "documentation":"The data indicating whether the issuer approves or declines an online transaction using an EMV chip card.
" + }, + "ProprietaryAuthenticationData":{ + "shape":"HexLengthBetween1And16", + "documentation":"The proprietary authentication data used by issuer for communication during online transaction using an EMV chip card.
" + } + }, + "documentation":"Parameters that are required for ARPC response generation using method2 after ARQC verification is successful.
" + }, + "DecryptDataInput":{ + "type":"structure", + "required":[ + "CipherText", + "DecryptionAttributes", + "KeyIdentifier" + ], + "members":{ + "CipherText":{ + "shape":"HexEvenLengthBetween16And4096", + "documentation":"The ciphertext to decrypt.
" + }, + "DecryptionAttributes":{ + "shape":"EncryptionDecryptionAttributes", + "documentation":"The encryption key type and attributes for ciphertext decryption.
" + }, + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the encryption key that Amazon Web Services Payment Cryptography uses for ciphertext decryption.
The keyARN of the encryption key that Amazon Web Services Payment Cryptography uses for ciphertext decryption.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes of \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + }, + "PlainText":{ + "shape":"HexEvenLengthBetween16And4096", + "documentation":"The decrypted plaintext data.
" + } + } + }, + "DiscoverDynamicCardVerificationCode":{ + "type":"structure", + "required":[ + "ApplicationTransactionCounter", + "CardExpiryDate", + "UnpredictableNumber" + ], + "members":{ + "ApplicationTransactionCounter":{ + "shape":"HexLengthBetween2And4", + "documentation":"The transaction counter value that comes from the terminal.
" + }, + "CardExpiryDate":{ + "shape":"NumberLengthEquals4", + "documentation":"The expiry date of a payment card.
" + }, + "UnpredictableNumber":{ + "shape":"HexLengthBetween2And8", + "documentation":"A random number that is generated by the issuer.
" + } + }, + "documentation":"Parameters that are required to generate or verify dCVC (Dynamic Card Verification Code).
" + }, + "DukptAttributes":{ + "type":"structure", + "required":[ + "DukptDerivationType", + "KeySerialNumber" + ], + "members":{ + "DukptDerivationType":{ + "shape":"DukptDerivationType", + "documentation":"The key type derived using DUKPT from a Base Derivation Key (BDK) and Key Serial Number (KSN). This must be less than or equal to the strength of the BDK. For example, you can't use AES_128 as a derivation type for a BDK of AES_128 or TDES_2KEY.
The unique identifier known as Key Serial Number (KSN) that comes from an encrypting device using DUKPT encryption method. The KSN is derived from the encrypting device unique identifier and an internal transaction counter.
" + } + }, + "documentation":"Parameters that are used for Derived Unique Key Per Transaction (DUKPT) derivation algorithm.
" + }, + "DukptDerivationAttributes":{ + "type":"structure", + "required":["KeySerialNumber"], + "members":{ + "DukptKeyDerivationType":{ + "shape":"DukptDerivationType", + "documentation":"The key type derived using DUKPT from a Base Derivation Key (BDK) and Key Serial Number (KSN). This must be less than or equal to the strength of the BDK. For example, you can't use AES_128 as a derivation type for a BDK of AES_128 or TDES_2KEY
The type of use of DUKPT, which can be for incoming data decryption, outgoing data encryption, or both.
" + }, + "KeySerialNumber":{ + "shape":"HexLengthBetween10And24", + "documentation":"The unique identifier known as Key Serial Number (KSN) that comes from an encrypting device using DUKPT encryption method. The KSN is derived from the encrypting device unique identifier and an internal transaction counter.
" + } + }, + "documentation":"Parameters required for encryption or decryption of data using DUKPT.
" + }, + "DukptDerivationType":{ + "type":"string", + "enum":[ + "TDES_2KEY", + "TDES_3KEY", + "AES_128", + "AES_192", + "AES_256" + ] + }, + "DukptEncryptionAttributes":{ + "type":"structure", + "required":["KeySerialNumber"], + "members":{ + "DukptKeyDerivationType":{ + "shape":"DukptDerivationType", + "documentation":"The key type encrypted using DUKPT from a Base Derivation Key (BDK) and Key Serial Number (KSN). This must be less than or equal to the strength of the BDK. For example, you can't use AES_128 as a derivation type for a BDK of AES_128 or TDES_2KEY
The type of use of DUKPT, which can be incoming data decryption, outgoing data encryption, or both.
" + }, + "InitializationVector":{ + "shape":"HexLength16Or32", + "documentation":"An input to cryptographic primitive used to provide the intial state. Typically the InitializationVector must have a random or psuedo-random value, but sometimes it only needs to be unpredictable or unique. If you don't provide a value, Amazon Web Services Payment Cryptography generates a random value.
The unique identifier known as Key Serial Number (KSN) that comes from an encrypting device using DUKPT encryption method. The KSN is derived from the encrypting device unique identifier and an internal transaction counter.
" + }, + "Mode":{ + "shape":"DukptEncryptionMode", + "documentation":"The block cipher mode of operation. Block ciphers are designed to encrypt a block of data of fixed size, for example, 128 bits. The size of the input block is usually same as the size of the encrypted output block, while the key length can be different. A mode of operation describes how to repeatedly apply a cipher's single-block operation to securely transform amounts of data larger than a block.
The default is CBC.
" + } + }, + "documentation":"Parameters that are required to encrypt plaintext data using DUKPT.
" + }, + "DukptEncryptionMode":{ + "type":"string", + "enum":[ + "ECB", + "CBC" + ] + }, + "DukptKeyVariant":{ + "type":"string", + "enum":[ + "BIDIRECTIONAL", + "REQUEST", + "RESPONSE" + ] + }, + "DynamicCardVerificationCode":{ + "type":"structure", + "required":[ + "ApplicationTransactionCounter", + "PanSequenceNumber", + "TrackData", + "UnpredictableNumber" + ], + "members":{ + "ApplicationTransactionCounter":{ + "shape":"HexLengthBetween2And4", + "documentation":"The transaction counter value that comes from the terminal.
" + }, + "PanSequenceNumber":{ + "shape":"HexLengthEquals2", + "documentation":"A number that identifies and differentiates payment cards with the same Primary Account Number (PAN).
" + }, + "TrackData":{ + "shape":"HexLengthBetween2And160", + "documentation":"The data on the two tracks of magnetic cards used for financial transactions. This includes the cardholder name, PAN, expiration date, bank ID (BIN) and several other numbers the issuing bank uses to validate the data received.
" + }, + "UnpredictableNumber":{ + "shape":"HexLengthBetween2And8", + "documentation":"A random number generated by the issuer.
" + } + }, + "documentation":"Parameters that are required to generate or verify Dynamic Card Verification Value (dCVV).
" + }, + "DynamicCardVerificationValue":{ + "type":"structure", + "required":[ + "ApplicationTransactionCounter", + "CardExpiryDate", + "PanSequenceNumber", + "ServiceCode" + ], + "members":{ + "ApplicationTransactionCounter":{ + "shape":"HexLengthBetween2And4", + "documentation":"The transaction counter value that comes from the terminal.
" + }, + "CardExpiryDate":{ + "shape":"NumberLengthEquals4", + "documentation":"The expiry date of a payment card.
" + }, + "PanSequenceNumber":{ + "shape":"HexLengthEquals2", + "documentation":"A number that identifies and differentiates payment cards with the same Primary Account Number (PAN).
" + }, + "ServiceCode":{ + "shape":"NumberLengthEquals3", + "documentation":"The service code of the payment card. This is different from Card Security Code (CSC).
" + } + }, + "documentation":"Parameters that are required to generate or verify Dynamic Card Verification Value (dCVV).
" + }, + "EncryptDataInput":{ + "type":"structure", + "required":[ + "EncryptionAttributes", + "KeyIdentifier", + "PlainText" + ], + "members":{ + "EncryptionAttributes":{ + "shape":"EncryptionDecryptionAttributes", + "documentation":"The encryption key type and attributes for plaintext encryption.
" + }, + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the encryption key that Amazon Web Services Payment Cryptography uses for plaintext encryption.
The plaintext to be encrypted.
" + } + } + }, + "EncryptDataOutput":{ + "type":"structure", + "required":[ + "CipherText", + "KeyArn", + "KeyCheckValue" + ], + "members":{ + "CipherText":{ + "shape":"HexEvenLengthBetween16And4096", + "documentation":"The encrypted ciphertext.
" + }, + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN of the encryption key that Amazon Web Services Payment Cryptography uses for plaintext encryption.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes of \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + } + } + }, + "EncryptionDecryptionAttributes":{ + "type":"structure", + "members":{ + "Asymmetric":{"shape":"AsymmetricEncryptionAttributes"}, + "Dukpt":{"shape":"DukptEncryptionAttributes"}, + "Symmetric":{ + "shape":"SymmetricEncryptionAttributes", + "documentation":"Parameters that are required to perform encryption and decryption using symmetric keys.
" + } + }, + "documentation":"Parameters that are required to perform encryption and decryption operations.
", + "union":true + }, + "EncryptionMode":{ + "type":"string", + "enum":[ + "ECB", + "CBC", + "CFB", + "CFB1", + "CFB8", + "CFB64", + "CFB128", + "OFB" + ] + }, + "GenerateCardValidationDataInput":{ + "type":"structure", + "required":[ + "GenerationAttributes", + "KeyIdentifier", + "PrimaryAccountNumber" + ], + "members":{ + "GenerationAttributes":{ + "shape":"CardGenerationAttributes", + "documentation":"The algorithm for generating CVV or CSC values for the card within Amazon Web Services Payment Cryptography.
" + }, + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the CVK encryption key that Amazon Web Services Payment Cryptography uses to generate card data.
The Primary Account Number (PAN), a unique identifier for a payment credit or debit card that associates the card with a specific account holder.
" + }, + "ValidationDataLength":{ + "shape":"IntegerRangeBetween3And5Type", + "documentation":"The length of the CVV or CSC to be generated. The default value is 3.
" + } + } + }, + "GenerateCardValidationDataOutput":{ + "type":"structure", + "required":[ + "KeyArn", + "KeyCheckValue", + "ValidationData" + ], + "members":{ + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN of the CVK encryption key that Amazon Web Services Payment Cryptography uses to generate CVV or CSC.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes or \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + }, + "ValidationData":{ + "shape":"NumberLengthBetween3And5", + "documentation":"The CVV or CSC value that Amazon Web Services Payment Cryptography generates for the card.
" + } + } + }, + "GenerateMacInput":{ + "type":"structure", + "required":[ + "GenerationAttributes", + "KeyIdentifier", + "MessageData" + ], + "members":{ + "GenerationAttributes":{ + "shape":"MacAttributes", + "documentation":"The attributes and data values to use for MAC generation within Amazon Web Services Payment Cryptography.
" + }, + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the MAC generation encryption key.
The length of a MAC under generation.
" + }, + "MessageData":{ + "shape":"HexLengthBetween2And4096", + "documentation":"The data for which a MAC is under generation.
" + } + } + }, + "GenerateMacOutput":{ + "type":"structure", + "required":[ + "KeyArn", + "KeyCheckValue", + "Mac" + ], + "members":{ + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN of the encryption key that Amazon Web Services Payment Cryptography uses for MAC generation.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes or \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + }, + "Mac":{ + "shape":"HexLengthBetween4And128", + "documentation":"The MAC cryptogram generated within Amazon Web Services Payment Cryptography.
" + } + } + }, + "GeneratePinDataInput":{ + "type":"structure", + "required":[ + "EncryptionKeyIdentifier", + "GenerationAttributes", + "GenerationKeyIdentifier", + "PinBlockFormat", + "PrimaryAccountNumber" + ], + "members":{ + "EncryptionKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the PEK that Amazon Web Services Payment Cryptography uses to encrypt the PIN Block.
The attributes and values to use for PIN, PVV, or PIN Offset generation.
" + }, + "GenerationKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the PEK that Amazon Web Services Payment Cryptography uses for pin data generation.
The PIN encoding format for pin data generation as specified in ISO 9564. Amazon Web Services Payment Cryptography supports ISO_Format_0 and ISO_Format_3.
The ISO_Format_0 PIN block format is equivalent to the ANSI X9.8, VISA-1, and ECI-1 PIN block formats. It is similar to a VISA-4 PIN block format. It supports a PIN from 4 to 12 digits in length.
The ISO_Format_3 PIN block format is the same as ISO_Format_0 except that the fill digits are random values from 10 to 15.
The length of PIN under generation.
", + "box":true + }, + "PrimaryAccountNumber":{ + "shape":"NumberLengthBetween12And19", + "documentation":"The Primary Account Number (PAN), a unique identifier for a payment credit or debit card that associates the card with a specific account holder.
" + } + } + }, + "GeneratePinDataOutput":{ + "type":"structure", + "required":[ + "EncryptedPinBlock", + "EncryptionKeyArn", + "EncryptionKeyCheckValue", + "GenerationKeyArn", + "GenerationKeyCheckValue", + "PinData" + ], + "members":{ + "EncryptedPinBlock":{ + "shape":"HexLengthBetween16And32", + "documentation":"The PIN block encrypted under PEK from Amazon Web Services Payment Cryptography. The encrypted PIN block is a composite of PAN (Primary Account Number) and PIN (Personal Identification Number), generated in accordance with ISO 9564 standard.
" + }, + "EncryptionKeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN of the PEK that Amazon Web Services Payment Cryptography uses for encrypted pin block generation.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes of \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + }, + "GenerationKeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN of the pin data generation key that Amazon Web Services Payment Cryptography uses for PIN, PVV or PIN Offset generation.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes of \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + }, + "PinData":{ + "shape":"PinData", + "documentation":"The attributes and values Amazon Web Services Payment Cryptography uses for pin data generation.
" + } + } + }, + "HexEvenLengthBetween16And4064":{ + "type":"string", + "max":4064, + "min":16, + "pattern":"^(?:[0-9a-fA-F][0-9a-fA-F])+$", + "sensitive":true + }, + "HexEvenLengthBetween16And4096":{ + "type":"string", + "max":4096, + "min":16, + "pattern":"^(?:[0-9a-fA-F][0-9a-fA-F])+$", + "sensitive":true + }, + "HexLength16Or32":{ + "type":"string", + "max":32, + "min":16, + "pattern":"^(?:[0-9a-fA-F]{16}|[0-9a-fA-F]{32})$", + "sensitive":true + }, + "HexLengthBetween10And24":{ + "type":"string", + "max":24, + "min":10, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthBetween16And32":{ + "type":"string", + "max":32, + "min":16, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthBetween1And16":{ + "type":"string", + "max":16, + "min":1, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthBetween2And1024":{ + "type":"string", + "max":1024, + "min":2, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthBetween2And160":{ + "type":"string", + "max":160, + "min":2, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthBetween2And4":{ + "type":"string", + "max":4, + "min":2, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthBetween2And4096":{ + "type":"string", + "max":4096, + "min":2, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthBetween2And8":{ + "type":"string", + "max":8, + "min":2, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthBetween4And128":{ + "type":"string", + "max":128, + "min":4, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthEquals1":{ + "type":"string", + "max":1, + "min":1, + "pattern":"^[0-9A-F]+$" + }, + "HexLengthEquals16":{ + "type":"string", + "max":16, + "min":16, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthEquals2":{ + "type":"string", + "max":2, + "min":2, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthEquals4":{ + "type":"string", + "max":4, + "min":4, + "pattern":"^[0-9a-fA-F]+$" + }, + "HexLengthEquals8":{ + "type":"string", + "max":8, + "min":8, + "pattern":"^[0-9a-fA-F]+$" + }, + "Ibm3624NaturalPin":{ + "type":"structure", + "required":[ + "DecimalizationTable", + 
"PinValidationData", + "PinValidationDataPadCharacter" + ], + "members":{ + "DecimalizationTable":{ + "shape":"NumberLengthEquals16", + "documentation":"The decimalization table to use for IBM 3624 PIN algorithm. The table is used to convert the algorithm intermediate result from hexadecimal characters to decimal.
" + }, + "PinValidationData":{ + "shape":"NumberLengthBetween4And16", + "documentation":"The unique data for cardholder identification.
" + }, + "PinValidationDataPadCharacter":{ + "shape":"HexLengthEquals1", + "documentation":"The padding character for validation data.
" + } + }, + "documentation":"Parameters that are required to generate or verify Ibm3624 natural PIN.
" + }, + "Ibm3624PinFromOffset":{ + "type":"structure", + "required":[ + "DecimalizationTable", + "PinOffset", + "PinValidationData", + "PinValidationDataPadCharacter" + ], + "members":{ + "DecimalizationTable":{ + "shape":"NumberLengthEquals16", + "documentation":"The decimalization table to use for IBM 3624 PIN algorithm. The table is used to convert the algorithm intermediate result from hexadecimal characters to decimal.
" + }, + "PinOffset":{ + "shape":"NumberLengthBetween4And12", + "documentation":"The PIN offset value.
" + }, + "PinValidationData":{ + "shape":"NumberLengthBetween4And16", + "documentation":"The unique data for cardholder identification.
" + }, + "PinValidationDataPadCharacter":{ + "shape":"HexLengthEquals1", + "documentation":"The padding character for validation data.
" + } + }, + "documentation":"Parameters that are required to generate or verify Ibm3624 PIN from offset PIN.
" + }, + "Ibm3624PinOffset":{ + "type":"structure", + "required":[ + "DecimalizationTable", + "EncryptedPinBlock", + "PinValidationData", + "PinValidationDataPadCharacter" + ], + "members":{ + "DecimalizationTable":{ + "shape":"NumberLengthEquals16", + "documentation":"The decimalization table to use for IBM 3624 PIN algorithm. The table is used to convert the algorithm intermediate result from hexadecimal characters to decimal.
" + }, + "EncryptedPinBlock":{ + "shape":"HexLengthBetween16And32", + "documentation":"The encrypted PIN block data. According to ISO 9564 standard, a PIN Block is an encoded representation of a payment card Personal Account Number (PAN) and the cardholder Personal Identification Number (PIN).
" + }, + "PinValidationData":{ + "shape":"NumberLengthBetween4And16", + "documentation":"The unique data for cardholder identification.
" + }, + "PinValidationDataPadCharacter":{ + "shape":"HexLengthEquals1", + "documentation":"The padding character for validation data.
" + } + }, + "documentation":"Pparameters that are required to generate or verify Ibm3624 PIN offset PIN.
" + }, + "Ibm3624PinVerification":{ + "type":"structure", + "required":[ + "DecimalizationTable", + "PinOffset", + "PinValidationData", + "PinValidationDataPadCharacter" + ], + "members":{ + "DecimalizationTable":{ + "shape":"NumberLengthEquals16", + "documentation":"The decimalization table to use for IBM 3624 PIN algorithm. The table is used to convert the algorithm intermediate result from hexadecimal characters to decimal.
" + }, + "PinOffset":{ + "shape":"NumberLengthBetween4And12", + "documentation":"The PIN offset value.
" + }, + "PinValidationData":{ + "shape":"NumberLengthBetween4And16", + "documentation":"The unique data for cardholder identification.
" + }, + "PinValidationDataPadCharacter":{ + "shape":"HexLengthEquals1", + "documentation":"The padding character for validation data.
" + } + }, + "documentation":"Parameters that are required to generate or verify Ibm3624 PIN verification PIN.
" + }, + "Ibm3624RandomPin":{ + "type":"structure", + "required":[ + "DecimalizationTable", + "PinValidationData", + "PinValidationDataPadCharacter" + ], + "members":{ + "DecimalizationTable":{ + "shape":"NumberLengthEquals16", + "documentation":"The decimalization table to use for IBM 3624 PIN algorithm. The table is used to convert the algorithm intermediate result from hexadecimal characters to decimal.
" + }, + "PinValidationData":{ + "shape":"NumberLengthBetween4And16", + "documentation":"The unique data for cardholder identification.
" + }, + "PinValidationDataPadCharacter":{ + "shape":"HexLengthEquals1", + "documentation":"The padding character for validation data.
" + } + }, + "documentation":"Parameters that are required to generate or verify Ibm3624 random PIN.
" + }, + "IntegerRangeBetween0And9":{ + "type":"integer", + "box":true, + "max":9, + "min":0 + }, + "IntegerRangeBetween3And5Type":{ + "type":"integer", + "box":true, + "max":5, + "min":3 + }, + "IntegerRangeBetween4And12":{ + "type":"integer", + "max":12, + "min":4 + }, + "IntegerRangeBetween4And16":{ + "type":"integer", + "box":true, + "max":16, + "min":4 + }, + "InternalServerException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"The request processing has failed because of an unknown error, exception, or failure.
", + "error":{"httpStatusCode":500}, + "exception":true, + "fault":true + }, + "KeyArn":{ + "type":"string", + "max":150, + "min":70, + "pattern":"^arn:aws:payment-cryptography:[a-z]{2}-[a-z]{1,16}-[0-9]+:[0-9]{12}:key/[0-9a-zA-Z]{16,64}$" + }, + "KeyArnOrKeyAliasType":{ + "type":"string", + "max":322, + "min":7, + "pattern":"^arn:aws:payment-cryptography:[a-z]{2}-[a-z]{1,16}-[0-9]+:[0-9]{12}:(key/[0-9a-zA-Z]{16,64}|alias/[a-zA-Z0-9/_-]+)$|^alias/[a-zA-Z0-9/_-]+$" + }, + "KeyCheckValue":{ + "type":"string", + "max":16, + "min":4, + "pattern":"^[0-9a-fA-F]+$" + }, + "MacAlgorithm":{ + "type":"string", + "enum":[ + "ISO9797_ALGORITHM1", + "ISO9797_ALGORITHM3", + "CMAC", + "HMAC_SHA224", + "HMAC_SHA256", + "HMAC_SHA384", + "HMAC_SHA512" + ] + }, + "MacAlgorithmDukpt":{ + "type":"structure", + "required":[ + "DukptKeyVariant", + "KeySerialNumber" + ], + "members":{ + "DukptDerivationType":{ + "shape":"DukptDerivationType", + "documentation":"The key type derived using DUKPT from a Base Derivation Key (BDK) and Key Serial Number (KSN). This must be less than or equal to the strength of the BDK. For example, you can't use AES_128 as a derivation type for a BDK of AES_128 or TDES_2KEY.
The type of use of DUKPT, which can be MAC generation, MAC verification, or both.
" + }, + "KeySerialNumber":{ + "shape":"HexLengthBetween10And24", + "documentation":"The unique identifier known as Key Serial Number (KSN) that comes from an encrypting device using DUKPT encryption method. The KSN is derived from the encrypting device unique identifier and an internal transaction counter.
" + } + }, + "documentation":"Parameters required for DUKPT MAC generation and verification.
" + }, + "MacAlgorithmEmv":{ + "type":"structure", + "required":[ + "MajorKeyDerivationMode", + "PanSequenceNumber", + "PrimaryAccountNumber", + "SessionKeyDerivationMode", + "SessionKeyDerivationValue" + ], + "members":{ + "MajorKeyDerivationMode":{ + "shape":"MajorKeyDerivationMode", + "documentation":"The method to use when deriving the master key for EMV MAC generation or verification.
" + }, + "PanSequenceNumber":{ + "shape":"HexLengthEquals2", + "documentation":"A number that identifies and differentiates payment cards with the same Primary Account Number (PAN).
" + }, + "PrimaryAccountNumber":{ + "shape":"NumberLengthBetween12And19", + "documentation":"The Primary Account Number (PAN), a unique identifier for a payment credit or debit card and associates the card to a specific account holder.
" + }, + "SessionKeyDerivationMode":{ + "shape":"SessionKeyDerivationMode", + "documentation":"The method of deriving a session key for EMV MAC generation or verification.
" + }, + "SessionKeyDerivationValue":{ + "shape":"SessionKeyDerivationValue", + "documentation":"Parameters that are required to generate session key for EMV generation and verification.
" + } + }, + "documentation":"Parameters that are required for EMV MAC generation and verification.
" + }, + "MacAttributes":{ + "type":"structure", + "members":{ + "Algorithm":{ + "shape":"MacAlgorithm", + "documentation":"The encryption algorithm for MAC generation or verification.
" + }, + "DukptCmac":{ + "shape":"MacAlgorithmDukpt", + "documentation":"Parameters that are required for MAC generation or verification using DUKPT CMAC algorithm.
" + }, + "DukptIso9797Algorithm1":{ + "shape":"MacAlgorithmDukpt", + "documentation":"Parameters that are required for MAC generation or verification using DUKPT ISO 9797 algorithm1.
" + }, + "DukptIso9797Algorithm3":{ + "shape":"MacAlgorithmDukpt", + "documentation":"Parameters that are required for MAC generation or verification using DUKPT ISO 9797 algorithm2.
" + }, + "EmvMac":{ + "shape":"MacAlgorithmEmv", + "documentation":"Parameters that are required for MAC generation or verification using EMV MAC algorithm.
" + } + }, + "documentation":"Parameters that are required for DUKPT, HMAC, or EMV MAC generation or verification.
", + "union":true + }, + "MajorKeyDerivationMode":{ + "type":"string", + "enum":[ + "EMV_OPTION_A", + "EMV_OPTION_B" + ] + }, + "NumberLengthBetween12And19":{ + "type":"string", + "max":19, + "min":12, + "pattern":"^[0-9]+$", + "sensitive":true + }, + "NumberLengthBetween3And5":{ + "type":"string", + "max":5, + "min":3, + "pattern":"^[0-9]+$" + }, + "NumberLengthBetween4And12":{ + "type":"string", + "max":12, + "min":4, + "pattern":"^[0-9]+$" + }, + "NumberLengthBetween4And16":{ + "type":"string", + "max":16, + "min":4, + "pattern":"^[0-9]+$" + }, + "NumberLengthEquals16":{ + "type":"string", + "max":16, + "min":16, + "pattern":"^[0-9]+$" + }, + "NumberLengthEquals3":{ + "type":"string", + "max":3, + "min":3, + "pattern":"^[0-9]+$" + }, + "NumberLengthEquals4":{ + "type":"string", + "max":4, + "min":4, + "pattern":"^[0-9]+$" + }, + "PaddingType":{ + "type":"string", + "enum":[ + "PKCS1", + "OAEP_SHA1", + "OAEP_SHA256", + "OAEP_SHA512" + ] + }, + "PinBlockFormatForPinData":{ + "type":"string", + "enum":[ + "ISO_FORMAT_0", + "ISO_FORMAT_3" + ] + }, + "PinData":{ + "type":"structure", + "members":{ + "PinOffset":{ + "shape":"NumberLengthBetween4And12", + "documentation":"The PIN offset value.
" + }, + "VerificationValue":{ + "shape":"NumberLengthBetween4And12", + "documentation":"The unique data to identify a cardholder. In most cases, this is the same as cardholder's Primary Account Number (PAN). If a value is not provided, it defaults to PAN.
" + } + }, + "documentation":"Parameters that are required to generate, translate, or verify PIN data.
", + "union":true + }, + "PinGenerationAttributes":{ + "type":"structure", + "members":{ + "Ibm3624NaturalPin":{ + "shape":"Ibm3624NaturalPin", + "documentation":"Parameters that are required to generate or verify Ibm3624 natural PIN.
" + }, + "Ibm3624PinFromOffset":{ + "shape":"Ibm3624PinFromOffset", + "documentation":"Parameters that are required to generate or verify Ibm3624 PIN from offset PIN.
" + }, + "Ibm3624PinOffset":{ + "shape":"Ibm3624PinOffset", + "documentation":"Parameters that are required to generate or verify Ibm3624 PIN offset PIN.
" + }, + "Ibm3624RandomPin":{ + "shape":"Ibm3624RandomPin", + "documentation":"Parameters that are required to generate or verify Ibm3624 random PIN.
" + }, + "VisaPin":{ + "shape":"VisaPin", + "documentation":"Parameters that are required to generate or verify Visa PIN.
" + }, + "VisaPinVerificationValue":{ + "shape":"VisaPinVerificationValue", + "documentation":"Parameters that are required to generate or verify Visa PIN Verification Value (PVV).
" + } + }, + "documentation":"Parameters that are required for PIN data generation.
", + "union":true + }, + "PinVerificationAttributes":{ + "type":"structure", + "members":{ + "Ibm3624Pin":{ + "shape":"Ibm3624PinVerification", + "documentation":"Parameters that are required to generate or verify Ibm3624 PIN.
" + }, + "VisaPin":{ + "shape":"VisaPinVerification", + "documentation":"Parameters that are required to generate or verify Visa PIN.
" + } + }, + "documentation":"Parameters that are required for PIN data verification.
", + "union":true + }, + "ReEncryptDataInput":{ + "type":"structure", + "required":[ + "CipherText", + "IncomingEncryptionAttributes", + "IncomingKeyIdentifier", + "OutgoingEncryptionAttributes", + "OutgoingKeyIdentifier" + ], + "members":{ + "CipherText":{ + "shape":"HexEvenLengthBetween16And4096", + "documentation":"Ciphertext to be encrypted. The minimum allowed length is 16 bytes and maximum allowed length is 4096 bytes.
" + }, + "IncomingEncryptionAttributes":{ + "shape":"ReEncryptionAttributes", + "documentation":"The attributes and values for incoming ciphertext.
" + }, + "IncomingKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the encryption key of incoming ciphertext data.
The attributes and values for outgoing ciphertext data after encryption by Amazon Web Services Payment Cryptography.
" + }, + "OutgoingKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the encryption key of outgoing ciphertext data after encryption by Amazon Web Services Payment Cryptography.
The encrypted ciphertext.
" + }, + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN (Amazon Resource Name) of the encryption key that Amazon Web Services Payment Cryptography uses for plaintext encryption.
" + }, + "KeyCheckValue":{ + "shape":"KeyCheckValue", + "documentation":"The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes or \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + } + } + }, + "ReEncryptionAttributes":{ + "type":"structure", + "members":{ + "Dukpt":{"shape":"DukptEncryptionAttributes"}, + "Symmetric":{ + "shape":"SymmetricEncryptionAttributes", + "documentation":"Parameters that are required to encrypt data using symmetric keys.
" + } + }, + "documentation":"Parameters that are required to perform reencryption operation.
", + "union":true + }, + "ResourceNotFoundException":{ + "type":"structure", + "members":{ + "ResourceId":{ + "shape":"String", + "documentation":"The resource that is missing.
" + } + }, + "documentation":"The request was denied due to an invalid resource error.
", + "error":{ + "httpStatusCode":404, + "senderFault":true + }, + "exception":true + }, + "SessionKeyAmex":{ + "type":"structure", + "required":[ + "PanSequenceNumber", + "PrimaryAccountNumber" + ], + "members":{ + "PanSequenceNumber":{ + "shape":"HexLengthEquals2", + "documentation":"A number that identifies and differentiates payment cards with the same Primary Account Number (PAN).
" + }, + "PrimaryAccountNumber":{ + "shape":"NumberLengthBetween12And19", + "documentation":"The Primary Account Number (PAN) of the cardholder. A PAN is a unique identifier for a payment credit or debit card and associates the card to a specific account holder.
" + } + }, + "documentation":"Parameters to derive session key for an Amex payment card.
" + }, + "SessionKeyDerivation":{ + "type":"structure", + "members":{ + "Amex":{ + "shape":"SessionKeyAmex", + "documentation":"Parameters to derive session key for an Amex payment card for ARQC verification.
" + }, + "Emv2000":{ + "shape":"SessionKeyEmv2000", + "documentation":"Parameters to derive session key for an Emv2000 payment card for ARQC verification.
" + }, + "EmvCommon":{ + "shape":"SessionKeyEmvCommon", + "documentation":"Parameters to derive session key for an Emv common payment card for ARQC verification.
" + }, + "Mastercard":{ + "shape":"SessionKeyMastercard", + "documentation":"Parameters to derive session key for a Mastercard payment card for ARQC verification.
" + }, + "Visa":{ + "shape":"SessionKeyVisa", + "documentation":"Parameters to derive session key for a Visa payment cardfor ARQC verification.
" + } + }, + "documentation":"Parameters to derive a session key for Authorization Response Cryptogram (ARQC) verification.
", + "union":true + }, + "SessionKeyDerivationMode":{ + "type":"string", + "enum":[ + "EMV_COMMON_SESSION_KEY", + "EMV2000", + "AMEX", + "MASTERCARD_SESSION_KEY", + "VISA" + ] + }, + "SessionKeyDerivationValue":{ + "type":"structure", + "members":{ + "ApplicationCryptogram":{ + "shape":"HexLengthEquals16", + "documentation":"The cryptogram provided by the terminal during transaction processing.
" + }, + "ApplicationTransactionCounter":{ + "shape":"HexLengthBetween2And4", + "documentation":"The transaction counter that is provided by the terminal during transaction processing.
" + } + }, + "documentation":"Parameters to derive session key value using a MAC EMV algorithm.
", + "union":true + }, + "SessionKeyEmv2000":{ + "type":"structure", + "required":[ + "ApplicationTransactionCounter", + "PanSequenceNumber", + "PrimaryAccountNumber" + ], + "members":{ + "ApplicationTransactionCounter":{ + "shape":"HexLengthBetween2And4", + "documentation":"The transaction counter that is provided by the terminal during transaction processing.
" + }, + "PanSequenceNumber":{ + "shape":"HexLengthEquals2", + "documentation":"A number that identifies and differentiates payment cards with the same Primary Account Number (PAN).
" + }, + "PrimaryAccountNumber":{ + "shape":"NumberLengthBetween12And19", + "documentation":"The Primary Account Number (PAN) of the cardholder. A PAN is a unique identifier for a payment credit or debit card and associates the card to a specific account holder.
" + } + }, + "documentation":"Parameters to derive session key for an Emv2000 payment card for ARQC verification.
" + }, + "SessionKeyEmvCommon":{ + "type":"structure", + "required":[ + "ApplicationTransactionCounter", + "PanSequenceNumber", + "PrimaryAccountNumber" + ], + "members":{ + "ApplicationTransactionCounter":{ + "shape":"HexLengthBetween2And4", + "documentation":"The transaction counter that is provided by the terminal during transaction processing.
" + }, + "PanSequenceNumber":{ + "shape":"HexLengthEquals2", + "documentation":"A number that identifies and differentiates payment cards with the same Primary Account Number (PAN).
" + }, + "PrimaryAccountNumber":{ + "shape":"NumberLengthBetween12And19", + "documentation":"The Primary Account Number (PAN) of the cardholder. A PAN is a unique identifier for a payment credit or debit card and associates the card to a specific account holder.
" + } + }, + "documentation":"Parameters to derive session key for an Emv common payment card for ARQC verification.
" + }, + "SessionKeyMastercard":{ + "type":"structure", + "required":[ + "ApplicationTransactionCounter", + "PanSequenceNumber", + "PrimaryAccountNumber", + "UnpredictableNumber" + ], + "members":{ + "ApplicationTransactionCounter":{ + "shape":"HexLengthBetween2And4", + "documentation":"The transaction counter that is provided by the terminal during transaction processing.
" + }, + "PanSequenceNumber":{ + "shape":"HexLengthEquals2", + "documentation":"A number that identifies and differentiates payment cards with the same Primary Account Number (PAN).
" + }, + "PrimaryAccountNumber":{ + "shape":"NumberLengthBetween12And19", + "documentation":"The Primary Account Number (PAN) of the cardholder. A PAN is a unique identifier for a payment credit or debit card and associates the card to a specific account holder.
" + }, + "UnpredictableNumber":{ + "shape":"HexLengthBetween2And8", + "documentation":"A random number generated by the issuer.
" + } + }, + "documentation":"Parameters to derive session key for Mastercard payment card for ARQC verification.
" + }, + "SessionKeyVisa":{ + "type":"structure", + "required":[ + "PanSequenceNumber", + "PrimaryAccountNumber" + ], + "members":{ + "PanSequenceNumber":{ + "shape":"HexLengthEquals2", + "documentation":"A number that identifies and differentiates payment cards with the same Primary Account Number (PAN).
" + }, + "PrimaryAccountNumber":{ + "shape":"NumberLengthBetween12And19", + "documentation":"The Primary Account Number (PAN) of the cardholder. A PAN is a unique identifier for a payment credit or debit card and associates the card to a specific account holder.
" + } + }, + "documentation":"Parameters to derive session key for Visa payment card for ARQC verification.
" + }, + "String":{"type":"string"}, + "SymmetricEncryptionAttributes":{ + "type":"structure", + "required":["Mode"], + "members":{ + "InitializationVector":{ + "shape":"HexLength16Or32", + "documentation":"An input to cryptographic primitive used to provide the intial state. The InitializationVector is typically required have a random or psuedo-random value, but sometimes it only needs to be unpredictable or unique. If a value is not provided, Amazon Web Services Payment Cryptography generates a random value.
The block cipher mode of operation. Block ciphers are designed to encrypt a block of data of fixed size (for example, 128 bits). The size of the input block is usually same as the size of the encrypted output block, while the key length can be different. A mode of operation describes how to repeatedly apply a cipher's single-block operation to securely transform amounts of data larger than a block.
" + }, + "PaddingType":{ + "shape":"PaddingType", + "documentation":"The padding to be included with the data.
" + } + }, + "documentation":"Parameters requried to encrypt plaintext data using symmetric keys.
" + }, + "ThrottlingException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"The request was denied due to request throttling.
", + "error":{ + "httpStatusCode":429, + "senderFault":true + }, + "exception":true + }, + "TranslatePinDataInput":{ + "type":"structure", + "required":[ + "EncryptedPinBlock", + "IncomingKeyIdentifier", + "IncomingTranslationAttributes", + "OutgoingKeyIdentifier", + "OutgoingTranslationAttributes" + ], + "members":{ + "EncryptedPinBlock":{ + "shape":"HexLengthBetween16And32", + "documentation":"The encrypted PIN block data that Amazon Web Services Payment Cryptography translates.
" + }, + "IncomingDukptAttributes":{ + "shape":"DukptDerivationAttributes", + "documentation":"The attributes and values to use for incoming DUKPT encryption key for PIN block tranlation.
" + }, + "IncomingKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the encryption key under which incoming PIN block data is encrypted. This key type can be PEK or BDK.
The format of the incoming PIN block data for translation within Amazon Web Services Payment Cryptography.
" + }, + "OutgoingDukptAttributes":{ + "shape":"DukptDerivationAttributes", + "documentation":"The attributes and values to use for outgoing DUKPT encryption key after PIN block translation.
" + }, + "OutgoingKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the encryption key for encrypting outgoing PIN block data. This key type can be PEK or BDK.
The format of the outgoing PIN block data after translation by Amazon Web Services Payment Cryptography.
" + } + } + }, + "TranslatePinDataOutput":{ + "type":"structure", + "required":[ + "KeyArn", + "KeyCheckValue", + "PinBlock" + ], + "members":{ + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN of the encryption key that Amazon Web Services Payment Cryptography uses to encrypt outgoing PIN block data after translation.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes of \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + }, + "PinBlock":{ + "shape":"HexLengthBetween16And32", + "documentation":"The outgoing encrypted PIN block data after translation.
" + } + } + }, + "TranslationIsoFormats":{ + "type":"structure", + "members":{ + "IsoFormat0":{ + "shape":"TranslationPinDataIsoFormat034", + "documentation":"Parameters that are required for ISO9564 PIN format 0 translation.
" + }, + "IsoFormat1":{ + "shape":"TranslationPinDataIsoFormat1", + "documentation":"Parameters that are required for ISO9564 PIN format 1 translation.
" + }, + "IsoFormat3":{ + "shape":"TranslationPinDataIsoFormat034", + "documentation":"Parameters that are required for ISO9564 PIN format 3 translation.
" + }, + "IsoFormat4":{ + "shape":"TranslationPinDataIsoFormat034", + "documentation":"Parameters that are required for ISO9564 PIN format 4 translation.
" + } + }, + "documentation":"Parameters that are required for translation between ISO9564 PIN block formats 0,1,3,4.
", + "union":true + }, + "TranslationPinDataIsoFormat034":{ + "type":"structure", + "required":["PrimaryAccountNumber"], + "members":{ + "PrimaryAccountNumber":{ + "shape":"NumberLengthBetween12And19", + "documentation":"The Primary Account Number (PAN) of the cardholder. A PAN is a unique identifier for a payment credit or debit card and associates the card to a specific account holder.
" + } + }, + "documentation":"Parameters that are required for tranlation between ISO9564 PIN format 0,3,4 tranlation.
" + }, + "TranslationPinDataIsoFormat1":{ + "type":"structure", + "members":{ + }, + "documentation":"Parameters that are required for ISO9564 PIN format 1 tranlation.
" + }, + "ValidationException":{ + "type":"structure", + "required":["message"], + "members":{ + "fieldList":{ + "shape":"ValidationExceptionFieldList", + "documentation":"The request was denied due to an invalid request error.
" + }, + "message":{"shape":"String"} + }, + "documentation":"The request was denied due to an invalid request error.
", + "exception":true + }, + "ValidationExceptionField":{ + "type":"structure", + "required":[ + "message", + "path" + ], + "members":{ + "message":{ + "shape":"String", + "documentation":"The request was denied due to an invalid request error.
" + }, + "path":{ + "shape":"String", + "documentation":"The request was denied due to an invalid request error.
" + } + }, + "documentation":"The request was denied due to an invalid request error.
" + }, + "ValidationExceptionFieldList":{ + "type":"list", + "member":{"shape":"ValidationExceptionField"} + }, + "VerificationFailedException":{ + "type":"structure", + "required":[ + "Message", + "Reason" + ], + "members":{ + "Message":{"shape":"String"}, + "Reason":{ + "shape":"VerificationFailedReason", + "documentation":"The reason for the exception.
" + } + }, + "documentation":"This request failed verification.
", + "error":{ + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "VerificationFailedReason":{ + "type":"string", + "enum":[ + "INVALID_MAC", + "INVALID_PIN", + "INVALID_VALIDATION_DATA", + "INVALID_AUTH_REQUEST_CRYPTOGRAM" + ] + }, + "VerifyAuthRequestCryptogramInput":{ + "type":"structure", + "required":[ + "AuthRequestCryptogram", + "KeyIdentifier", + "MajorKeyDerivationMode", + "SessionKeyDerivationAttributes", + "TransactionData" + ], + "members":{ + "AuthRequestCryptogram":{ + "shape":"HexLengthEquals16", + "documentation":"The auth request cryptogram imported into Amazon Web Services Payment Cryptography for ARQC verification using a major encryption key and transaction data.
" + }, + "AuthResponseAttributes":{ + "shape":"CryptogramAuthResponse", + "documentation":"The attributes and values for auth request cryptogram verification. These parameters are required in case using ARPC Method 1 or Method 2 for ARQC verification.
" + }, + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the major encryption key that Amazon Web Services Payment Cryptography uses for ARQC verification.
The method to use when deriving the major encryption key for ARQC verification within Amazon Web Services Payment Cryptography. The same key derivation mode was used for ARQC generation outside of Amazon Web Services Payment Cryptography.
" + }, + "SessionKeyDerivationAttributes":{ + "shape":"SessionKeyDerivation", + "documentation":"The attributes and values to use for deriving a session key for ARQC verification within Amazon Web Services Payment Cryptography. The same attributes were used for ARQC generation outside of Amazon Web Services Payment Cryptography.
" + }, + "TransactionData":{ + "shape":"HexLengthBetween2And1024", + "documentation":"The transaction data that Amazon Web Services Payment Cryptography uses for ARQC verification. The same transaction is used for ARQC generation outside of Amazon Web Services Payment Cryptography.
" + } + } + }, + "VerifyAuthRequestCryptogramOutput":{ + "type":"structure", + "required":[ + "KeyArn", + "KeyCheckValue" + ], + "members":{ + "AuthResponseValue":{ + "shape":"HexLengthBetween1And16", + "documentation":"The result for ARQC verification or ARPC generation within Amazon Web Services Payment Cryptography.
" + }, + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN of the major encryption key that Amazon Web Services Payment Cryptography uses for ARQC verification.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes of \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + } + } + }, + "VerifyCardValidationDataInput":{ + "type":"structure", + "required":[ + "KeyIdentifier", + "PrimaryAccountNumber", + "ValidationData", + "VerificationAttributes" + ], + "members":{ + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the CVK encryption key that Amazon Web Services Payment Cryptography uses to verify card data.
The Primary Account Number (PAN), a unique identifier for a payment credit or debit card that associates the card with a specific account holder.
" + }, + "ValidationData":{ + "shape":"NumberLengthBetween3And5", + "documentation":"The CVV or CSC value for use for card data verification within Amazon Web Services Payment Cryptography.
" + }, + "VerificationAttributes":{ + "shape":"CardVerificationAttributes", + "documentation":"The algorithm to use for verification of card data within Amazon Web Services Payment Cryptography.
" + } + } + }, + "VerifyCardValidationDataOutput":{ + "type":"structure", + "required":[ + "KeyArn", + "KeyCheckValue" + ], + "members":{ + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN of the CVK encryption key that Amazon Web Services Payment Cryptography uses to verify CVV or CSC.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes of \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + } + } + }, + "VerifyMacInput":{ + "type":"structure", + "required":[ + "KeyIdentifier", + "Mac", + "MessageData", + "VerificationAttributes" + ], + "members":{ + "KeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the encryption key that Amazon Web Services Payment Cryptography uses to verify MAC data.
The MAC being verified.
" + }, + "MacLength":{ + "shape":"IntegerRangeBetween4And16", + "documentation":"The length of the MAC.
" + }, + "MessageData":{ + "shape":"HexLengthBetween2And4096", + "documentation":"The data on for which MAC is under verification.
" + }, + "VerificationAttributes":{ + "shape":"MacAttributes", + "documentation":"The attributes and data values to use for MAC verification within Amazon Web Services Payment Cryptography.
" + } + } + }, + "VerifyMacOutput":{ + "type":"structure", + "required":[ + "KeyArn", + "KeyCheckValue" + ], + "members":{ + "KeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN of the encryption key that Amazon Web Services Payment Cryptography uses for MAC verification.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes of \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + } + } + }, + "VerifyPinDataInput":{ + "type":"structure", + "required":[ + "EncryptedPinBlock", + "EncryptionKeyIdentifier", + "PinBlockFormat", + "PrimaryAccountNumber", + "VerificationAttributes", + "VerificationKeyIdentifier" + ], + "members":{ + "DukptAttributes":{ + "shape":"DukptAttributes", + "documentation":"The attributes and values for the DUKPT encrypted PIN block data.
" + }, + "EncryptedPinBlock":{ + "shape":"HexLengthBetween16And32", + "documentation":"The encrypted PIN block data that Amazon Web Services Payment Cryptography verifies.
" + }, + "EncryptionKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the encryption key under which the PIN block data is encrypted. This key type can be PEK or BDK.
The PIN encoding format for PIN data generation as specified in ISO 9564. Amazon Web Services Payment Cryptography supports ISO_Format_0 and ISO_Format_3.
The ISO_Format_0 PIN block format is equivalent to the ANSI X9.8, VISA-1, and ECI-1 PIN block formats. It is similar to a VISA-4 PIN block format. It supports a PIN from 4 to 12 digits in length.
The ISO_Format_3 PIN block format is the same as ISO_Format_0 except that the fill digits are random values from 10 to 15.
The length of the PIN being verified.
", + "box":true + }, + "PrimaryAccountNumber":{ + "shape":"NumberLengthBetween12And19", + "documentation":"The Primary Account Number (PAN), a unique identifier for a payment credit or debit card that associates the card with a specific account holder.
" + }, + "VerificationAttributes":{ + "shape":"PinVerificationAttributes", + "documentation":"The attributes and values for PIN data verification.
" + }, + "VerificationKeyIdentifier":{ + "shape":"KeyArnOrKeyAliasType", + "documentation":"The keyARN of the PIN verification key.
The keyARN of the PEK that Amazon Web Services Payment Cryptography uses for encrypted PIN block generation.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes of \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + }, + "VerificationKeyArn":{ + "shape":"KeyArn", + "documentation":"The keyARN of the PIN encryption key that Amazon Web Services Payment Cryptography uses for PIN or PIN Offset verification.
The key check value (KCV) of the encryption key. The KCV is used to check if all parties holding a given key have the same key or to detect that a key has changed. Amazon Web Services Payment Cryptography calculates the KCV by using standard algorithms, typically by encrypting 8 or 16 bytes of \"00\" or \"01\" and then truncating the result to the first 3 bytes, or 6 hex digits, of the resulting cryptogram.
" + } + } + }, + "VisaPin":{ + "type":"structure", + "required":["PinVerificationKeyIndex"], + "members":{ + "PinVerificationKeyIndex":{ + "shape":"IntegerRangeBetween0And9", + "documentation":"The value for PIN verification index. It is used in the Visa PIN algorithm to calculate the PVV (PIN Verification Value).
" + } + }, + "documentation":"Parameters that are required to generate or verify Visa PIN.
" + }, + "VisaPinVerification":{ + "type":"structure", + "required":[ + "PinVerificationKeyIndex", + "VerificationValue" + ], + "members":{ + "PinVerificationKeyIndex":{ + "shape":"IntegerRangeBetween0And9", + "documentation":"The value for PIN verification index. It is used in the Visa PIN algorithm to calculate the PVV (PIN Verification Value).
" + }, + "VerificationValue":{ + "shape":"NumberLengthBetween4And12", + "documentation":"Parameters that are required to generate or verify Visa PVV (PIN Verification Value).
" + } + }, + "documentation":"Parameters that are required to generate or verify Visa PIN.
" + }, + "VisaPinVerificationValue":{ + "type":"structure", + "required":[ + "EncryptedPinBlock", + "PinVerificationKeyIndex" + ], + "members":{ + "EncryptedPinBlock":{ + "shape":"HexLengthBetween16And32", + "documentation":"The encrypted PIN block data to verify.
" + }, + "PinVerificationKeyIndex":{ + "shape":"IntegerRangeBetween0And9", + "documentation":"The value for PIN verification index. It is used in the Visa PIN algorithm to calculate the PVV (PIN Verification Value).
" + } + }, + "documentation":"Parameters that are required to generate or verify Visa PVV (PIN Verification Value).
" + } + }, + "documentation":"You use the Amazon Web Services Payment Cryptography Data Plane to manage how encryption keys are used for payment-related transaction processing and associated cryptographic operations. You can encrypt, decrypt, generate, verify, and translate payment-related cryptographic operations in Amazon Web Services Payment Cryptography. For more information, see Data operations in the Amazon Web Services Payment Cryptography User Guide.
To manage your encryption keys, you use the Amazon Web Services Payment Cryptography Control Plane. You can create, import, export, share, manage, and delete keys. You can also manage Identity and Access Management (IAM) policies for keys.
" +} From 9587cf76db3e92ce65670ecfda18dfd9b3341076 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 8 Jun 2023 18:06:22 +0000 Subject: [PATCH 068/317] Amazon Timestream Write Update: This release adds the capability for customers to define how their data should be partitioned, optimizing for certain access patterns. This definition will take place as a part of the table creation. --- ...feature-AmazonTimestreamWrite-3c4fa6a.json | 6 + .../codegen-resources/endpoint-tests.json | 128 ++++++++++++------ .../codegen-resources/service-2.json | 66 ++++++++- 3 files changed, 158 insertions(+), 42 deletions(-) create mode 100644 .changes/next-release/feature-AmazonTimestreamWrite-3c4fa6a.json diff --git a/.changes/next-release/feature-AmazonTimestreamWrite-3c4fa6a.json b/.changes/next-release/feature-AmazonTimestreamWrite-3c4fa6a.json new file mode 100644 index 000000000000..0350df2f96f3 --- /dev/null +++ b/.changes/next-release/feature-AmazonTimestreamWrite-3c4fa6a.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Timestream Write", + "contributor": "", + "description": "This release adds the capability for customers to define how their data should be partitioned, optimizing for certain access patterns. This definition will take place as a part of the table creation." 
+} diff --git a/services/timestreamwrite/src/main/resources/codegen-resources/endpoint-tests.json b/services/timestreamwrite/src/main/resources/codegen-resources/endpoint-tests.json index 21efa51ffd86..37439a64a27a 100644 --- a/services/timestreamwrite/src/main/resources/codegen-resources/endpoint-tests.json +++ b/services/timestreamwrite/src/main/resources/codegen-resources/endpoint-tests.json @@ -8,9 +8,9 @@ } }, "params": { - "UseDualStack": true, + "Region": "us-east-1", "UseFIPS": true, - "Region": "us-east-1" + "UseDualStack": true } }, { @@ -21,9 +21,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-east-1", "UseFIPS": true, - "Region": "us-east-1" + "UseDualStack": false } }, { @@ -34,9 +34,9 @@ } }, "params": { - "UseDualStack": true, + "Region": "us-east-1", "UseFIPS": false, - "Region": "us-east-1" + "UseDualStack": true } }, { @@ -47,9 +47,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-east-1", "UseFIPS": false, - "Region": "us-east-1" + "UseDualStack": false } }, { @@ -60,9 +60,9 @@ } }, "params": { - "UseDualStack": true, + "Region": "cn-north-1", "UseFIPS": true, - "Region": "cn-north-1" + "UseDualStack": true } }, { @@ -73,9 +73,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "cn-north-1", "UseFIPS": true, - "Region": "cn-north-1" + "UseDualStack": false } }, { @@ -86,9 +86,9 @@ } }, "params": { - "UseDualStack": true, + "Region": "cn-north-1", "UseFIPS": false, - "Region": "cn-north-1" + "UseDualStack": true } }, { @@ -99,9 +99,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "cn-north-1", "UseFIPS": false, - "Region": "cn-north-1" + "UseDualStack": false } }, { @@ -112,9 +112,9 @@ } }, "params": { - "UseDualStack": true, + "Region": "us-gov-east-1", "UseFIPS": true, - "Region": "us-gov-east-1" + "UseDualStack": true } }, { @@ -125,9 +125,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-gov-east-1", "UseFIPS": true, - "Region": "us-gov-east-1" + "UseDualStack": false } }, { @@ -138,9 
+138,9 @@ } }, "params": { - "UseDualStack": true, + "Region": "us-gov-east-1", "UseFIPS": false, - "Region": "us-gov-east-1" + "UseDualStack": true } }, { @@ -151,9 +151,20 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-gov-east-1", "UseFIPS": false, - "Region": "us-gov-east-1" + "UseDualStack": false + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": true, + "UseDualStack": true } }, { @@ -164,9 +175,20 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-iso-east-1", "UseFIPS": true, - "Region": "us-iso-east-1" + "UseDualStack": false + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "error": "DualStack is enabled but this partition does not support DualStack" + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": false, + "UseDualStack": true } }, { @@ -177,9 +199,20 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-iso-east-1", "UseFIPS": false, - "Region": "us-iso-east-1" + "UseDualStack": false + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": true, + "UseDualStack": true } }, { @@ -190,9 +223,20 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-isob-east-1", "UseFIPS": true, - "Region": "us-isob-east-1" + "UseDualStack": false + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "error": "DualStack is enabled but this partition does not support DualStack" + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": false, + "UseDualStack": true } }, 
{ @@ -203,9 +247,9 @@ } }, "params": { - "UseDualStack": false, + "Region": "us-isob-east-1", "UseFIPS": false, - "Region": "us-isob-east-1" + "UseDualStack": false } }, { @@ -216,9 +260,9 @@ } }, "params": { - "UseDualStack": false, - "UseFIPS": false, "Region": "us-east-1", + "UseFIPS": false, + "UseDualStack": false, "Endpoint": "https://example.com" } }, @@ -230,8 +274,8 @@ } }, "params": { - "UseDualStack": false, "UseFIPS": false, + "UseDualStack": false, "Endpoint": "https://example.com" } }, @@ -241,9 +285,9 @@ "error": "Invalid Configuration: FIPS and custom endpoint are not supported" }, "params": { - "UseDualStack": false, - "UseFIPS": true, "Region": "us-east-1", + "UseFIPS": true, + "UseDualStack": false, "Endpoint": "https://example.com" } }, @@ -253,11 +297,17 @@ "error": "Invalid Configuration: Dualstack and custom endpoint are not supported" }, "params": { - "UseDualStack": true, - "UseFIPS": false, "Region": "us-east-1", + "UseFIPS": false, + "UseDualStack": true, "Endpoint": "https://example.com" } + }, + { + "documentation": "Missing region", + "expect": { + "error": "Invalid Configuration: Missing Region" + } } ], "version": "1.0" diff --git a/services/timestreamwrite/src/main/resources/codegen-resources/service-2.json b/services/timestreamwrite/src/main/resources/codegen-resources/service-2.json index 86bfc0adae31..b415b7b1b695 100644 --- a/services/timestreamwrite/src/main/resources/codegen-resources/service-2.json +++ b/services/timestreamwrite/src/main/resources/codegen-resources/service-2.json @@ -32,7 +32,7 @@ {"shape":"ServiceQuotaExceededException"}, {"shape":"InvalidEndpointException"} ], - "documentation":"Creates a new Timestream batch load task. A batch load task processes data from a CSV source in an S3 location and writes to a Timestream table. A mapping from source to target is defined in a batch load task. Errors and events are written to a report at an S3 location. 
For the report, if the KMS key is not specified, the batch load task will be encrypted with a Timestream managed KMS key located in your account. For more information, see Amazon Web Services managed keys. Service quotas apply. For details, see code sample.
", + "documentation":"Creates a new Timestream batch load task. A batch load task processes data from a CSV source in an S3 location and writes to a Timestream table. A mapping from source to target is defined in a batch load task. Errors and events are written to a report at an S3 location. For the report, if the KMS key is not specified, the report will be encrypted with an S3 managed key when SSE_S3 is the option. Otherwise an error is thrown. For more information, see Amazon Web Services managed keys. Service quotas apply. For details, see code sample.
Contains properties to set on the table when enabling magnetic store writes.
" + }, + "Schema":{ + "shape":"Schema", + "documentation":"The schema of the table.
" } } }, @@ -1164,7 +1168,7 @@ }, "Value":{ "shape":"StringValue2048", - "documentation":"The value for the MeasureValue.
" + "documentation":"The value for the MeasureValue. For information, see Data types.
" }, "Type":{ "shape":"MeasureValueType", @@ -1276,6 +1280,44 @@ "max":20, "min":1 }, + "PartitionKey":{ + "type":"structure", + "required":["Type"], + "members":{ + "Type":{ + "shape":"PartitionKeyType", + "documentation":"The type of the partition key. Options are DIMENSION (dimension key) and MEASURE (measure key).
" + }, + "Name":{ + "shape":"SchemaName", + "documentation":"The name of the attribute used for a dimension key.
" + }, + "EnforcementInRecord":{ + "shape":"PartitionKeyEnforcementLevel", + "documentation":"The level of enforcement for the specification of a dimension key in ingested records. Options are REQUIRED (dimension key must be specified) and OPTIONAL (dimension key does not have to be specified).
" + } + }, + "documentation":"An attribute used in partitioning data in a table. A dimension key partitions data using the values of the dimension specified by the dimension-name as partition key, while a measure key partitions data using measure names (values of the 'measure_name' column).
" + }, + "PartitionKeyEnforcementLevel":{ + "type":"string", + "enum":[ + "REQUIRED", + "OPTIONAL" + ] + }, + "PartitionKeyList":{ + "type":"list", + "member":{"shape":"PartitionKey"}, + "min":1 + }, + "PartitionKeyType":{ + "type":"string", + "enum":[ + "DIMENSION", + "MEASURE" + ] + }, "Record":{ "type":"structure", "members":{ @@ -1293,7 +1335,7 @@ }, "MeasureValueType":{ "shape":"MeasureValueType", - "documentation":" Contains the data type of the measure value for the time-series data point. Default type is DOUBLE.
Contains the data type of the measure value for the time-series data point. Default type is DOUBLE. For more information, see Data types.
A non-empty list of partition keys defining the attributes used to partition the table data. The order of the list determines the partition hierarchy. The name and type of each partition key as well as the partition key order cannot be changed after the table is created. However, the enforcement level of each partition key can be changed.
" + } + }, + "documentation":"A Schema specifies the expected data model of the table.
" + }, "SchemaName":{ "type":"string", "min":1 @@ -1575,6 +1627,10 @@ "MagneticStoreWriteProperties":{ "shape":"MagneticStoreWriteProperties", "documentation":"Contains properties to set on the table when enabling magnetic store writes.
" + }, + "Schema":{ + "shape":"Schema", + "documentation":"The schema of the table.
" } }, "documentation":"Represents a database table in Timestream. Tables contain one or more related time series. You can modify the retention duration of the memory store and the magnetic store for a table.
" @@ -1738,6 +1794,10 @@ "MagneticStoreWriteProperties":{ "shape":"MagneticStoreWriteProperties", "documentation":"Contains properties to set on the table when enabling magnetic store writes.
" + }, + "Schema":{ + "shape":"Schema", + "documentation":"The schema of the table.
" } } }, From 40258a2021c02d90d6c476085063fc3c2217e192 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 8 Jun 2023 18:08:35 +0000 Subject: [PATCH 069/317] Updated endpoints.json and partitions.json. --- .../next-release/feature-AWSSDKforJavav2-0443982.json | 6 ++++++ .../awssdk/regions/internal/region/endpoints.json | 10 ++++++++++ 2 files changed, 16 insertions(+) create mode 100644 .changes/next-release/feature-AWSSDKforJavav2-0443982.json diff --git a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json new file mode 100644 index 000000000000..e5b5ee3ca5e3 --- /dev/null +++ b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated endpoint and partition metadata." +} diff --git a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json index 873364c295f0..8a12637e5344 100644 --- a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json +++ b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json @@ -10513,13 +10513,17 @@ "ap-northeast-2" : { }, "ap-northeast-3" : { }, "ap-south-1" : { }, + "ap-south-2" : { }, "ap-southeast-1" : { }, "ap-southeast-2" : { }, "ap-southeast-3" : { }, + "ap-southeast-4" : { }, "ca-central-1" : { }, "eu-central-1" : { }, + "eu-central-2" : { }, "eu-north-1" : { }, "eu-south-1" : { }, + "eu-south-2" : { }, "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, @@ -22577,6 +22581,12 @@ } } }, + "simspaceweaver" : { + "endpoints" : { + "us-gov-east-1" : { }, + "us-gov-west-1" : { } + } + }, "sms" : { "endpoints" : { "fips-us-gov-east-1" : { From ef1e51b722f9e8916caa8023cca3ca736c0623bb Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 8 Jun 
2023 18:09:50 +0000 Subject: [PATCH 070/317] Release 2.20.82. Updated CHANGELOG.md, README.md and all pom.xml. --- .changes/2.20.82.json | 54 +++++++++++++++++++ .../feature-AWSComprehendMedical-27b9b3c.json | 6 --- .../feature-AWSSDKforJavav2-0443982.json | 6 --- ...feature-AWSSDKforJavav2AWSSTS-d87c45d.json | 6 --- .../feature-AWSServiceCatalog-aa246b4.json | 6 --- .../feature-AmazonAthena-18332e3.json | 6 --- ...feature-AmazonTimestreamWrite-3c4fa6a.json | 6 --- ...ymentCryptographyControlPlane-201dbc1.json | 6 --- ...-PaymentCryptographyDataPlane-bac9166.json | 6 --- CHANGELOG.md | 33 ++++++++++++ README.md | 8 +-- archetypes/archetype-app-quickstart/pom.xml | 2 +- archetypes/archetype-lambda/pom.xml | 2 +- archetypes/archetype-tools/pom.xml | 2 +- archetypes/pom.xml | 2 +- aws-sdk-java/pom.xml | 12 ++++- bom-internal/pom.xml | 2 +- bom/pom.xml | 12 ++++- bundle/pom.xml | 2 +- codegen-lite-maven-plugin/pom.xml | 2 +- codegen-lite/pom.xml | 2 +- codegen-maven-plugin/pom.xml | 2 +- codegen/pom.xml | 2 +- core/annotations/pom.xml | 2 +- core/arns/pom.xml | 2 +- core/auth-crt/pom.xml | 2 +- core/auth/pom.xml | 2 +- core/aws-core/pom.xml | 2 +- core/crt-core/pom.xml | 2 +- core/endpoints-spi/pom.xml | 2 +- core/imds/pom.xml | 2 +- core/json-utils/pom.xml | 2 +- core/metrics-spi/pom.xml | 2 +- core/pom.xml | 2 +- core/profiles/pom.xml | 2 +- core/protocols/aws-cbor-protocol/pom.xml | 2 +- core/protocols/aws-json-protocol/pom.xml | 2 +- core/protocols/aws-query-protocol/pom.xml | 2 +- core/protocols/aws-xml-protocol/pom.xml | 2 +- core/protocols/pom.xml | 2 +- core/protocols/protocol-core/pom.xml | 2 +- core/regions/pom.xml | 2 +- core/sdk-core/pom.xml | 2 +- http-client-spi/pom.xml | 2 +- http-clients/apache-client/pom.xml | 2 +- http-clients/aws-crt-client/pom.xml | 2 +- http-clients/netty-nio-client/pom.xml | 2 +- http-clients/pom.xml | 2 +- http-clients/url-connection-client/pom.xml | 2 +- .../cloudwatch-metric-publisher/pom.xml | 2 +- metric-publishers/pom.xml | 
2 +- pom.xml | 2 +- release-scripts/pom.xml | 2 +- services-custom/dynamodb-enhanced/pom.xml | 2 +- services-custom/pom.xml | 2 +- services-custom/s3-transfer-manager/pom.xml | 2 +- services/accessanalyzer/pom.xml | 2 +- services/account/pom.xml | 2 +- services/acm/pom.xml | 2 +- services/acmpca/pom.xml | 2 +- services/alexaforbusiness/pom.xml | 2 +- services/amp/pom.xml | 2 +- services/amplify/pom.xml | 2 +- services/amplifybackend/pom.xml | 2 +- services/amplifyuibuilder/pom.xml | 2 +- services/apigateway/pom.xml | 2 +- services/apigatewaymanagementapi/pom.xml | 2 +- services/apigatewayv2/pom.xml | 2 +- services/appconfig/pom.xml | 2 +- services/appconfigdata/pom.xml | 2 +- services/appflow/pom.xml | 2 +- services/appintegrations/pom.xml | 2 +- services/applicationautoscaling/pom.xml | 2 +- services/applicationcostprofiler/pom.xml | 2 +- services/applicationdiscovery/pom.xml | 2 +- services/applicationinsights/pom.xml | 2 +- services/appmesh/pom.xml | 2 +- services/apprunner/pom.xml | 2 +- services/appstream/pom.xml | 2 +- services/appsync/pom.xml | 2 +- services/arczonalshift/pom.xml | 2 +- services/athena/pom.xml | 2 +- services/auditmanager/pom.xml | 2 +- services/autoscaling/pom.xml | 2 +- services/autoscalingplans/pom.xml | 2 +- services/backup/pom.xml | 2 +- services/backupgateway/pom.xml | 2 +- services/backupstorage/pom.xml | 2 +- services/batch/pom.xml | 2 +- services/billingconductor/pom.xml | 2 +- services/braket/pom.xml | 2 +- services/budgets/pom.xml | 2 +- services/chime/pom.xml | 2 +- services/chimesdkidentity/pom.xml | 2 +- services/chimesdkmediapipelines/pom.xml | 2 +- services/chimesdkmeetings/pom.xml | 2 +- services/chimesdkmessaging/pom.xml | 2 +- services/chimesdkvoice/pom.xml | 2 +- services/cleanrooms/pom.xml | 2 +- services/cloud9/pom.xml | 2 +- services/cloudcontrol/pom.xml | 2 +- services/clouddirectory/pom.xml | 2 +- services/cloudformation/pom.xml | 2 +- services/cloudfront/pom.xml | 2 +- services/cloudhsm/pom.xml | 2 +- 
services/cloudhsmv2/pom.xml | 2 +- services/cloudsearch/pom.xml | 2 +- services/cloudsearchdomain/pom.xml | 2 +- services/cloudtrail/pom.xml | 2 +- services/cloudtraildata/pom.xml | 2 +- services/cloudwatch/pom.xml | 2 +- services/cloudwatchevents/pom.xml | 2 +- services/cloudwatchlogs/pom.xml | 2 +- services/codeartifact/pom.xml | 2 +- services/codebuild/pom.xml | 2 +- services/codecatalyst/pom.xml | 2 +- services/codecommit/pom.xml | 2 +- services/codedeploy/pom.xml | 2 +- services/codeguruprofiler/pom.xml | 2 +- services/codegurureviewer/pom.xml | 2 +- services/codepipeline/pom.xml | 2 +- services/codestar/pom.xml | 2 +- services/codestarconnections/pom.xml | 2 +- services/codestarnotifications/pom.xml | 2 +- services/cognitoidentity/pom.xml | 2 +- services/cognitoidentityprovider/pom.xml | 2 +- services/cognitosync/pom.xml | 2 +- services/comprehend/pom.xml | 2 +- services/comprehendmedical/pom.xml | 2 +- services/computeoptimizer/pom.xml | 2 +- services/config/pom.xml | 2 +- services/connect/pom.xml | 2 +- services/connectcampaigns/pom.xml | 2 +- services/connectcases/pom.xml | 2 +- services/connectcontactlens/pom.xml | 2 +- services/connectparticipant/pom.xml | 2 +- services/controltower/pom.xml | 2 +- services/costandusagereport/pom.xml | 2 +- services/costexplorer/pom.xml | 2 +- services/customerprofiles/pom.xml | 2 +- services/databasemigration/pom.xml | 2 +- services/databrew/pom.xml | 2 +- services/dataexchange/pom.xml | 2 +- services/datapipeline/pom.xml | 2 +- services/datasync/pom.xml | 2 +- services/dax/pom.xml | 2 +- services/detective/pom.xml | 2 +- services/devicefarm/pom.xml | 2 +- services/devopsguru/pom.xml | 2 +- services/directconnect/pom.xml | 2 +- services/directory/pom.xml | 2 +- services/dlm/pom.xml | 2 +- services/docdb/pom.xml | 2 +- services/docdbelastic/pom.xml | 2 +- services/drs/pom.xml | 2 +- services/dynamodb/pom.xml | 2 +- services/ebs/pom.xml | 2 +- services/ec2/pom.xml | 2 +- services/ec2instanceconnect/pom.xml | 2 +- 
services/ecr/pom.xml | 2 +- services/ecrpublic/pom.xml | 2 +- services/ecs/pom.xml | 2 +- services/efs/pom.xml | 2 +- services/eks/pom.xml | 2 +- services/elasticache/pom.xml | 2 +- services/elasticbeanstalk/pom.xml | 2 +- services/elasticinference/pom.xml | 2 +- services/elasticloadbalancing/pom.xml | 2 +- services/elasticloadbalancingv2/pom.xml | 2 +- services/elasticsearch/pom.xml | 2 +- services/elastictranscoder/pom.xml | 2 +- services/emr/pom.xml | 2 +- services/emrcontainers/pom.xml | 2 +- services/emrserverless/pom.xml | 2 +- services/eventbridge/pom.xml | 2 +- services/evidently/pom.xml | 2 +- services/finspace/pom.xml | 2 +- services/finspacedata/pom.xml | 2 +- services/firehose/pom.xml | 2 +- services/fis/pom.xml | 2 +- services/fms/pom.xml | 2 +- services/forecast/pom.xml | 2 +- services/forecastquery/pom.xml | 2 +- services/frauddetector/pom.xml | 2 +- services/fsx/pom.xml | 2 +- services/gamelift/pom.xml | 2 +- services/gamesparks/pom.xml | 2 +- services/glacier/pom.xml | 2 +- services/globalaccelerator/pom.xml | 2 +- services/glue/pom.xml | 2 +- services/grafana/pom.xml | 2 +- services/greengrass/pom.xml | 2 +- services/greengrassv2/pom.xml | 2 +- services/groundstation/pom.xml | 2 +- services/guardduty/pom.xml | 2 +- services/health/pom.xml | 2 +- services/healthlake/pom.xml | 2 +- services/honeycode/pom.xml | 2 +- services/iam/pom.xml | 2 +- services/identitystore/pom.xml | 2 +- services/imagebuilder/pom.xml | 2 +- services/inspector/pom.xml | 2 +- services/inspector2/pom.xml | 2 +- services/internetmonitor/pom.xml | 2 +- services/iot/pom.xml | 2 +- services/iot1clickdevices/pom.xml | 2 +- services/iot1clickprojects/pom.xml | 2 +- services/iotanalytics/pom.xml | 2 +- services/iotdataplane/pom.xml | 2 +- services/iotdeviceadvisor/pom.xml | 2 +- services/iotevents/pom.xml | 2 +- services/ioteventsdata/pom.xml | 2 +- services/iotfleethub/pom.xml | 2 +- services/iotfleetwise/pom.xml | 2 +- services/iotjobsdataplane/pom.xml | 2 +- 
services/iotroborunner/pom.xml | 2 +- services/iotsecuretunneling/pom.xml | 2 +- services/iotsitewise/pom.xml | 2 +- services/iotthingsgraph/pom.xml | 2 +- services/iottwinmaker/pom.xml | 2 +- services/iotwireless/pom.xml | 2 +- services/ivs/pom.xml | 2 +- services/ivschat/pom.xml | 2 +- services/ivsrealtime/pom.xml | 2 +- services/kafka/pom.xml | 2 +- services/kafkaconnect/pom.xml | 2 +- services/kendra/pom.xml | 2 +- services/kendraranking/pom.xml | 2 +- services/keyspaces/pom.xml | 2 +- services/kinesis/pom.xml | 2 +- services/kinesisanalytics/pom.xml | 2 +- services/kinesisanalyticsv2/pom.xml | 2 +- services/kinesisvideo/pom.xml | 2 +- services/kinesisvideoarchivedmedia/pom.xml | 2 +- services/kinesisvideomedia/pom.xml | 2 +- services/kinesisvideosignaling/pom.xml | 2 +- services/kinesisvideowebrtcstorage/pom.xml | 2 +- services/kms/pom.xml | 2 +- services/lakeformation/pom.xml | 2 +- services/lambda/pom.xml | 2 +- services/lexmodelbuilding/pom.xml | 2 +- services/lexmodelsv2/pom.xml | 2 +- services/lexruntime/pom.xml | 2 +- services/lexruntimev2/pom.xml | 2 +- services/licensemanager/pom.xml | 2 +- .../licensemanagerlinuxsubscriptions/pom.xml | 2 +- .../licensemanagerusersubscriptions/pom.xml | 2 +- services/lightsail/pom.xml | 2 +- services/location/pom.xml | 2 +- services/lookoutequipment/pom.xml | 2 +- services/lookoutmetrics/pom.xml | 2 +- services/lookoutvision/pom.xml | 2 +- services/m2/pom.xml | 2 +- services/machinelearning/pom.xml | 2 +- services/macie/pom.xml | 2 +- services/macie2/pom.xml | 2 +- services/managedblockchain/pom.xml | 2 +- services/marketplacecatalog/pom.xml | 2 +- services/marketplacecommerceanalytics/pom.xml | 2 +- services/marketplaceentitlement/pom.xml | 2 +- services/marketplacemetering/pom.xml | 2 +- services/mediaconnect/pom.xml | 2 +- services/mediaconvert/pom.xml | 2 +- services/medialive/pom.xml | 2 +- services/mediapackage/pom.xml | 2 +- services/mediapackagev2/pom.xml | 2 +- services/mediapackagevod/pom.xml | 2 +- 
services/mediastore/pom.xml | 2 +- services/mediastoredata/pom.xml | 2 +- services/mediatailor/pom.xml | 2 +- services/memorydb/pom.xml | 2 +- services/mgn/pom.xml | 2 +- services/migrationhub/pom.xml | 2 +- services/migrationhubconfig/pom.xml | 2 +- services/migrationhuborchestrator/pom.xml | 2 +- services/migrationhubrefactorspaces/pom.xml | 2 +- services/migrationhubstrategy/pom.xml | 2 +- services/mobile/pom.xml | 2 +- services/mq/pom.xml | 2 +- services/mturk/pom.xml | 2 +- services/mwaa/pom.xml | 2 +- services/neptune/pom.xml | 2 +- services/networkfirewall/pom.xml | 2 +- services/networkmanager/pom.xml | 2 +- services/nimble/pom.xml | 2 +- services/oam/pom.xml | 2 +- services/omics/pom.xml | 2 +- services/opensearch/pom.xml | 2 +- services/opensearchserverless/pom.xml | 2 +- services/opsworks/pom.xml | 2 +- services/opsworkscm/pom.xml | 2 +- services/organizations/pom.xml | 2 +- services/osis/pom.xml | 2 +- services/outposts/pom.xml | 2 +- services/panorama/pom.xml | 2 +- services/paymentcryptography/pom.xml | 2 +- services/paymentcryptographydata/pom.xml | 2 +- services/personalize/pom.xml | 2 +- services/personalizeevents/pom.xml | 2 +- services/personalizeruntime/pom.xml | 2 +- services/pi/pom.xml | 2 +- services/pinpoint/pom.xml | 2 +- services/pinpointemail/pom.xml | 2 +- services/pinpointsmsvoice/pom.xml | 2 +- services/pinpointsmsvoicev2/pom.xml | 2 +- services/pipes/pom.xml | 2 +- services/polly/pom.xml | 2 +- services/pom.xml | 4 +- services/pricing/pom.xml | 2 +- services/privatenetworks/pom.xml | 2 +- services/proton/pom.xml | 2 +- services/qldb/pom.xml | 2 +- services/qldbsession/pom.xml | 2 +- services/quicksight/pom.xml | 2 +- services/ram/pom.xml | 2 +- services/rbin/pom.xml | 2 +- services/rds/pom.xml | 2 +- services/rdsdata/pom.xml | 2 +- services/redshift/pom.xml | 2 +- services/redshiftdata/pom.xml | 2 +- services/redshiftserverless/pom.xml | 2 +- services/rekognition/pom.xml | 2 +- services/resiliencehub/pom.xml | 2 +- 
services/resourceexplorer2/pom.xml | 2 +- services/resourcegroups/pom.xml | 2 +- services/resourcegroupstaggingapi/pom.xml | 2 +- services/robomaker/pom.xml | 2 +- services/rolesanywhere/pom.xml | 2 +- services/route53/pom.xml | 2 +- services/route53domains/pom.xml | 2 +- services/route53recoverycluster/pom.xml | 2 +- services/route53recoverycontrolconfig/pom.xml | 2 +- services/route53recoveryreadiness/pom.xml | 2 +- services/route53resolver/pom.xml | 2 +- services/rum/pom.xml | 2 +- services/s3/pom.xml | 2 +- services/s3control/pom.xml | 2 +- services/s3outposts/pom.xml | 2 +- services/sagemaker/pom.xml | 2 +- services/sagemakera2iruntime/pom.xml | 2 +- services/sagemakeredge/pom.xml | 2 +- services/sagemakerfeaturestoreruntime/pom.xml | 2 +- services/sagemakergeospatial/pom.xml | 2 +- services/sagemakermetrics/pom.xml | 2 +- services/sagemakerruntime/pom.xml | 2 +- services/savingsplans/pom.xml | 2 +- services/scheduler/pom.xml | 2 +- services/schemas/pom.xml | 2 +- services/secretsmanager/pom.xml | 2 +- services/securityhub/pom.xml | 2 +- services/securitylake/pom.xml | 2 +- .../serverlessapplicationrepository/pom.xml | 2 +- services/servicecatalog/pom.xml | 2 +- services/servicecatalogappregistry/pom.xml | 2 +- services/servicediscovery/pom.xml | 2 +- services/servicequotas/pom.xml | 2 +- services/ses/pom.xml | 2 +- services/sesv2/pom.xml | 2 +- services/sfn/pom.xml | 2 +- services/shield/pom.xml | 2 +- services/signer/pom.xml | 2 +- services/simspaceweaver/pom.xml | 2 +- services/sms/pom.xml | 2 +- services/snowball/pom.xml | 2 +- services/snowdevicemanagement/pom.xml | 2 +- services/sns/pom.xml | 2 +- services/sqs/pom.xml | 2 +- services/ssm/pom.xml | 2 +- services/ssmcontacts/pom.xml | 2 +- services/ssmincidents/pom.xml | 2 +- services/ssmsap/pom.xml | 2 +- services/sso/pom.xml | 2 +- services/ssoadmin/pom.xml | 2 +- services/ssooidc/pom.xml | 2 +- services/storagegateway/pom.xml | 2 +- services/sts/pom.xml | 2 +- services/support/pom.xml | 2 +- 
services/supportapp/pom.xml | 2 +- services/swf/pom.xml | 2 +- services/synthetics/pom.xml | 2 +- services/textract/pom.xml | 2 +- services/timestreamquery/pom.xml | 2 +- services/timestreamwrite/pom.xml | 2 +- services/tnb/pom.xml | 2 +- services/transcribe/pom.xml | 2 +- services/transcribestreaming/pom.xml | 2 +- services/transfer/pom.xml | 2 +- services/translate/pom.xml | 2 +- services/voiceid/pom.xml | 2 +- services/vpclattice/pom.xml | 2 +- services/waf/pom.xml | 2 +- services/wafv2/pom.xml | 2 +- services/wellarchitected/pom.xml | 2 +- services/wisdom/pom.xml | 2 +- services/workdocs/pom.xml | 2 +- services/worklink/pom.xml | 2 +- services/workmail/pom.xml | 2 +- services/workmailmessageflow/pom.xml | 2 +- services/workspaces/pom.xml | 2 +- services/workspacesweb/pom.xml | 2 +- services/xray/pom.xml | 2 +- test/auth-tests/pom.xml | 2 +- test/codegen-generated-classes-test/pom.xml | 2 +- test/http-client-tests/pom.xml | 2 +- test/module-path-tests/pom.xml | 2 +- test/protocol-tests-core/pom.xml | 2 +- test/protocol-tests/pom.xml | 2 +- test/region-testing/pom.xml | 2 +- test/ruleset-testing-core/pom.xml | 2 +- test/s3-benchmarks/pom.xml | 2 +- test/sdk-benchmarks/pom.xml | 2 +- test/sdk-native-image-test/pom.xml | 2 +- test/service-test-utils/pom.xml | 2 +- test/stability-tests/pom.xml | 2 +- test/test-utils/pom.xml | 2 +- test/tests-coverage-reporting/pom.xml | 2 +- third-party/pom.xml | 2 +- third-party/third-party-jackson-core/pom.xml | 2 +- .../pom.xml | 2 +- utils/pom.xml | 2 +- 420 files changed, 522 insertions(+), 461 deletions(-) create mode 100644 .changes/2.20.82.json delete mode 100644 .changes/next-release/feature-AWSComprehendMedical-27b9b3c.json delete mode 100644 .changes/next-release/feature-AWSSDKforJavav2-0443982.json delete mode 100644 .changes/next-release/feature-AWSSDKforJavav2AWSSTS-d87c45d.json delete mode 100644 .changes/next-release/feature-AWSServiceCatalog-aa246b4.json delete mode 100644 
.changes/next-release/feature-AmazonAthena-18332e3.json delete mode 100644 .changes/next-release/feature-AmazonTimestreamWrite-3c4fa6a.json delete mode 100644 .changes/next-release/feature-PaymentCryptographyControlPlane-201dbc1.json delete mode 100644 .changes/next-release/feature-PaymentCryptographyDataPlane-bac9166.json diff --git a/.changes/2.20.82.json b/.changes/2.20.82.json new file mode 100644 index 000000000000..9ca6d3458ec4 --- /dev/null +++ b/.changes/2.20.82.json @@ -0,0 +1,54 @@ +{ + "version": "2.20.82", + "date": "2023-06-08", + "entries": [ + { + "type": "feature", + "category": "AWS Comprehend Medical", + "contributor": "", + "description": "This release supports a new set of entities and traits." + }, + { + "type": "feature", + "category": "AWS STS", + "contributor": "", + "description": "Updates the core STS credential provider logic to return AwsSessionCredentials instead of an STS-specific class, and adds expirationTime to AwsSessionCredentials" + }, + { + "type": "feature", + "category": "AWS Service Catalog", + "contributor": "", + "description": "New parameter added in ServiceCatalog DescribeProvisioningArtifact api - IncludeProvisioningArtifactParameters. This parameter can be used to return information about the parameters used to provision the product" + }, + { + "type": "feature", + "category": "Amazon Athena", + "contributor": "", + "description": "You can now define custom spark properties at start of the session for use cases like cluster encryption, table formats, and general Spark tuning." + }, + { + "type": "feature", + "category": "Amazon Timestream Write", + "contributor": "", + "description": "This release adds the capability for customers to define how their data should be partitioned, optimizing for certain access patterns. This definition will take place as a part of the table creation." 
+ }, + { + "type": "feature", + "category": "Payment Cryptography Control Plane", + "contributor": "", + "description": "Initial release of AWS Payment Cryptography Control Plane service for creating and managing cryptographic keys used during card payment processing." + }, + { + "type": "feature", + "category": "Payment Cryptography Data Plane", + "contributor": "", + "description": "Initial release of AWS Payment Cryptography DataPlane Plane service for performing cryptographic operations typically used during card payment processing." + }, + { + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated endpoint and partition metadata." + } + ] +} \ No newline at end of file diff --git a/.changes/next-release/feature-AWSComprehendMedical-27b9b3c.json b/.changes/next-release/feature-AWSComprehendMedical-27b9b3c.json deleted file mode 100644 index 5f3d549befac..000000000000 --- a/.changes/next-release/feature-AWSComprehendMedical-27b9b3c.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Comprehend Medical", - "contributor": "", - "description": "This release supports a new set of entities and traits." -} diff --git a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json deleted file mode 100644 index e5b5ee3ca5e3..000000000000 --- a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS SDK for Java v2", - "contributor": "", - "description": "Updated endpoint and partition metadata." 
-} diff --git a/.changes/next-release/feature-AWSSDKforJavav2AWSSTS-d87c45d.json b/.changes/next-release/feature-AWSSDKforJavav2AWSSTS-d87c45d.json deleted file mode 100644 index 277a5bc562e5..000000000000 --- a/.changes/next-release/feature-AWSSDKforJavav2AWSSTS-d87c45d.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS STS", - "contributor": "", - "description": "Updates the core STS credential provider logic to return AwsSessionCredentials instead of an STS-specific class, and adds expirationTime to AwsSessionCredentials" -} diff --git a/.changes/next-release/feature-AWSServiceCatalog-aa246b4.json b/.changes/next-release/feature-AWSServiceCatalog-aa246b4.json deleted file mode 100644 index 1b590a6bc19a..000000000000 --- a/.changes/next-release/feature-AWSServiceCatalog-aa246b4.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Service Catalog", - "contributor": "", - "description": "New parameter added in ServiceCatalog DescribeProvisioningArtifact api - IncludeProvisioningArtifactParameters. This parameter can be used to return information about the parameters used to provision the product" -} diff --git a/.changes/next-release/feature-AmazonAthena-18332e3.json b/.changes/next-release/feature-AmazonAthena-18332e3.json deleted file mode 100644 index 8aa3a56e1a85..000000000000 --- a/.changes/next-release/feature-AmazonAthena-18332e3.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Athena", - "contributor": "", - "description": "You can now define custom spark properties at start of the session for use cases like cluster encryption, table formats, and general Spark tuning." 
-} diff --git a/.changes/next-release/feature-AmazonTimestreamWrite-3c4fa6a.json b/.changes/next-release/feature-AmazonTimestreamWrite-3c4fa6a.json deleted file mode 100644 index 0350df2f96f3..000000000000 --- a/.changes/next-release/feature-AmazonTimestreamWrite-3c4fa6a.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Timestream Write", - "contributor": "", - "description": "This release adds the capability for customers to define how their data should be partitioned, optimizing for certain access patterns. This definition will take place as a part of the table creation." -} diff --git a/.changes/next-release/feature-PaymentCryptographyControlPlane-201dbc1.json b/.changes/next-release/feature-PaymentCryptographyControlPlane-201dbc1.json deleted file mode 100644 index 9e5b26bb2681..000000000000 --- a/.changes/next-release/feature-PaymentCryptographyControlPlane-201dbc1.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Payment Cryptography Control Plane", - "contributor": "", - "description": "Initial release of AWS Payment Cryptography Control Plane service for creating and managing cryptographic keys used during card payment processing." -} diff --git a/.changes/next-release/feature-PaymentCryptographyDataPlane-bac9166.json b/.changes/next-release/feature-PaymentCryptographyDataPlane-bac9166.json deleted file mode 100644 index 75513b30df3f..000000000000 --- a/.changes/next-release/feature-PaymentCryptographyDataPlane-bac9166.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Payment Cryptography Data Plane", - "contributor": "", - "description": "Initial release of AWS Payment Cryptography DataPlane Plane service for performing cryptographic operations typically used during card payment processing." 
-} diff --git a/CHANGELOG.md b/CHANGELOG.md index abffde95d11a..eff26ede9c2c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,36 @@ +# __2.20.82__ __2023-06-08__ +## __AWS Comprehend Medical__ + - ### Features + - This release supports a new set of entities and traits. + +## __AWS SDK for Java v2__ + - ### Features + - Updated endpoint and partition metadata. + +## __AWS STS__ + - ### Features + - Updates the core STS credential provider logic to return AwsSessionCredentials instead of an STS-specific class, and adds expirationTime to AwsSessionCredentials + +## __AWS Service Catalog__ + - ### Features + - New parameter added in ServiceCatalog DescribeProvisioningArtifact api - IncludeProvisioningArtifactParameters. This parameter can be used to return information about the parameters used to provision the product + +## __Amazon Athena__ + - ### Features + - You can now define custom spark properties at start of the session for use cases like cluster encryption, table formats, and general Spark tuning. + +## __Amazon Timestream Write__ + - ### Features + - This release adds the capability for customers to define how their data should be partitioned, optimizing for certain access patterns. This definition will take place as a part of the table creation. + +## __Payment Cryptography Control Plane__ + - ### Features + - Initial release of AWS Payment Cryptography Control Plane service for creating and managing cryptographic keys used during card payment processing. + +## __Payment Cryptography Data Plane__ + - ### Features + - Initial release of AWS Payment Cryptography DataPlane Plane service for performing cryptographic operations typically used during card payment processing. 
+ # __2.20.81__ __2023-06-07__ ## __AWS CloudFormation__ - ### Features diff --git a/README.md b/README.md index 7a19156ce2bc..12e4aca1144e 100644 --- a/README.md +++ b/README.md @@ -52,7 +52,7 @@ To automatically manage module versions (currently all modules have the same ver+ * When enabled, a non-blocking dns resolver will be used instead, by modifying netty's bootstrap configuration. + * See https://netty.io/news/2016/05/26/4-1-0-Final.html + */ + Builder useNonBlockingDnsResolver(Boolean useNonBlockingDnsResolver); } /** @@ -492,6 +502,7 @@ private static final class DefaultBuilder implements Builder { private Http2Configuration http2Configuration; private SslProvider sslProvider; private ProxyConfiguration proxyConfiguration; + private Boolean useNonBlockingDnsResolver; private DefaultBuilder() { } @@ -716,6 +727,16 @@ public void setHttp2Configuration(Http2Configuration http2Configuration) { http2Configuration(http2Configuration); } + @Override + public Builder useNonBlockingDnsResolver(Boolean useNonBlockingDnsResolver) { + this.useNonBlockingDnsResolver = useNonBlockingDnsResolver; + return this; + } + + public void setUseNonBlockingDnsResolver(Boolean useNonBlockingDnsResolver) { + useNonBlockingDnsResolver(useNonBlockingDnsResolver); + } + @Override public SdkAsyncHttpClient buildWithDefaults(AttributeMap serviceDefaults) { if (standardOptions.get(SdkHttpConfigurationOption.TLS_NEGOTIATION_TIMEOUT) == null) { diff --git a/http-clients/netty-nio-client/src/main/java/software/amazon/awssdk/http/nio/netty/SdkEventLoopGroup.java b/http-clients/netty-nio-client/src/main/java/software/amazon/awssdk/http/nio/netty/SdkEventLoopGroup.java index abb665f2c39a..254211e9303f 100644 --- a/http-clients/netty-nio-client/src/main/java/software/amazon/awssdk/http/nio/netty/SdkEventLoopGroup.java +++ b/http-clients/netty-nio-client/src/main/java/software/amazon/awssdk/http/nio/netty/SdkEventLoopGroup.java @@ -19,11 +19,13 @@ import io.netty.channel.ChannelFactory; import 
io.netty.channel.EventLoopGroup; import io.netty.channel.nio.NioEventLoopGroup; +import io.netty.channel.socket.DatagramChannel; +import io.netty.channel.socket.nio.NioDatagramChannel; import io.netty.channel.socket.nio.NioSocketChannel; import java.util.Optional; import java.util.concurrent.ThreadFactory; import software.amazon.awssdk.annotations.SdkPublicApi; -import software.amazon.awssdk.http.nio.netty.internal.utils.SocketChannelResolver; +import software.amazon.awssdk.http.nio.netty.internal.utils.ChannelResolver; import software.amazon.awssdk.utils.ThreadFactoryBuilder; import software.amazon.awssdk.utils.Validate; @@ -39,7 +41,8 @@ * *
+ * If classesFirst is true, loads the class via the class loaders of the
+ * optionally specified classes in the order they are given; if not found,
+ * via the context class loader of the current thread; and if still not found,
+ * via the caller's class loader as the last resort.
+ *
+ * @param fqcn
+ * fully qualified class name of the target class to be loaded
+ * @param classesFirst
+ * true if the class loaders of the optionally specified classes
+ * take precedence over the context class loader of the current
+ * thread; false otherwise.
+ * @param classes
+ * class loader providers
+ * @return the class loaded; never null
+ *
+ * @throws ClassNotFoundException if failed to load the class
+ */
+ public static Class<?> loadClass(String fqcn, boolean classesFirst,
+ Class<?>... classes) throws ClassNotFoundException {
+ Class<?> target = null;
+ if (classesFirst) {
+ target = loadClassViaClasses(fqcn, classes);
+ if (target == null) {
+ target = loadClassViaContext(fqcn);
+ }
+ } else {
+ target = loadClassViaContext(fqcn);
+ if (target == null) {
+ target = loadClassViaClasses(fqcn, classes);
+ }
+ }
+ return target == null ? Class.forName(fqcn) : target;
+ }
+
+ /**
+ * Attempt to get the current thread's context class loader, falling back to the system class loader if null.
+ * @return a {@link ClassLoader} or null if none found
+ */
+ private static ClassLoader contextClassLoader() {
+ ClassLoader threadClassLoader = Thread.currentThread().getContextClassLoader();
+ if (threadClassLoader != null) {
+ return threadClassLoader;
+ }
+ return ClassLoader.getSystemClassLoader();
+ }
+
+ /**
+ * Attempt to get the class loader that loads the given classes, falling back to the thread context class loader if null.
+ *
+ * @param classes the classes
+ * @return a {@link ClassLoader} or null if none found
+ */
+ public static ClassLoader classLoader(Class<?>... classes) {
+ if (classes != null) {
+ for (Class<?> clzz : classes) {
+ ClassLoader classLoader = clzz.getClassLoader();
+
+ if (classLoader != null) {
+ return classLoader;
+ }
+ }
+ }
+
+ return contextClassLoader();
+ }
+
+}
\ No newline at end of file
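The classes-first lookup order implemented by `loadClass` above can be illustrated with a minimal, self-contained sketch. This is plain Java mirroring the patch's logic, not the SDK's `ClassLoaderHelper` itself:

```java
// Sketch of the lookup order from the patch: try the loaders of the given
// classes first (when classesFirst is true), then the current thread's
// context class loader, then Class.forName as a last resort.
public class ClassLoaderOrderSketch {

    static Class<?> loadClass(String fqcn, boolean classesFirst, Class<?>... classes)
            throws ClassNotFoundException {
        Class<?> target = classesFirst ? viaClasses(fqcn, classes) : viaContext(fqcn);
        if (target == null) {
            target = classesFirst ? viaContext(fqcn) : viaClasses(fqcn, classes);
        }
        return target != null ? target : Class.forName(fqcn);
    }

    private static Class<?> viaClasses(String fqcn, Class<?>... classes) {
        if (classes != null) {
            for (Class<?> clzz : classes) {
                ClassLoader cl = clzz.getClassLoader();
                if (cl != null) {
                    try {
                        return cl.loadClass(fqcn);
                    } catch (ClassNotFoundException ignored) {
                        // fall through to the next candidate loader
                    }
                }
            }
        }
        return null;
    }

    private static Class<?> viaContext(String fqcn) {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        if (cl == null) {
            cl = ClassLoader.getSystemClassLoader();
        }
        try {
            return cl.loadClass(fqcn);
        } catch (ClassNotFoundException e) {
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> c = loadClass("java.lang.String", true, ClassLoaderOrderSketch.class);
        System.out.println(c.getName()); // prints java.lang.String
    }
}
```

Bootstrap-loaded classes resolve through any of the three paths via parent delegation, so the interesting cases are classes visible only to one of the supplied loaders (e.g. plugin or container scenarios).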
From 4d7a27cb3fadfbaa3c4c0842c82953a6ff9c84ac Mon Sep 17 00:00:00 2001
From: Stephen Flavin
* {@link #subscribe(Subscriber)} should be implemented to tie this publisher to a subscriber. Ideally each call to subscribe
- * should reproduce the content (i.e if you are reading from a file each subscribe call should produce a {@link
- * org.reactivestreams.Subscription} that reads the file fully). This allows for automatic retries to be performed in the SDK. If
- * the content is not reproducible, an exception may be thrown from any subsequent {@link #subscribe(Subscriber)} calls.
+ * should reproduce the content (i.e. if you are reading from a file each subscribe call should produce a
+ * {@link org.reactivestreams.Subscription} that reads the file fully). This allows for automatic retries to be performed in the
+ * SDK. If the content is not reproducible, an exception may be thrown from any subsequent {@link #subscribe(Subscriber)} calls.
*
- * It is important to only send the number of chunks that the subscriber requests to avoid out of memory situations.
- * The subscriber does it's own buffering so it's usually not needed to buffer in the publisher. Additional permits
- * for chunks will be notified via the {@link org.reactivestreams.Subscription#request(long)} method.
+ * It is important to only send the number of chunks that the subscriber requests to avoid out of memory situations. The
+ * subscriber does its own buffering, so it's usually not necessary to buffer in the publisher. Additional permits for chunks will be
+ * notified via the {@link org.reactivestreams.Subscription#request(long)} method.
* As the method name implies, this is unsafe. Use {@link #fromBytes(byte[])} unless you're sure you know the risks.
+ *
+ * @param bytes The bytes to send to the service.
+ * @return AsyncRequestBody instance.
+ */
+ static AsyncRequestBody fromBytesUnsafe(byte[] bytes) {
+ return ByteBuffersAsyncRequestBody.from(bytes);
+ }
+
+ /**
+ * Creates an {@link AsyncRequestBody} from a {@link ByteBuffer}. This will copy the contents of the {@link ByteBuffer} to
+ * prevent modifications to the provided {@link ByteBuffer} from being reflected in the {@link AsyncRequestBody}.
+ *
+ * NOTE: This method ignores the current read position. Use {@link #fromRemainingByteBuffer(ByteBuffer)} if you need
+ * it to copy only the remaining readable bytes.
*
* @param byteBuffer ByteBuffer to send to the service.
* @return AsyncRequestBody instance.
*/
static AsyncRequestBody fromByteBuffer(ByteBuffer byteBuffer) {
- return fromBytes(BinaryUtils.copyAllBytesFrom(byteBuffer));
+ ByteBuffer immutableCopy = BinaryUtils.immutableCopyOf(byteBuffer);
+ immutableCopy.rewind();
+ return ByteBuffersAsyncRequestBody.of((long) immutableCopy.remaining(), immutableCopy);
+ }
+
+ /**
+ * Creates an {@link AsyncRequestBody} from the remaining readable bytes from a {@link ByteBuffer}. This will copy the
+ * remaining contents of the {@link ByteBuffer} to prevent modifications to the provided {@link ByteBuffer} from being
+ * reflected in the {@link AsyncRequestBody}.
+ * Unlike {@link #fromByteBuffer(ByteBuffer)}, this method respects the current read position of the buffer and reads
+ * only the remaining bytes.
+ *
+ * @param byteBuffer ByteBuffer to send to the service.
+ * @return AsyncRequestBody instance.
+ */
+ static AsyncRequestBody fromRemainingByteBuffer(ByteBuffer byteBuffer) {
+ ByteBuffer immutableCopy = BinaryUtils.immutableCopyOfRemaining(byteBuffer);
+ return ByteBuffersAsyncRequestBody.of((long) immutableCopy.remaining(), immutableCopy);
+ }
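The position-handling difference between `fromByteBuffer` and `fromRemainingByteBuffer` comes down to the two copy strategies sketched below in plain `java.nio` (a stand-in mirroring the patch's use of `BinaryUtils`, not SDK code):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch of the two copy strategies used above: a full copy that ignores the
// current read position versus a copy of only the remaining readable bytes.
public class ByteBufferCopySketch {

    // Mirrors the "ignore position" behavior of fromByteBuffer: rewind first,
    // then copy everything up to the limit.
    static ByteBuffer copyAll(ByteBuffer bb) {
        ByteBuffer ro = bb.asReadOnlyBuffer();
        ro.rewind();
        ByteBuffer copy = ByteBuffer.allocate(ro.remaining());
        copy.put(ro).flip();
        return copy.asReadOnlyBuffer();
    }

    // Mirrors fromRemainingByteBuffer: copy from the current position only.
    static ByteBuffer copyRemaining(ByteBuffer bb) {
        ByteBuffer ro = bb.asReadOnlyBuffer();
        ByteBuffer copy = ByteBuffer.allocate(ro.remaining());
        copy.put(ro).flip();
        return copy.asReadOnlyBuffer();
    }

    static String asString(ByteBuffer bb) {
        byte[] bytes = new byte[bb.remaining()];
        bb.duplicate().get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer source = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));
        source.position(2); // simulate a partially read buffer

        System.out.println(asString(copyAll(source)));       // hello
        System.out.println(asString(copyRemaining(source))); // llo
    }
}
```

Both strategies copy, so later mutations of `source` are not reflected in the returned buffers; that is the property the non-`Unsafe` factory methods guarantee.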
+
+ /**
+ * Creates an {@link AsyncRequestBody} from a {@link ByteBuffer} without copying the contents of the
+ * {@link ByteBuffer}. This introduces concurrency risks, allowing the caller to modify the {@link ByteBuffer} stored in this
+ * {@code AsyncRequestBody} implementation.
+ *
+ * NOTE: This method ignores the current read position. Use {@link #fromRemainingByteBufferUnsafe(ByteBuffer)} if you
+ * need it to read only the remaining readable bytes.
+ *
+ * As the method name implies, this is unsafe. Use {@link #fromByteBuffer(ByteBuffer)} unless you're sure you know the
+ * risks.
+ *
+ * @param byteBuffer ByteBuffer to send to the service.
+ * @return AsyncRequestBody instance.
+ */
+ static AsyncRequestBody fromByteBufferUnsafe(ByteBuffer byteBuffer) {
+ ByteBuffer readOnlyBuffer = byteBuffer.asReadOnlyBuffer();
+ readOnlyBuffer.rewind();
+ return ByteBuffersAsyncRequestBody.of((long) readOnlyBuffer.remaining(), readOnlyBuffer);
+ }
+
+ /**
+ * Creates an {@link AsyncRequestBody} from a {@link ByteBuffer} without copying the contents of the
+ * {@link ByteBuffer}. This introduces concurrency risks, allowing the caller to modify the {@link ByteBuffer} stored in this
+ * {@code AsyncRequestBody} implementation.
+ * Unlike {@link #fromByteBufferUnsafe(ByteBuffer)}, this method respects the current read position of
+ * the buffer and reads only the remaining bytes.
+ *
+ * As the method name implies, this is unsafe. Use {@link #fromByteBuffer(ByteBuffer)} unless you're sure you know the
+ * risks.
+ *
+ * @param byteBuffer ByteBuffer to send to the service.
+ * @return AsyncRequestBody instance.
+ */
+ static AsyncRequestBody fromRemainingByteBufferUnsafe(ByteBuffer byteBuffer) {
+ ByteBuffer readOnlyBuffer = byteBuffer.asReadOnlyBuffer();
+ return ByteBuffersAsyncRequestBody.of((long) readOnlyBuffer.remaining(), readOnlyBuffer);
+ }
+
+ /**
+ * Creates an {@link AsyncRequestBody} from a {@link ByteBuffer} array. This will copy the contents of each {@link ByteBuffer}
+ * to prevent modifications to any provided {@link ByteBuffer} from being reflected in the {@link AsyncRequestBody}.
+ *
+ * NOTE: This method ignores the current read position of each {@link ByteBuffer}. Use
+ * {@link #fromRemainingByteBuffers(ByteBuffer...)} if you need it to copy only the remaining readable bytes.
+ *
+ * @param byteBuffers ByteBuffer array to send to the service.
+ * @return AsyncRequestBody instance.
+ */
+ static AsyncRequestBody fromByteBuffers(ByteBuffer... byteBuffers) {
+ ByteBuffer[] immutableCopy = Arrays.stream(byteBuffers)
+ .map(BinaryUtils::immutableCopyOf)
+ .peek(ByteBuffer::rewind)
+ .toArray(ByteBuffer[]::new);
+ return ByteBuffersAsyncRequestBody.of(immutableCopy);
+ }
+
+ /**
+ * Creates an {@link AsyncRequestBody} from a {@link ByteBuffer} array. This will copy the remaining contents of each
+ * {@link ByteBuffer} to prevent modifications to any provided {@link ByteBuffer} from being reflected in the
+ * {@link AsyncRequestBody}.
+ * Unlike {@link #fromByteBuffers(ByteBuffer...)},
+ * this method respects the current read position of each buffer and reads only the remaining bytes.
+ *
+ * @param byteBuffers ByteBuffer array to send to the service.
+ * @return AsyncRequestBody instance.
+ */
+ static AsyncRequestBody fromRemainingByteBuffers(ByteBuffer... byteBuffers) {
+ ByteBuffer[] immutableCopy = Arrays.stream(byteBuffers)
+ .map(BinaryUtils::immutableCopyOfRemaining)
+ .peek(ByteBuffer::rewind)
+ .toArray(ByteBuffer[]::new);
+ return ByteBuffersAsyncRequestBody.of(immutableCopy);
+ }
+
+ /**
+ * Creates an {@link AsyncRequestBody} from a {@link ByteBuffer} array without copying the contents of each
+ * {@link ByteBuffer}. This introduces concurrency risks, allowing the caller to modify any {@link ByteBuffer} stored in this
+ * {@code AsyncRequestBody} implementation.
+ *
+ * NOTE: This method ignores the current read position of each {@link ByteBuffer}. Use
+ * {@link #fromRemainingByteBuffersUnsafe(ByteBuffer...)} if you need it to read only the remaining readable bytes.
+ *
+ * As the method name implies, this is unsafe. Use {@link #fromByteBuffers(ByteBuffer...)} unless you're sure you know the
+ * risks.
+ *
+ * @param byteBuffers ByteBuffer array to send to the service.
+ * @return AsyncRequestBody instance.
+ */
+ static AsyncRequestBody fromByteBuffersUnsafe(ByteBuffer... byteBuffers) {
+ ByteBuffer[] readOnlyBuffers = Arrays.stream(byteBuffers)
+ .map(ByteBuffer::asReadOnlyBuffer)
+ .peek(ByteBuffer::rewind)
+ .toArray(ByteBuffer[]::new);
+ return ByteBuffersAsyncRequestBody.of(readOnlyBuffers);
+ }
+
+ /**
+ * Creates an {@link AsyncRequestBody} from a {@link ByteBuffer} array without copying the contents of each
+ * {@link ByteBuffer}. This introduces concurrency risks, allowing the caller to modify any {@link ByteBuffer} stored in this
+ * {@code AsyncRequestBody} implementation.
+ * Unlike {@link #fromByteBuffersUnsafe(ByteBuffer...)},
+ * this method respects the current read position of each buffer and reads only the remaining bytes.
+ *
+ * As the method name implies, this is unsafe. Use {@link #fromByteBuffers(ByteBuffer...)} unless you're sure you know the
+ * risks.
+ *
+ * @param byteBuffers ByteBuffer array to send to the service.
+ * @return AsyncRequestBody instance.
+ */
+ static AsyncRequestBody fromRemainingByteBuffersUnsafe(ByteBuffer... byteBuffers) {
+ ByteBuffer[] readOnlyBuffers = Arrays.stream(byteBuffers)
+ .map(ByteBuffer::asReadOnlyBuffer)
+ .toArray(ByteBuffer[]::new);
+ return ByteBuffersAsyncRequestBody.of(readOnlyBuffers);
}
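A small sketch of why the `Unsafe` variants above carry a concurrency warning: `asReadOnlyBuffer()` shares the backing storage with the source buffer, so writes made by the caller after handing the buffer off remain visible through the view (plain Java, not SDK code):

```java
import java.nio.ByteBuffer;

// Why the "unsafe" variants are unsafe: asReadOnlyBuffer() shares the backing
// storage with the source buffer, so later writes to the source are visible
// through the read-only view. The "safe" variants avoid this by copying.
public class UnsafeViewSketch {
    public static void main(String[] args) {
        ByteBuffer source = ByteBuffer.wrap(new byte[] {1, 2, 3});
        ByteBuffer view = source.asReadOnlyBuffer(); // no copy, shared contents

        source.put(0, (byte) 9); // caller mutates after handing off the buffer

        System.out.println(view.get(0)); // 9 — the mutation leaks into the view
    }
}
```

With automatic request retries, such a mutation could change the bytes sent on a retry, which is exactly the risk the javadoc warns about.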
/**
- * Creates a {@link AsyncRequestBody} from a {@link InputStream}.
+ * Creates an {@link AsyncRequestBody} from an {@link InputStream}.
*
* An {@link ExecutorService} is required in order to perform the blocking data reads, to prevent blocking the
* non-blocking event loop threads owned by the SDK.
@@ -242,7 +395,7 @@ static BlockingOutputStreamAsyncRequestBody forBlockingOutputStream(Long content
}
/**
- * Creates a {@link AsyncRequestBody} with no content.
+ * Creates an {@link AsyncRequestBody} with no content.
*
* @return AsyncRequestBody instance.
*/
diff --git a/core/sdk-core/src/main/java/software/amazon/awssdk/core/internal/async/ByteArrayAsyncRequestBody.java b/core/sdk-core/src/main/java/software/amazon/awssdk/core/internal/async/ByteArrayAsyncRequestBody.java
deleted file mode 100644
index 29205479b798..000000000000
--- a/core/sdk-core/src/main/java/software/amazon/awssdk/core/internal/async/ByteArrayAsyncRequestBody.java
+++ /dev/null
@@ -1,98 +0,0 @@
-/*
- * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
- *
- * Licensed under the Apache License, Version 2.0 (the "License").
- * You may not use this file except in compliance with the License.
- * A copy of the License is located at
- *
- * http://aws.amazon.com/apache2.0
- *
- * or in the "license" file accompanying this file. This file is distributed
- * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
- * express or implied. See the License for the specific language governing
- * permissions and limitations under the License.
- */
-
-package software.amazon.awssdk.core.internal.async;
-
-import java.nio.ByteBuffer;
-import java.util.Optional;
-import org.reactivestreams.Subscriber;
-import org.reactivestreams.Subscription;
-import software.amazon.awssdk.annotations.SdkInternalApi;
-import software.amazon.awssdk.core.async.AsyncRequestBody;
-import software.amazon.awssdk.utils.Logger;
-
-/**
- * An implementation of {@link AsyncRequestBody} for providing data from memory. This is created using static
- * methods on {@link AsyncRequestBody}
- *
- * @see AsyncRequestBody#fromBytes(byte[])
- * @see AsyncRequestBody#fromByteBuffer(ByteBuffer)
- * @see AsyncRequestBody#fromString(String)
- */
-@SdkInternalApi
-public final class ByteArrayAsyncRequestBody implements AsyncRequestBody {
- private static final Logger log = Logger.loggerFor(ByteArrayAsyncRequestBody.class);
-
- private final byte[] bytes;
-
- private final String mimetype;
-
- public ByteArrayAsyncRequestBody(byte[] bytes, String mimetype) {
- this.bytes = bytes.clone();
- this.mimetype = mimetype;
- }
-
- @Override
- public Optional
+ * The new buffer's position will be set to the position of the given {@code ByteBuffer}, but the mark, if defined, will
+ * be ignored.
+ *
+ * NOTE: this method intentionally converts direct buffers to non-direct, though there is no guarantee this will
+ * always be the case; if a non-direct copy is required, see {@link #toNonDirectBuffer(ByteBuffer)}.
+ *
+ * @param bb the source {@code ByteBuffer} to copy.
+ * @return a read only {@code ByteBuffer}.
+ */
+ public static ByteBuffer immutableCopyOf(ByteBuffer bb) {
+ if (bb == null) {
+ return null;
+ }
+ int sourceBufferPosition = bb.position();
+ ByteBuffer readOnlyCopy = bb.asReadOnlyBuffer();
+ readOnlyCopy.rewind();
+ ByteBuffer cloned = ByteBuffer.allocate(readOnlyCopy.capacity())
+ .put(readOnlyCopy);
+ cloned.position(sourceBufferPosition);
+ return cloned.asReadOnlyBuffer();
+ }
+
+ /**
+ * Returns an immutable copy of the remaining bytes of the given {@code ByteBuffer}.
+ *
+ * NOTE: this method intentionally converts direct buffers to non-direct, though there is no guarantee this will
+ * always be the case; if a non-direct copy is required, see {@link #toNonDirectBuffer(ByteBuffer)}.
+ *
+ * @param bb the source {@code ByteBuffer} to copy.
+ * @return a read only {@code ByteBuffer}.
+ */
+ public static ByteBuffer immutableCopyOfRemaining(ByteBuffer bb) {
+ if (bb == null) {
+ return null;
+ }
+ ByteBuffer readOnlyCopy = bb.asReadOnlyBuffer();
+ ByteBuffer cloned = ByteBuffer.allocate(readOnlyCopy.remaining())
+ .put(readOnlyCopy);
+ cloned.flip();
+ return cloned.asReadOnlyBuffer();
+ }
+
+ /**
+ * Returns a copy of the given {@code DirectByteBuffer} from its current position as a non-direct {@code HeapByteBuffer}.
+ *
+ * The new buffer's position will be set to the position of the given {@code ByteBuffer}, but the mark, if defined, will
+ * be ignored.
+ *
+ * @param bb the source {@code ByteBuffer} to copy.
+ * @return {@code ByteBuffer}.
+ */
+ public static ByteBuffer toNonDirectBuffer(ByteBuffer bb) {
+ if (bb == null) {
+ return null;
+ }
+ if (!bb.isDirect()) {
+ throw new IllegalArgumentException("Provided ByteBuffer is already non-direct");
+ }
+ int sourceBufferPosition = bb.position();
+ ByteBuffer readOnlyCopy = bb.asReadOnlyBuffer();
+ readOnlyCopy.rewind();
+ ByteBuffer cloned = ByteBuffer.allocate(bb.capacity())
+ .put(readOnlyCopy);
+ cloned.rewind();
+ cloned.position(sourceBufferPosition);
+ if (bb.isReadOnly()) {
+ return cloned.asReadOnlyBuffer();
+ }
+ return cloned;
+ }
+
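The behavior `toNonDirectBuffer` documents — a heap-backed copy that preserves the source's read position — can be demonstrated with a standalone sketch. `toNonDirect` here is a hypothetical stand-in mirroring the patch's logic, not the SDK method:

```java
import java.nio.ByteBuffer;

// Sketch of what toNonDirectBuffer above guarantees: the copy is heap-backed
// (non-direct) and keeps the source buffer's read position.
public class DirectToHeapSketch {
    static ByteBuffer toNonDirect(ByteBuffer bb) {
        int pos = bb.position();
        ByteBuffer ro = bb.asReadOnlyBuffer();
        ro.rewind();
        ByteBuffer heap = ByteBuffer.allocate(bb.capacity()).put(ro);
        heap.position(pos); // restore the source's read position on the copy
        return bb.isReadOnly() ? heap.asReadOnlyBuffer() : heap;
    }

    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(4).put(new byte[] {1, 2, 3, 4});
        direct.position(2);

        ByteBuffer heap = toNonDirect(direct);
        System.out.println(heap.isDirect());  // false
        System.out.println(heap.position()); // 2
        System.out.println(heap.get());      // 3
    }
}
```

Direct buffers live outside the Java heap, so code that needs `array()` access (or wants to avoid pinning native memory) is the usual reason for forcing a heap copy like this.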
/**
* Returns a copy of all the bytes from the given Searches for available phone numbers that you can claim to your Amazon Connect instance or traffic distribution group. If the provided Searches the hours of operation in an Amazon Connect instance, with optional filtering. Searches prompts in an Amazon Connect instance, with optional filtering. This API is in preview release for Amazon Connect and is subject to change. Searches queues in an Amazon Connect instance, with optional filtering. Searches quick connects in an Amazon Connect instance, with optional filtering. A list of conditions which would be applied together with an OR condition. A list of conditions which would be applied together with an AND condition. A leaf node condition which can be used to specify a string condition. The currently supported values for The search criteria to be used to return hours of operations. Filters to be applied to search results. A description for the prompt. The description of the prompt. A list of conditions which would be applied together with an OR condition. A list of conditions which would be applied together with an AND condition. A leaf node condition which can be used to specify a string condition. The currently supported values for The search criteria to be used to return prompts. Filters to be applied to search results. A list of conditions which would be applied together with an AND condition. A leaf node condition which can be used to specify a string condition. The currently supported values for The type of queue. A list of conditions which would be applied together with an OR condition. A list of conditions which would be applied together with an AND condition. A leaf node condition which can be used to specify a string condition. The currently supported values for The search criteria to be used to return quick connects. Filters to be applied to search results. A list of conditions which would be applied together with an AND condition. 
A leaf node condition which can be used to specify a string condition. The currently supported values for The search criteria to be used to return routing profiles. The The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance. The token for the next set of results. Use the value returned in the previous response in the next request to retrieve the next set of results. The maximum number of results to return per page. Filters to be applied to search results. The search criteria to be used to return hours of operations. Information about the hours of operations. If there are additional results, this is the token for the next set of results. The total number of hours of operations which matched your search query. The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance. The token for the next set of results. Use the value returned in the previous response in the next request to retrieve the next set of results. The maximum number of results to return per page. Filters to be applied to search results. The search criteria to be used to return prompts. Information about the prompts. If there are additional results, this is the token for the next set of results. The total number of quick connects which matched your search query. The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance. The token for the next set of results. Use the value returned in the previous response in the next request to retrieve the next set of results. The maximum number of results to return per page. Filters to be applied to search results. The search criteria to be used to return quick connects. Information about the quick connects. If there are additional results, this is the token for the next set of results. The total number of quick connects which matched your search query. 
The type of comparison to be made when evaluating the string condition. A leaf node condition which can be used to specify a string condition. The currently supported value for A leaf node condition which can be used to specify a string condition. A leaf node condition which can be used to specify a string condition. A leaf node condition which can be used to specify a string condition. The currently supported values for Specifies a cryptographic key management compliance standard used for handling CA keys. Default: FIPS_140_2_LEVEL_3_OR_HIGHER Note: ap-northeast-3 ap-southeast-3 When creating a CA in these Regions, you must provide Specifies a cryptographic key management compliance standard used for handling CA keys. Default: FIPS_140_2_LEVEL_3_OR_HIGHER Some Amazon Web Services Regions do not support the default. When creating a CA in these Regions, you must provide For information about security standard support in various Regions, see Storage and security compliance of Amazon Web Services Private CA private keys. The name of the algorithm that will be used to sign the certificate to be issued. This parameter should not be confused with the The specified signing algorithm family (RSA or ECDSA) much match the algorithm family of the CA's secret key. The name of the algorithm that will be used to sign the certificate to be issued. This parameter should not be confused with the The specified signing algorithm family (RSA or ECDSA) must match the algorithm family of the CA's secret key. Information describing the start of the validity period of the certificate. This parameter sets the “Not Before\" date for the certificate. By default, when issuing a certificate, Amazon Web Services Private CA sets the \"Not Before\" date to the issuance time minus 60 minutes. This compensates for clock inconsistencies across computer systems. The Unlike the The Information describing the start of the validity period of the certificate. 
This parameter sets the “Not Before\" date for the certificate. By default, when issuing a certificate, Amazon Web Services Private CA sets the \"Not Before\" date to the issuance time minus 60 minutes. This compensates for clock inconsistencies across computer systems. The Unlike the The Associates one or more faces with an existing UserID. Takes an array of The If successful, an array of The ACTIVE - All associations or disassociations of FaceID(s) for a UserID are complete. CREATED - A UserID has been created, but has no FaceID(s) associated with it. UPDATING - A UserID is being updated and there are current associations or disassociations of FaceID(s) taking place. Creates an Amazon Rekognition stream processor that you can use to detect and recognize faces or to detect labels in a streaming video. Amazon Rekognition Video is a consumer of live video from Amazon Kinesis Video Streams. There are two different settings for stream processors in Amazon Rekognition: detecting faces and detecting labels. If you are creating a stream processor for detecting faces, you provide as input a Kinesis video stream ( If you are creating a stream processor to detect labels, you provide as input a Kinesis video stream ( Use This operation requires permissions to perform the Creates a new User within a collection specified by Uses a Deletes the stream processor identified by Deletes the specified UserID within the collection. Faces that are associated with the UserID are disassociated from the UserID before deleting the specified UserID. If the specified Detects text in the input image and converts it into machine-readable text. Pass the input image as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, you must pass it as a reference to an image in an Amazon S3 bucket. For the AWS CLI, passing image bytes is not supported. The image must be either a .png or .jpeg formatted file. 
The A word is one or more script characters that are not separated by spaces. A line is a string of equally spaced words. A line isn't necessarily a complete sentence. For example, a driver's license number is detected as a line. A line ends when there is no aligned text after it. Also, a line ends when there is a large gap between words, relative to the length of the words. This means, depending on the gap between words, Amazon Rekognition may detect multiple lines in text aligned in the same direction. Periods don't represent the end of a line. If a sentence spans multiple lines, the To determine whether a To be detected, text must be within +/- 90 degrees orientation of the horizontal axis. For more information, see Detecting text in the Amazon Rekognition Developer Guide. Removes the association between a Returns a list of tags in an Amazon Rekognition collection, stream processor, or Custom Labels model. This operation requires permissions to perform the Returns metadata of the User such as For a given input image, first detects the largest face in the image, and then searches the specified collection for matching faces. The operation compares the features of the input face with faces in the specified collection. To search for all faces in an input image, you might first call the IndexFaces operation, and then use the face IDs returned in subsequent calls to the SearchFaces operation. You can also call the You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file. The response returns an array of faces that match, ordered by similarity score with the highest similarity first. More specifically, it is an array of metadata for each face match found. 
Along with the metadata, the response also includes a If no faces are detected in the input image, For an example, Searching for a Face Using an Image in the Amazon Rekognition Developer Guide. The To use quality filtering, you need a collection associated with version 3 of the face model or higher. To get the version of the face model associated with a collection, call DescribeCollection. This operation requires permissions to perform the Searches for UserIDs within a collection based on a Searches for UserIDs using a supplied image. It first detects the largest face in the image, and then searches a specified collection for matching UserIDs. The operation returns an array of UserIDs that match the face in the supplied image, ordered by similarity score with the highest similarity first. It also returns a bounding box for the face found in the input image. Information about faces detected in the supplied image, but not used for the search, is returned in an array of The ID of an existing collection containing the UserID. The ID for the existing UserID. An array of FaceIDs to associate with the UserID. An optional value specifying the minimum confidence in the UserID match to return. The default value is 75. Idempotent token used to identify the request to An array of AssociatedFace objects containing FaceIDs that are successfully associated with the UserID is returned. Returned if the AssociateFaces action is successful. An array of UnsuccessfulAssociation objects containing FaceIDs that are not successfully associated along with the reasons. Returned if the AssociateFaces action is successful. The status of an update made to a UserID. Reflects if the UserID has been updated for every requested change. Unique identifier assigned to the face. Provides face metadata for the faces that are associated to a specific UserID. Type that describes the face Amazon Rekognition chose to compare with the faces in the target. 
This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison. A User with the same Id already exists within the collection, or the update or deletion of the User caused an inconsistent state. ** The ID of an existing collection to which the new UserID needs to be created. ID for the UserID to be created. This ID needs to be unique within the collection. Idempotent token used to identify the request to An array of strings (face IDs) of the faces that were deleted. An array of any faces that weren't deleted. The ID of an existing collection from which the UserID needs to be deleted. ID for the UserID to be deleted. Idempotent token used to identify the request to The number of milliseconds since the Unix epoch time until the creation of the collection. The Unix epoch time is 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970. The number of UserIDs assigned to the specified colleciton. A set of parameters that allow you to filter out certain results from your returned results. The ID of an existing collection containing the UserID. ID for the existing UserID. Idempotent token used to identify the request to An array of face IDs to disassociate from the UserID. An array of DissociatedFace objects containing FaceIds that are successfully disassociated with the UserID is returned. Returned if the DisassociatedFaces action is successful. An array of UnsuccessfulDisassociation objects containing FaceIds that are not successfully associated, along with the reasons for the failure to associate. Returned if the DisassociateFaces action is successful. The status of an update made to a User. Reflects if the User has been updated for every requested change. Unique identifier assigned to the face. Provides face metadata for the faces that are disassociated from a specific UserID. 
The version of the face detect and storage model that was used when indexing the face vector. Unique identifier assigned to the user. Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned. Maximum number of faces to return. An array of user IDs to match when listing faces in a collection. An array of face IDs to match when listing faces in a collection. The ID of an existing collection. Maximum number of UsersID to return. Pagingation token to receive the next set of UsersID. List of UsersID associated with the specified collection. A pagination token to be used with the subsequent request if the response is truncated. The format of the project policy document that you supplied to A provided ID for the UserID. Unique within the collection. The status of the user matched to a provided FaceID. Contains metadata for a UserID matched with a given face. The ID of an existing collection containing the UserID. Specifies the minimum confidence in the UserID match to return. Default value is 80. Maximum number of UserIDs to return. A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren't searched for in the collection. The default value is NONE. An array of UserID objects that matched the input face, along with the confidence in the match. The returned structure will be empty if there are no matches. Returned if the SearchUsersByImageResponse action is successful. Version number of the face detection model associated with the input collection CollectionId. A list of FaceDetail objects containing the BoundingBox for the largest face in image, as well as the confidence in the bounding box, that was searched for matches. If no valid face is detected in the image the response will contain no SearchedFace object. List of UnsearchedFace objects. Contains the face details infered from the specified image but not used for search. 
Contains reasons that describe why a face wasn't used for Search. The ID of an existing collection containing the UserID, used with a UserId or FaceId. If a FaceId is provided, UserId isn’t required to be present in the Collection. ID for the existing User. ID for the existing face. Optional value that specifies the minimum confidence in the matched UserID to return. Default value of 80. Maximum number of identities to return. An array of UserMatch objects that matched the input face along with the confidence in the match. The array will be empty if there are no matches. Version number of the face detection model associated with the input CollectionId. Contains the ID of a face that was used to search for matches in a collection. Contains the ID of the UserID that was used to search for matches in a collection. Unique identifier assigned to the face. Provides face metadata such as FaceId, BoundingBox, and Confidence of the input face used for search. Contains data regarding the input face used for a search. A provided ID for the UserID. Unique within the collection. Contains metadata about a User searched for within a collection. Reasons why a face wasn't used for Search. Face details inferred from the image but not used for search. The response attribute contains reasons why a face wasn't used for Search. A unique identifier assigned to the face. A provided ID for the UserID. Unique within the collection. Match confidence with the UserID, provides information regarding whether a face association was unsuccessful because it didn't meet UserMatchThreshold. The reason why the association was unsuccessful. Contains metadata like FaceId, UserID, and Reasons for a face that was unsuccessfully associated. A unique identifier assigned to the face. A provided ID for the UserID. Unique within the collection. The reason why the deletion was unsuccessful. Contains metadata like FaceId, UserID, and Reasons for a face that was unsuccessfully deleted. 
A unique identifier assigned to the face. A provided ID for the UserID. Unique within the collection. The reason why the disassociation was unsuccessful. Contains metadata like FaceId, UserID, and Reasons for a face that was unsuccessfully disassociated. A provided ID for the User. Unique within the collection. Communicates whether the UserID has been updated with the latest set of faces to be associated with the UserID. Metadata of the user stored in a collection. Describes the UserID metadata. Confidence in the match of this UserID with the input face. Provides UserID metadata along with the confidence in the match of this UserID with the input face. This is the API Reference for Amazon Rekognition Image, Amazon Rekognition Custom Labels, Amazon Rekognition Stored Video, and Amazon Rekognition Streaming Video. It provides descriptions of actions, data types, common parameters, and common errors. Amazon Rekognition Image Amazon Rekognition Custom Labels Amazon Rekognition Video Stored Video Amazon Rekognition Video Streaming Video The A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. If you request more than 100 items, For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. 
However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed. For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide. By default, In order to minimize response latency, When designing your application, keep in mind that DynamoDB does not return items in any particular order. To help parse the response by item, include the primary key values for the items in your request in the If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Working with Tables in the Amazon DynamoDB Developer Guide. 
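The retry guidance above (re-submit unprocessed items with exponential backoff rather than immediately) can be sketched as a small loop. The `batch_get_item` callable here is a hypothetical stand-in that takes a list of keys and returns a dict with `Responses` and `UnprocessedKeys`; it is a simplification of the real BatchGetItem request/response shape, not the SDK signature:

```python
import random
import time

def batch_get_with_backoff(batch_get_item, request_keys, max_retries=5):
    """Retry unprocessed keys from a BatchGetItem-style call with
    exponential backoff, as the documentation above recommends.

    `batch_get_item` is an assumed stand-in: it accepts a list of keys
    and returns {"Responses": [...], "UnprocessedKeys": [...]}.
    Returns (retrieved_items, still_unprocessed_keys).
    """
    items = []
    pending = list(request_keys)
    for attempt in range(max_retries):
        result = batch_get_item(pending)
        items.extend(result.get("Responses", []))
        pending = result.get("UnprocessedKeys", [])
        if not pending:
            break
        # Exponential backoff with jitter before retrying the leftovers.
        time.sleep(min(0.05 * (2 ** attempt) * random.random(), 1.0))
    return items, pending
```

Immediate retries without the sleep would likely hit the same per-table throttling that produced the unprocessed keys in the first place.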
Too many operations for a given subscriber. There is no limit to the number of daily on-demand backups that can be taken. For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations. When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. There is a soft account quota of 2,500 tables. GetRecords was called with a value of more than 1000 for the limit request parameter. More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may result in request throttling. 
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a If read/write capacity mode is The maximum number of strongly consistent reads consumed per second before DynamoDB returns a If read/write capacity mode is The maximum number of writes consumed per second before DynamoDB returns a If read/write capacity mode is The maximum number of writes consumed per second before DynamoDB returns a If read/write capacity mode is Represents the provisioned throughput settings for a specified table or index. The settings can be modified using the For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide. Creates an Amazon FSx for Lustre data repository association (DRA). A data repository association is a link between a directory on the file system and an Amazon S3 bucket or prefix. You can have a maximum of 8 data repository associations on a file system. Data repository associations are supported for all file systems except for Each data repository association must have a unique Amazon FSx file system directory and a unique S3 bucket or prefix associated with it. You can configure a data repository association for automatic import only, for automatic export only, or for both. To learn more about linking a data repository to your file system, see Linking your file system to an S3 bucket. Creates an Amazon FSx for Lustre data repository association (DRA). A data repository association is a link between a directory on the file system and an Amazon S3 bucket or prefix. You can have a maximum of 8 data repository associations on a file system. Data repository associations are supported on all FSx for Lustre 2.12 and newer file systems, excluding Each data repository association must have a unique Amazon FSx file system directory and a unique S3 bucket or prefix associated with it. 
You can configure a data repository association for automatic import only, for automatic export only, or for both. To learn more about linking a data repository to your file system, see Linking your file system to an S3 bucket. Deletes a data repository association on an Amazon FSx for Lustre file system. Deleting the data repository association unlinks the file system from the Amazon S3 bucket. When deleting a data repository association, you have the option of deleting the data in the file system that corresponds to the data repository association. Data repository associations are supported for all file systems except for Deletes a data repository association on an Amazon FSx for Lustre file system. Deleting the data repository association unlinks the file system from the Amazon S3 bucket. When deleting a data repository association, you have the option of deleting the data in the file system that corresponds to the data repository association. Data repository associations are supported on all FSx for Lustre 2.12 and newer file systems, excluding Deletes an Amazon FSx for NetApp ONTAP or Amazon FSx for OpenZFS volume. Returns the description of specific Amazon FSx for Lustre or Amazon File Cache data repository associations, if one or more You can use filters to narrow the response to include just data repository associations for specific file systems (use the When retrieving all data repository associations, you can paginate the response by using the optional Returns the description of specific Amazon FSx for Lustre or Amazon File Cache data repository associations, if one or more You can use filters to narrow the response to include just data repository associations for specific file systems (use the When retrieving all data repository associations, you can paginate the response by using the optional Updates the configuration of an existing data repository association on an Amazon FSx for Lustre file system. 
Data repository associations are supported for all file systems except for Updates the configuration of an existing data repository association on an Amazon FSx for Lustre file system. Data repository associations are supported on all FSx for Lustre 2.12 and newer file systems, excluding Updates an Amazon FSx for ONTAP storage virtual machine (SVM). Updates an FSx for ONTAP storage virtual machine (SVM). Specifies the file system deployment type. Single AZ deployment types are configured for redundancy within a single Availability Zone in an Amazon Web Services Region. Valid values are the following: For more information, see: Deployment type availability and File system performance in the Amazon FSx for OpenZFS User Guide. Specifies the file system deployment type. Single AZ deployment types are configured for redundancy within a single Availability Zone in an Amazon Web Services Region. Valid values are the following: For more information, see: Deployment type availability and File system performance in the Amazon FSx for OpenZFS User Guide. Specifies the throughput of an Amazon FSx for OpenZFS file system, measured in megabytes per second (MB/s). Valid values depend on the DeploymentType you choose, as follows: For For You pay for additional throughput capacity that you provision. Specifies the throughput of an Amazon FSx for OpenZFS file system, measured in megabytes per second (MBps). Valid values depend on the DeploymentType you choose, as follows: For For You pay for additional throughput capacity that you provision. The Domain Name System (DNS) name for the file system. You can mount your file system using its DNS name. The file system's DNS name. You can mount your file system using its DNS name. The configuration for an NFS data repository linked to an Amazon File Cache resource with a data repository association. 
The configuration of a data repository association that links an Amazon FSx for Lustre file system to an Amazon S3 bucket or an Amazon File Cache resource to an Amazon S3 bucket or an NFS file system. The data repository association configuration object is returned in the response of the following operations: Data repository associations are supported on Amazon File Cache resources and all Amazon FSx for Lustre file systems excluding The configuration of a data repository association that links an Amazon FSx for Lustre file system to an Amazon S3 bucket or an Amazon File Cache resource to an Amazon S3 bucket or an NFS file system. The data repository association configuration object is returned in the response of the following operations: Data repository associations are supported on Amazon File Cache resources and all FSx for Lustre 2.12 and newer file systems, excluding Specifies whether the number of IOPS for the file system is using the system default ( Specifies whether the file system is using the The total number of SSD IOPS provisioned for the file system. The SSD IOPS (input/output operations per second) configuration for an Amazon FSx for NetApp ONTAP or Amazon FSx for OpenZFS file system. The default is 3 IOPS per GB of storage capacity, but you can provision additional IOPS per GB of storage. The configuration consists of the total number of provisioned SSD IOPS and how the amount was provisioned (by the customer or by the system). The SSD IOPS (input/output operations per second) configuration for an Amazon FSx for NetApp ONTAP or FSx for OpenZFS file system. By default, Amazon FSx automatically provisions 3 IOPS per GB of storage capacity. You can provision additional IOPS per GB of storage. The configuration consists of the total number of provisioned SSD IOPS and how it was provisioned, or the mode (by the customer or by Amazon FSx). (Multi-AZ only) The VPC route tables in which your file system's endpoints are created. 
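The default provisioning rule just described (3 SSD IOPS per GB of storage capacity, with the option to provision more) reduces to simple arithmetic. The function name below is ours for illustration, not part of any SDK:

```python
def provisioned_ssd_iops(storage_capacity_gib, iops_per_gib=3):
    """Compute total SSD IOPS from storage capacity.

    The default of 3 IOPS per GiB matches the automatic provisioning
    described above; pass a larger iops_per_gib to model a
    user-provisioned configuration.
    """
    return storage_capacity_gib * iops_per_gib
```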
You can use the Configuration for the FSx for NetApp ONTAP file system. The current percent of progress of an asynchronous task. Displays the current percent of progress of an asynchronous task. A list of up to three IP addresses of DNS servers or domain controllers in the self-managed AD directory. The configuration that Amazon FSx uses to join an FSx for Windows File Server file system or an ONTAP storage virtual machine (SVM) to a self-managed (including on-premises) Microsoft Active Directory (AD) directory. For more information, see Using Amazon FSx with your self-managed Microsoft Active Directory or Managing SVMs. The configuration that Amazon FSx uses to join an FSx for Windows File Server file system or an FSx for ONTAP storage virtual machine (SVM) to a self-managed (including on-premises) Microsoft Active Directory (AD) directory. For more information, see Using Amazon FSx for Windows with your self-managed Microsoft Active Directory or Managing FSx for ONTAP SVMs. The user name for the service account on your self-managed AD domain that Amazon FSx will use to join to your AD domain. This account must have the permission to join computers to the domain in the organizational unit provided in Specifies the updated user name for the service account on your self-managed AD domain. Amazon FSx uses this account to join to your self-managed AD domain. This account must have the permissions required to join computers to the domain in the organizational unit provided in The password for the service account on your self-managed AD domain that Amazon FSx will use to join to your AD domain. Specifies the updated password for the service account on your self-managed AD domain. Amazon FSx uses this account to join to your self-managed AD domain. A list of up to three IP addresses of DNS servers or domain controllers in the self-managed AD directory. A list of up to three DNS server or domain controller IP addresses in your self-managed AD domain. 
Specifies an updated fully qualified domain name of your self-managed AD configuration. Specifies an updated fully qualified distinguished name of the organizational unit within your self-managed AD. Specifies the updated name of the self-managed AD domain group whose members are granted administrative privileges for the Amazon FSx resource. The configuration that Amazon FSx uses to join the Windows File Server instance to a self-managed Microsoft Active Directory (AD) directory. Specifies changes you are making to the self-managed Microsoft Active Directory (AD) configuration to which an FSx for Windows File Server file system or an FSx for ONTAP SVM is joined. The storage capacity for your Amazon FSx file system, in gibibytes. Specifies the file system's storage capacity, in gibibytes (GiB). The storage type for your Amazon FSx file system. Specifies the file system's storage type. The NetBIOS name of the Active Directory computer object that is joined to your SVM. The NetBIOS name of the AD computer object to which the SVM is joined. Describes the configuration of the Microsoft Active Directory (AD) directory to which the Amazon FSx for ONTAP storage virtual machine (SVM) is joined. Please note that account credentials are not returned in the response payload. Describes the Microsoft Active Directory (AD) directory configuration to which the FSx for ONTAP storage virtual machine (SVM) is joined. Note that account credentials are not returned in the response payload. The ONTAP administrative password for the Update the password for the The SSD IOPS (input/output operations per second) configuration for an Amazon FSx for NetApp ONTAP file system. The default is 3 IOPS per GB of storage capacity, but you can provision additional IOPS per GB of storage. The configuration consists of an IOPS mode ( The SSD IOPS (input/output operations per second) configuration for an Amazon FSx for NetApp ONTAP file system. 
The default is 3 IOPS per GB of storage capacity, but you can provision additional IOPS per GB of storage. The configuration consists of an IOPS mode ( Specifies the throughput of an FSx for NetApp ONTAP file system, measured in megabytes per second (MBps). Valid values are 128, 256, 512, 1024, 2048, and 4096 MBps. Enter a new value to change the amount of throughput capacity for the file system. Throughput capacity is measured in megabytes per second (MBps). Valid values are 128, 256, 512, 1024, 2048, and 4096 MBps. For more information, see Managing throughput capacity in the FSx for ONTAP User Guide. Use this parameter to increase the storage capacity of an FSx for Windows File Server, FSx for Lustre, FSx for OpenZFS, or FSx for ONTAP file system. Specifies the storage capacity target value, in GiB, to increase the storage capacity for the file system that you're updating. You can't make a storage capacity increase request if there is an existing storage capacity increase request in progress. For Lustre file systems, the storage capacity target value can be the following: For For For For more information, see Managing storage and throughput capacity in the FSx for Lustre User Guide. For FSx for OpenZFS file systems, the storage capacity target value must be at least 10 percent greater than the current storage capacity value. For more information, see Managing storage capacity in the FSx for OpenZFS User Guide. For Windows file systems, the storage capacity target value must be at least 10 percent greater than the current storage capacity value. To increase storage capacity, the file system must have at least 16 MBps of throughput capacity. For more information, see Managing storage capacity in the Amazon FSx for Windows File Server User Guide. For ONTAP file systems, the storage capacity target value must be at least 10 percent greater than the current storage capacity value. 
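The 10-percent rule stated above for Windows, OpenZFS, and ONTAP file systems (the storage capacity target must be at least 10 percent greater than the current value) is easy to check before issuing an update request. This helper is a hypothetical client-side convenience, not an SDK call:

```python
def is_valid_capacity_increase(current_gib: int, target_gib: int) -> bool:
    """Return True when the target storage capacity is at least
    10 percent greater than the current value, per the rule above.

    Integer arithmetic (target * 10 >= current * 11) avoids float
    rounding errors exactly at the 10-percent boundary.
    """
    return target_gib * 10 >= current_gib * 11
```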
For more information, see Managing storage capacity and provisioned IOPS in the Amazon FSx for NetApp ONTAP User Guide. Use this parameter to increase the storage capacity of an FSx for Windows File Server, FSx for Lustre, FSx for OpenZFS, or FSx for ONTAP file system. Specifies the storage capacity target value, in GiB, to increase the storage capacity for the file system that you're updating. You can't make a storage capacity increase request if there is an existing storage capacity increase request in progress. For Lustre file systems, the storage capacity target value can be the following: For For For For more information, see Managing storage and throughput capacity in the FSx for Lustre User Guide. For FSx for OpenZFS file systems, the storage capacity target value must be at least 10 percent greater than the current storage capacity value. For more information, see Managing storage capacity in the FSx for OpenZFS User Guide. For Windows file systems, the storage capacity target value must be at least 10 percent greater than the current storage capacity value. To increase storage capacity, the file system must have at least 16 MBps of throughput capacity. For more information, see Managing storage capacity in the Amazon FSx for Windows File Server User Guide. For ONTAP file systems, the storage capacity target value must be at least 10 percent greater than the current storage capacity value. For more information, see Managing storage capacity and provisioned IOPS in the Amazon FSx for NetApp ONTAP User Guide. The configuration updates for an Amazon FSx for OpenZFS file system. The configuration updates for an FSx for OpenZFS file system. The request object for the Updates the Microsoft Active Directory (AD) configuration for an SVM that is joined to an AD. Specifies updates to an SVM's Microsoft Active Directory (AD) configuration. Enter a new SvmAdminPassword if you are updating it. Specifies a new SvmAdminPassword. 
Specifies an updated NetBIOS name of the AD computer object. Updates the Microsoft Active Directory (AD) configuration of an SVM joined to an AD. Please note that account credentials are not returned in the response payload. Specifies updates to an FSx for ONTAP storage virtual machine's (SVM) Microsoft Active Directory (AD) configuration. Note that account credentials are not returned in the response payload. The content type of the data from the input source. The following are the allowed content types for different problems: ImageClassification: TextClassification: The content type of the data from the input source. The following are the allowed content types for different problems: ImageClassification: TextClassification: The name of the pipeline to describe. The name or Amazon Resource Name (ARN) of the pipeline to describe. The number of instances of the type specified by The number of instances of the type specified by The instance type used to run hyperparameter optimization tuning jobs. See descriptions of instance types for more information. The instance type used to run hyperparameter optimization tuning jobs. See descriptions of instance types for more information. The maximum number of AppImageConfigs to return in the response. The default value is 10. The total number of items to return in the response. If the total number of items available is more than the value specified, a Returns a list up to a specified limit. The total number of items to return in the response. If the total number of items available is more than the value specified, a Returns a list up to a specified limit. The total number of items to return in the response. If the total number of items available is more than the value specified, a The name of the pipeline. The name or Amazon Resource Name (ARN) of the pipeline. Returns a list up to a specified limit. The total number of items to return in the response. 
If the total number of items available is more than the value specified, a The maximum number of Studio Lifecycle Configurations to return in the response. The default value is 10. The total number of items to return in the response. If the total number of items available is more than the value specified, a A token for getting the next set of actions, if there are any. If the previous response was truncated, you will receive this token. Use it in your next request to receive the next set of results. Returns a list up to a specified limit. The total number of items to return in the response. If the total number of items available is more than the value specified, a The name of the pipeline. The name or Amazon Resource Name (ARN) of the pipeline. Creates a new form for an Amplify app. Creates a new form for an Amplify app. Exports theme configurations to code that is ready to integrate into an Amplify app. Returns an existing code generation job. Returns an existing theme for an Amplify app. Retrieves a list of code generation jobs for a specified Amplify app and backend environment. Refreshes a previously issued access token that might have expired. Starts a code generation job for a specified Amplify app and backend environment. Represents the event action configuration for an element of a Specifies whether a code generation job supports data relationships. Specifies whether a code generation job supports non-models. Describes the feature flags that you can specify for a code generation job. The list of enum values in the generic data schema. Describes the enums in a generic data schema. The data type for the generic data field. The value of the data type for the generic data field. Specifies whether the generic data field is required. Specifies whether the generic data field is read-only. Specifies whether the generic data field is an array. The relationship of the generic data schema. Describes a field in a generic data schema. 
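The truncation behavior described above (each List* call returns at most the specified number of items plus a token to use in the next request) is typically drained with a simple loop. The `list_page` callable and its `Items`/`NextToken` response shape are assumptions standing in for any of the paginated service calls mentioned here:

```python
def list_all(list_page, page_size=10):
    """Drain a truncated List* API of the kind described above.

    `list_page` is a hypothetical stand-in for the service call: each
    invocation returns {"Items": [...]} plus a "NextToken" key when
    more results remain. Returns every item across all pages.
    """
    items, token = [], None
    while True:
        page = list_page(MaxResults=page_size, NextToken=token)
        items.extend(page["Items"])
        token = page.get("NextToken")
        if not token:
            return items
```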
The fields in the generic data model. Specifies whether the generic data model is a join table. The primary keys of the generic data model. Describes a model in a generic data schema. The fields in a generic data schema non-model. Describes a non-model in a generic data schema. The data relationship type. The name of the related model in the data relationship. The related model fields in the data relationship. Specifies whether the relationship can unlink the associated model. The name of the related join field in the data relationship. The name of the related join table in the data relationship. The value of the The associated fields of the data relationship. Specifies whether the Describes the relationship between generic data models. The unique ID for the code generation job. The ID of the Amplify app associated with the code generation job. The name of the backend environment associated with the code generation job. Specifies whether to autogenerate forms in the code generation job. The status of the code generation job. The customized status message for the code generation job. The One or more key-value pairs to use when tagging the code generation job. The time that the code generation job was created. The time that the code generation job was modified. Describes the configuration for a code generation job that is associated with an Amplify app. The URL to use to access the asset. Describes an asset for a code generation job. The type of the data source for the schema. Currently, the only valid value is an Amplify The name of a The name of a The name of a Describes the data schema for a code generation job. The name of the Describes the configuration information for rendering the UI component associated with the code generation job. The unique ID of the Amplify app associated with the code generation job. The name of the backend environment associated with the code generation job. The unique ID for the code generation job summary. 
The time that the code generation job summary was created. The time that the code generation job summary was modified. A summary of the basic information about the code generation job. The unique ID of the Amplify app associated with the code generation job. The name of the backend environment that is a part of the Amplify app associated with the code generation job. The unique ID of the code generation job. The configuration settings for the code generation job. The unique ID for the Amplify app. The name of the backend environment that is a part of the Amplify app. The token to request the next page of results. The maximum number of jobs to retrieve. The list of code generation jobs for the Amplify app. The pagination token that's included if more results are available. The JavaScript module type. The ECMAScript specification to use. The file type to use for a JavaScript project. Specifies whether the code generation job should render type declaration files. Specifies whether the code generation job should render inline source maps. Describes the code generation job configuration for a React project. The code generation configuration for the codegen job. The data schema to use for a code generation job. Specifies whether to autogenerate forms in the code generation job. The feature flags for a code generation job. One or more key-value pairs to use when tagging the code generation job data. The code generation job resource configuration. The unique ID for the Amplify app. The name of the backend environment that is a part of the Amplify app. The idempotency token used to ensure that the code generation job request completes only once. The code generation job resource configuration. The code generation job for a UI component that is associated with an Amplify app. The request was denied due to request throttling. The endpoint of the remote domain. The Endpoint attribute cannot be modified. The endpoint of the remote domain. 
Applicable for VPC_ENDPOINT connection mode. The connection properties for cross cluster search. The connection properties of an outbound connection. The connection mode. The Container for the parameters to the Status of SkipUnavailable param for outbound connection. Cross cluster search specific connection properties. The domain endpoint to which index and search requests are submitted. For example, Status of SkipUnavailable param for outbound connection. ENABLED - The SkipUnavailable param is enabled for the connection. DISABLED - The SkipUnavailable param is disabled for the connection. Too many operations for a given subscriber. There is no limit to the number of daily on-demand backups that can be taken. For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include When you are creating a table with one or more secondary indexes, you can have up to 250 such requests running at a time. However, if the table or index specifications are complex, then DynamoDB might temporarily reduce the number of concurrent operations. When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. There is a soft account quota of 2,500 tables. GetRecords was called with a value of more than 1000 for the limit request parameter. More than 2 processes are reading from the same streams shard at the same time. 
Exceeding this limit may result in request throttling. The AWS service from which the stream record originated. For DynamoDB Streams, this is The Amazon Web Services service from which the stream record originated. For DynamoDB Streams, this is A timestamp, in ISO 8601 format, for this stream. Note that the AWS customer ID. the table name the A timestamp, in ISO 8601 format, for this stream. Note that the Amazon Web Services customer ID. the table name the Represents all of the data describing a particular stream. A timestamp, in ISO 8601 format, for this stream. Note that the AWS customer ID. the table name the A timestamp, in ISO 8601 format, for this stream. Note that the Amazon Web Services customer ID. the table name the The approximate date and time when the stream record was created, in UNIX epoch time format. The approximate date and time when the stream record was created, in UNIX epoch time format and rounded down to the closest second. Returns metadata about a query, including query run time in milliseconds, number of events scanned and matched, and query status. You must specify an ARN for Returns metadata about a query, including query run time in milliseconds, number of events scanned and matched, and query status. If the query results were delivered to an S3 bucket, the response also provides the S3 URI and the delivery status. You must specify either a Gets event data results of a query. You must specify the Gets event data results of a query. You must specify the Starts a CloudTrail Lake query. The required Starts a CloudTrail Lake query. Use the Updates an event data store. The required For event data stores for CloudTrail events, For event data stores for Config configuration items, Audit Manager evidence, or non-Amazon Web Services events, Updates an event data store. 
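The throttling behavior described above (requests rejected once the account's concurrent-operation quotas are exceeded) is normally absorbed client-side with retries. A minimal, generic sketch of exponential backoff with full jitter, using a plain RuntimeError as a stand-in for the service's throttling error (the AWS SDKs ship their own retry logic, so this is illustrative only):

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=0.05, sleep=time.sleep):
    """Retry `call` with exponential backoff and full jitter.

    Generic sketch of the client-side handling the quotas above imply;
    RuntimeError stands in for a throttling error such as a
    limit-exceeded exception. Not the SDK's built-in retry logic.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the throttling error
            # full jitter: sleep a random amount in [0, base * 2^attempt]
            sleep(random.uniform(0, base_delay * (2 ** attempt)))
```

The `sleep` parameter is injected only so the loop can be exercised instantly in tests.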
The required For event data stores for CloudTrail events, For event data stores for Config configuration items, Audit Manager evidence, or non-Amazon Web Services events, A field in a CloudTrail event record on which to filter events to be logged. For event data stores for Config configuration items, Audit Manager evidence, or non-Amazon Web Services events, the field is used only for selecting events as filtering is not supported. For CloudTrail event records, supported fields include For event data stores for Config configuration items, Audit Manager evidence, or non-Amazon Web Services events, the only supported field is For CloudTrail event records, the value must be For Config configuration items, the value must be For Audit Manager evidence, the value must be For non-Amazon Web Services events, the value must be You can have only one The trailing slash is intentional; do not exclude it. Replace the text between less than and greater than symbols (<>) with resource-specific information. When resources.type equals When resources.type equals When resources.type equals When resources.type equals When When When When When When When When When When When When A field in a CloudTrail event record on which to filter events to be logged. For event data stores for Config configuration items, Audit Manager evidence, or non-Amazon Web Services events, the field is used only for selecting events as filtering is not supported. For CloudTrail event records, supported fields include For event data stores for Config configuration items, Audit Manager evidence, or non-Amazon Web Services events, the only supported field is For CloudTrail event records, the value must be For Config configuration items, the value must be For Audit Manager evidence, the value must be For non-Amazon Web Services events, the value must be You can have only one The trailing slash is intentional; do not exclude it. 
Replace the text between less than and greater than symbols (<>) with resource-specific information. When resources.type equals When resources.type equals When resources.type equals When resources.type equals When resources.type equals When When When When When When When When When When When When When This field is no longer in use. Use SnsTopicARN. This field is no longer in use. Use The resource type in which you want to log data events. You can specify the following basic event selector resource types: The following resource types are also available through advanced event selectors. Basic event selector resource types are valid in advanced event selectors, but advanced event selector resource types are not valid in basic event selectors. For more information, see AdvancedFieldSelector$Field. The resource type in which you want to log data events. You can specify the following basic event selector resource types: The following resource types are also available through advanced event selectors. Basic event selector resource types are valid in advanced event selectors, but advanced event selector resource types are not valid in basic event selectors. For more information, see AdvancedFieldSelector. The query ID. The alias that identifies a query template. A SQL string of criteria about events that you want to collect in an event data store. The query ID does not exist or does not map to a query. The URI for the S3 bucket where CloudTrail delivers the query results. The alias that identifies a query template. The query parameters for the specified This field is no longer in use. Use SnsTopicARN. This field is no longer in use. Use This field is no longer in use. Use UpdateTrailResponse$SnsTopicARN. This field is no longer in use. Use Returns the objects or data listed below if successful. Otherwise, returns an error. This is the CloudTrail API Reference. It provides descriptions of actions, data types, common parameters, and common errors for CloudTrail. 
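The query lifecycle described above (start a CloudTrail Lake query, check its status, then fetch results) can be sketched as a small polling loop. The operation and parameter names below mirror the CloudTrail API operations (StartQuery, DescribeQuery, GetQueryResults), but the exact SDK signatures are an assumption, and the client is injected so the flow runs without AWS credentials:

```python
import time

def run_lake_query(client, query_statement, poll=time.sleep, max_polls=60):
    """Sketch of the Lake query flow: start the query, poll its status
    until it reaches a terminal state, then page through the results.
    `client` is any object exposing start_query/describe_query/
    get_query_results; names mirror the CloudTrail operations but the
    real SDK's signatures should be checked against its documentation.
    """
    query_id = client.start_query(QueryStatement=query_statement)["QueryId"]
    status = "QUEUED"
    for _ in range(max_polls):
        status = client.describe_query(QueryId=query_id)["QueryStatus"]
        if status in ("FINISHED", "FAILED", "CANCELLED", "TIMED_OUT"):
            break
        poll(1)  # wait between status checks
    rows, token = [], None
    while True:
        kwargs = {"QueryId": query_id}
        if token:
            kwargs["NextToken"] = token  # request the next page
        page = client.get_query_results(**kwargs)
        rows.extend(page.get("QueryResultRows", []))
        token = page.get("NextToken")
        if not token:
            return query_id, status, rows
```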
CloudTrail is a web service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the Amazon Web Services API call, the source IP address, the request parameters, and the response elements returned by the service. As an alternative to the API, you can use one of the Amazon Web Services SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide programmatic access to CloudTrail. For example, the SDKs handle cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools to Build on Amazon Web Services. See the CloudTrail User Guide for information about the data that is included with each Amazon Web Services API call listed in the log files. Actions available for CloudTrail trails The following actions are available for CloudTrail trails. Actions available for CloudTrail event data stores The following actions are available for CloudTrail event data stores. The following additional actions are available for imports. Actions available for CloudTrail channels The following actions are available for CloudTrail channels. Actions available for managing delegated administrators The following actions are available for adding or removing a delegated administrator to manage an Organizations organization’s CloudTrail resources. 
The path of the account creation endpoint for your application. This is the page on your website that accepts the completed registration form for a new user. This page must accept For example, for the URL The path of the account registration endpoint for your application. This is the page on your website that presents the registration form to new users. This page must accept For example, for the URL The criteria for inspecting account creation requests, used by the ACFP rule group to validate and track account creation attempts. The criteria for inspecting responses to account creation requests, used by the ACFP rule group to track account creation success rates. Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. The ACFP rule group evaluates the responses that your protected resources send back to client account creation attempts, keeping count of successful and failed attempts from each IP address and client session. 
Using this information, the rule group labels and mitigates requests from client sessions and IP addresses that have had too many successful account creation attempts in a short amount of time. Allow the use of regular expressions in the registration page path and the account creation path. Details for your use of the account creation fraud prevention managed rule group, The criteria for inspecting responses to login requests, used by the ATP rule group to track login failure rates. The ATP rule group evaluates the responses that your protected resources send back to client login attempts, keeping count of successful and failed attempts from each IP address and client session. Using this information, the rule group labels and mitigates requests from client sessions and IP addresses that submit too many failed login attempts in a short amount of time. Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. The criteria for inspecting responses to login requests, used by the ATP rule group to track login failure rates. Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. The ATP rule group evaluates the responses that your protected resources send back to client login attempts, keeping count of successful and failed attempts for each IP address and client session. Using this information, the rule group labels and mitigates requests from client sessions and IP addresses that have had too many failed login attempts in a short amount of time. Allow the use of regular expressions in the login page path. Details for your use of the account takeover prevention managed rule group, The name of a single primary address field. How you specify the address fields depends on the request inspection payload type. For JSON payloads, specify the field identifiers in JSON pointer syntax. 
For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. For example, for an HTML form with input elements named The name of a field in the request payload that contains part or all of your customer's primary physical address. This data type is used in the The name of the email field. How you specify this depends on the request inspection payload type. For JSON payloads, specify the field name in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. For example, for an HTML form with the input element named The name of the field in the request payload that contains your customer's email. This data type is used in the Inspect a string containing the list of the request's header names, ordered as they appear in the web request that WAF receives for inspection. WAF generates the string and then uses that as the field to match component in its inspection. WAF separates the header names in the string using colons and no added spaces. Matches against the header order string are case insensitive. The part of the web request that you want WAF to inspect. Include the single Example JSON for a Example JSON for a The URL to use in SDK integrations with Amazon Web Services managed rule groups. 
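The field identifiers discussed above use JSON Pointer syntax (IETF RFC 6901) to address a value inside a decoded JSON request payload. A minimal resolver sketch, handling the two escape sequences but omitting array indices for brevity:

```python
def resolve_json_pointer(payload, pointer):
    """Resolve an RFC 6901 JSON Pointer like "/form/username" against a
    decoded JSON payload. Per the RFC, "~1" unescapes to "/" and "~0"
    to "~", in that order. Array indices are omitted in this sketch.
    """
    value = payload
    for token in pointer.lstrip("/").split("/"):
        token = token.replace("~1", "/").replace("~0", "~")
        value = value[token]
    return value
```

For example, against the payload `{"form": {"username": "..."}}`, the identifier `/form/username` selects the username value.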
For example, you can use the integration SDKs with the account takeover prevention managed rule group What WAF should do if the headers of the request are more numerous or larger than WAF can inspect. WAF does not support inspecting the entire contents of request headers when they exceed 8 KB (8192 bytes) or 200 total headers. The underlying host service forwards a maximum of 200 headers and at most 8 KB of header contents to WAF. The options for oversize handling are the following: Inspect a string containing the list of the request's header names, ordered as they appear in the web request that WAF receives for inspection. WAF generates the string and then uses that as the field to match component in its inspection. WAF separates the header names in the string using colons and no added spaces. Matches against the header order string are case insensitive. The parts of the request that you want to keep out of the logs. 
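The header-order field described above is a single string WAF derives from the request: the header names in arrival order, joined with colons and no added spaces, matched case-insensitively. A sketch of how such a string could be built (the list-of-tuples input shape is this sketch's assumption, not a WAF API):

```python
def header_order_string(headers):
    """Build a header-order match string as described above: header
    names in arrival order, colon-separated, no added spaces. Matching
    is case insensitive, so names are lowercased up front.
    """
    return ":".join(name.lower() for name, _value in headers)
```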
For example, if you redact the Redaction applies only to the component that's specified in the rule's You can specify only the following fields for redaction: Instead of this setting, provide your configuration under Instead of this setting, provide your configuration under the request inspection configuration for Instead of this setting, provide your configuration under Instead of this setting, provide your configuration under the request inspection configuration for Instead of this setting, provide your configuration under Instead of this setting, provide your configuration under the request inspection configuration for Additional configuration for using the account takeover prevention (ATP) managed rule group, This configuration replaces the individual configuration fields in For information about using the ATP managed rule group, see WAF Fraud Control account takeover prevention (ATP) rule group and WAF Fraud Control account takeover prevention (ATP) in the WAF Developer Guide. Additional configuration for using the account creation fraud prevention (ACFP) managed rule group, For information about using the ACFP managed rule group, see WAF Fraud Control account creation fraud prevention (ACFP) rule group and WAF Fraud Control account creation fraud prevention (ACFP) in the WAF Developer Guide. Additional information that's used by a managed rule group. Many managed rule groups don't require this. Use the Use the For example specifications, see the examples section of CreateWebACL. Additional information that's used by a managed rule group. Many managed rule groups don't require this. The rule groups used for intelligent threat mitigation require additional configuration: Use the Use the Use the For example specifications, see the examples section of CreateWebACL. Additional information that's used by a managed rule group. Many managed rule groups don't require this. Use the Use the Additional information that's used by a managed rule group. 
Many managed rule groups don't require this. The rule groups used for intelligent threat mitigation require additional configuration: Use the Use the Use the Action settings to use in the place of the rule actions that are configured inside the rule group. You specify one override for each rule whose action you want to change. You can use overrides for testing, for example you can override all of rule actions to A rule statement used to run the rules that are defined in a managed rule group. To use this, provide the vendor name and the name of the rule group in this statement. You can retrieve the required names by calling ListAvailableManagedRuleGroups. You cannot nest a You are charged additional fees when you use the WAF Bot Control managed rule group A rule statement used to run the rules that are defined in a managed rule group. To use this, provide the vendor name and the name of the rule group in this statement. You can retrieve the required names by calling ListAvailableManagedRuleGroups. You cannot nest a You are charged additional fees when you use the WAF Bot Control managed rule group The name of the password field. For example The name of the password field. How you specify this depends on the request inspection payload type. For JSON payloads, specify the field name in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. For example, for an HTML form with the input element named Details about your login page password field for request inspection, used in the The name of the field in the request payload that contains your customer's password. This data type is used in the The name of a single primary phone number field. How you specify the phone number fields depends on the request inspection payload type. 
For JSON payloads, specify the field identifiers in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. For example, for an HTML form with input elements named The name of a field in the request payload that contains part or all of your customer's primary phone number. This data type is used in the Details about your login page username field. How you specify this depends on the payload type. For JSON payloads, specify the field name in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. For example, for an HTML form with input elements named The name of the field in the request payload that contains your customer's username. How you specify this depends on the request inspection payload type. For JSON payloads, specify the field name in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. For example, for an HTML form with the input element named Details about your login page password field. How you specify this depends on the payload type. For JSON payloads, specify the field name in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. 
For example, for an HTML form with input elements named The name of the field in the request payload that contains your customer's password. How you specify this depends on the request inspection payload type. For JSON payloads, specify the field name in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. For example, for an HTML form with the input element named The criteria for inspecting login requests, used by the ATP rule group to validate credentials usage. This is part of the In these settings, you specify how your application accepts login attempts by providing the request payload type and the names of the fields within the request body where the username and password are provided. The payload type for your account creation endpoint, either JSON or form encoded. The name of the field in the request payload that contains your customer's username. How you specify this depends on the request inspection payload type. For JSON payloads, specify the field name in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. For example, for an HTML form with the input element named The name of the field in the request payload that contains your customer's password. How you specify this depends on the request inspection payload type. For JSON payloads, specify the field name in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. 
For example, for an HTML form with the input element named The name of the field in the request payload that contains your customer's email. How you specify this depends on the request inspection payload type. For JSON payloads, specify the field name in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. For example, for an HTML form with the input element named The names of the fields in the request payload that contain your customer's primary phone number. Order the phone number fields in the array exactly as they are ordered in the request payload. How you specify the phone number fields depends on the request inspection payload type. For JSON payloads, specify the field identifiers in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. For example, for an HTML form with input elements named The names of the fields in the request payload that contain your customer's primary physical address. Order the address fields in the array exactly as they are ordered in the request payload. How you specify the address fields depends on the request inspection payload type. For JSON payloads, specify the field identifiers in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. 
For example, for an HTML form with input elements named The criteria for inspecting account creation requests, used by the ACFP rule group to validate and track account creation attempts. This is part of the In these settings, you specify how your application accepts account creation attempts by providing the request payload type and the names of the fields within the request body where the username, password, email, and primary address and phone number fields are provided. Configures inspection of the response status code. Configures inspection of the response status code for success and failure indicators. Configures inspection of the response header. Configures inspection of the response header for success and failure indicators. Configures inspection of the response body. WAF can inspect the first 65,536 bytes (64 KB) of the response body. Configures inspection of the response body for success and failure indicators. WAF can inspect the first 65,536 bytes (64 KB) of the response body. Configures inspection of the response JSON. WAF can inspect the first 65,536 bytes (64 KB) of the response JSON. Configures inspection of the response JSON for success and failure indicators. WAF can inspect the first 65,536 bytes (64 KB) of the response JSON. The criteria for inspecting responses to login requests, used by the ATP rule group to track login failure rates. The ATP rule group evaluates the responses that your protected resources send back to client login attempts, keeping count of successful and failed attempts from each IP address and client session. Using this information, the rule group labels and mitigates requests from client sessions and IP addresses that submit too many failed login attempts in a short amount of time. Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. This is part of the Enable login response inspection by configuring exactly one component of the response to inspect. 
You can't configure more than one. If you don't configure any of the response inspection options, response inspection is disabled. The criteria for inspecting responses to login requests and account creation requests, used by the ATP and ACFP rule groups to track login and account creation success and failure rates. Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. The rule groups evaluate the responses that your protected resources send back to client login and account creation attempts, keeping count of successful and failed attempts from each IP address and client session. Using this information, the rule groups label and mitigate requests from client sessions and IP addresses with too much suspicious activity in a short amount of time. This is part of the Enable response inspection by configuring exactly one component of the response to inspect, for example, Strings in the body of the response that indicate a successful login attempt. To be counted as a successful login, the string can be anywhere in the body and must be an exact match, including case. Each string must be unique among the success and failure strings. JSON example: Strings in the body of the response that indicate a successful login or account creation attempt. To be counted as a success, the string can be anywhere in the body and must be an exact match, including case. Each string must be unique among the success and failure strings. JSON examples: Strings in the body of the response that indicate a failed login attempt. To be counted as a failed login, the string can be anywhere in the body and must be an exact match, including case. Each string must be unique among the success and failure strings. JSON example: Strings in the body of the response that indicate a failed login or account creation attempt. To be counted as a failure, the string can be anywhere in the body and must be an exact match, including case. 
Each string must be unique among the success and failure strings. JSON example: Configures inspection of the response body. WAF can inspect the first 65,536 bytes (64 KB) of the response body. This is part of the Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. Configures inspection of the response body. WAF can inspect the first 65,536 bytes (64 KB) of the response body. This is part of the Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. The name of the header to match against. The name must be an exact match, including case. JSON example: The name of the header to match against. The name must be an exact match, including case. JSON example: Values in the response header with the specified name that indicate a successful login attempt. To be counted as a successful login, the value must be an exact match, including case. Each value must be unique among the success and failure values. JSON example: Values in the response header with the specified name that indicate a successful login or account creation attempt. To be counted as a success, the value must be an exact match, including case. Each value must be unique among the success and failure values. JSON examples: Values in the response header with the specified name that indicate a failed login attempt. To be counted as a failed login, the value must be an exact match, including case. Each value must be unique among the success and failure values. JSON example: Values in the response header with the specified name that indicate a failed login or account creation attempt. To be counted as a failure, the value must be an exact match, including case. Each value must be unique among the success and failure values. JSON examples: Configures inspection of the response header. This is part of the Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. 
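The body-inspection rules above come down to case-sensitive, exact substring matches anywhere in the first 64 KB of the response body. A sketch of that classification; checking success strings before failure strings is this sketch's assumption, since the text does not state a precedence between simultaneous matches:

```python
def classify_body(body, success_strings, failure_strings, limit=65536):
    """Classify a response body per the rules above: a configured
    string may appear anywhere in the inspected window (first 64 KB),
    must match exactly including case, and success/failure string sets
    are disjoint. Returns "success", "failure", or None.
    """
    window = body[:limit]  # WAF inspects at most the first 65,536 bytes
    if any(s in window for s in success_strings):
        return "success"
    if any(s in window for s in failure_strings):
        return "failure"
    return None
```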
Configures inspection of the response header. This is part of the Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. The identifier for the value to match against in the JSON. The identifier must be an exact match, including case. JSON example: The identifier for the value to match against in the JSON. The identifier must be an exact match, including case. JSON examples: Values for the specified identifier in the response JSON that indicate a successful login attempt. To be counted as a successful login, the value must be an exact match, including case. Each value must be unique among the success and failure values. JSON example: Values for the specified identifier in the response JSON that indicate a successful login or account creation attempt. To be counted as a success, the value must be an exact match, including case. Each value must be unique among the success and failure values. JSON example: Values for the specified identifier in the response JSON that indicate a failed login attempt. To be counted as a failed login, the value must be an exact match, including case. Each value must be unique among the success and failure values. JSON example: Values for the specified identifier in the response JSON that indicate a failed login or account creation attempt. To be counted as a failure, the value must be an exact match, including case. Each value must be unique among the success and failure values. JSON example: Configures inspection of the response JSON. WAF can inspect the first 65,536 bytes (64 KB) of the response JSON. This is part of the Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. Configures inspection of the response JSON. WAF can inspect the first 65,536 bytes (64 KB) of the response JSON. This is part of the Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. 
Status codes in the response that indicate a successful login attempt. To be counted as a successful login, the response status code must match one of these. Each code must be unique among the success and failure status codes. JSON example: Status codes in the response that indicate a successful login or account creation attempt. To be counted as a success, the response status code must match one of these. Each code must be unique among the success and failure status codes. JSON example: Status codes in the response that indicate a failed login attempt. To be counted as a failed login, the response status code must match one of these. Each code must be unique among the success and failure status codes. JSON example: Status codes in the response that indicate a failed login or account creation attempt. To be counted as a failure, the response status code must match one of these. Each code must be unique among the success and failure status codes. JSON example: Configures inspection of the response status code. This is part of the Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. Configures inspection of the response status code. This is part of the Response inspection is available only in web ACLs that protect Amazon CloudFront distributions. A rule statement used to run the rules that are defined in a managed rule group. To use this, provide the vendor name and the name of the rule group in this statement. You can retrieve the required names by calling ListAvailableManagedRuleGroups. You cannot nest a You are charged additional fees when you use the WAF Bot Control managed rule group A rule statement used to run the rules that are defined in a managed rule group. To use this, provide the vendor name and the name of the rule group in this statement. You can retrieve the required names by calling ListAvailableManagedRuleGroups. 
You cannot nest a You are charged additional fees when you use the WAF Bot Control managed rule group The name of the username field. For example The name of the username field. How you specify this depends on the request inspection payload type. For JSON payloads, specify the field name in JSON pointer syntax. For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation JavaScript Object Notation (JSON) Pointer. For example, for the JSON payload For form encoded payload types, use the HTML form names. For example, for an HTML form with the input element named Details about your login page username field for request inspection, used in the The name of the field in the request payload that contains your customer's username. This data type is used in the Associate a lens to a workload. Up to 10 lenses can be associated with a workload in a single API operation. A maximum of 20 lenses can be associated with a workload. Disclaimer By accessing and/or applying custom lenses created by another Amazon Web Services user or account, you acknowledge that custom lenses created by other users and shared with you are Third Party Content as defined in the Amazon Web Services Customer Agreement. Associate a profile with a workload. Create a milestone for an existing workload. Create a profile. Create a profile share. Delete a lens share. After the lens share is deleted, Amazon Web Services accounts, users, organizations, and organizational units (OUs) that you shared the lens with can continue to use it, but they will no longer be able to apply it to new workloads. Disclaimer By sharing your custom lenses with other Amazon Web Services accounts, you acknowledge that Amazon Web Services will make your custom lenses available to those other accounts. 
Those other accounts may continue to access and use your shared custom lenses even if you delete the custom lenses from your own Amazon Web Services account or terminate your Amazon Web Services account. Delete a profile. Disclaimer By sharing your profile with other Amazon Web Services accounts, you acknowledge that Amazon Web Services will make your profile available to those other accounts. Those other accounts may continue to access and use your shared profile even if you delete the profile from your own Amazon Web Services account or terminate your Amazon Web Services account. Delete a profile share. Disassociate a lens from a workload. Up to 10 lenses can be disassociated from a workload in a single API operation. The Amazon Web Services Well-Architected Framework lens ( Disassociate a profile from a workload. Get a milestone for an existing workload. Get profile information. Get profile template. List lens notifications. List profile notifications. List profile shares. List profiles. List the tags for a resource. The WorkloadArn parameter can be either a workload ARN or a custom lens ARN. List the tags for a resource. The WorkloadArn parameter can be a workload ARN, a custom lens ARN, or a profile ARN. Adds one or more tags to the specified resource. The WorkloadArn parameter can be either a workload ARN or a custom lens ARN. Adds one or more tags to the specified resource. The WorkloadArn parameter can be a workload ARN, a custom lens ARN, or a profile ARN. Deletes specified tags from a resource. The WorkloadArn parameter can be either a workload ARN or a custom lens ARN. To specify multiple tags, use separate tagKeys parameters, for example: Deletes specified tags from a resource. The WorkloadArn parameter can be a workload ARN, a custom lens ARN, or a profile ARN. To specify multiple tags, use separate tagKeys parameters, for example: Update lens review for a particular workload. Update a profile. Upgrade lens review for a particular workload. 
Upgrade a profile. The reason why a choice is non-applicable to a question in your workload. The type of the question. An answer summary of a lens review in a workload. Input to associate lens reviews. The list of profile ARNs to associate with the workload. An Amazon Web Services account ID. A unique case-sensitive string used to ensure that this request is idempotent (executes only once). You should not reuse the same token for other requests. If you retry a request with the same client request token and the same parameters after the original request has completed successfully, the result of the original request is returned. This token is listed as required, however, if you do not specify it, the Amazon Web Services SDKs automatically generate one for you. If you are not using the Amazon Web Services SDK or the CLI, you must provide this token or the request will fail. A unique case-sensitive string used to ensure that this request is idempotent (executes only once). You should not reuse the same token for other requests. If you retry a request with the same client request token and the same parameters after the original request has completed successfully, the result of the original request is returned. This token is listed as required, however, if you do not specify it, the Amazon Web Services SDKs automatically generate one for you. If you are not using the Amazon Web Services SDK or the CLI, you must provide this token or the request will fail. Output of a create milestone call. Name of the profile. The profile description. The profile questions. The tags assigned to the profile. The profile ARN. Version of the profile. The profile ARN. The profile ARN. List of AppRegistry application ARNs associated to the workload. The list of profile ARNs associated with the workload. Input for workload creation. The profile ARN. The profile ARN. Input to disassociate lens reviews. The list of profile ARNs to disassociate from the workload. Output of a get milestone call. 
The profile ARN. The profile version. The profile. The profile template. The profiles associated with the workload. A lens review of a question. The status of the lens. The profiles associated with the workload. A lens review summary of a workload. The maximum number of results to return for this request. The priority of the question. Input to list answers. The maximum number of results to return for this request. The priority of the question. Input to list lens review improvements. Notification summaries. The profile ARN. The Amazon Web Services account ID, IAM role, organization ID, or organizational unit (OU) ID with which the profile is shared. The maximum number of results to return for this request. Profile share summaries. Prefix for profile name. Profile owner type. Profile summaries. The maximum number of results to return for this request. Profile name prefix. Input for List Share Invitations A milestone summary return object. The token to use to retrieve the next set of results. Permission granted on a workload share. Permission granted on a share request. A pillar review summary of a lens review. The profile ARN. The profile version. The profile name. The profile description. Profile questions. The ID assigned to the share invitation. The tags assigned to the profile. A profile. The profile choice. The current profile version. The latest profile version. Type of notification. The profile ARN. The profile name. The profile notification summary. The question choices. The selected choices. The minimum number of selected choices. The maximum number of selected choices. A profile question. The selected choices. An update to a profile question. Profile share invitation status message. Summary of a profile share. The profile ARN. The profile version. The profile name. The profile description. Summary of a profile. The name of the profile template. Profile template questions. The profile template. A profile template choice. The question choices. 
The minimum number of choices selected. The maximum number of choices selected. A profile template question. The description of the question. The title of the question. Service Quotas requirement to identify originating quota. A map from risk names to the count of how many questions have that rating. List of selected choice IDs in a question answer. The values entered replace the previously selected choices. Service Quotas requirement to identify originating service. The ID associated with the workload share. The ID associated with the share. The ARN for the lens. The profile ARN. The share invitation. The ARN for the lens. The profile name. The profile ARN. A share invitation summary return object. The status of a workload share. The status of the share request. The Amazon Web Services account ID, IAM role, organization ID, or organizational unit (OU) ID with which the workload is shared. The Amazon Web Services account ID, IAM role, organization ID, or organizational unit (OU) ID with which the workload, lens, or profile is shared. Output of an update lens review call. The profile ARN. The profile description. Profile questions. The profile. The profile ARN. List of AppRegistry application ARNs associated to the workload. Profile associated with a workload. A workload return object. The ID assigned to the workload. This ID is unique within an Amazon Web Services Region. The priorities of the pillars, which are used to order items in the improvement plan. Each pillar is represented by its PillarReviewSummary$PillarId. The profile ARN. The profile version. The profile associated with a workload. Profile associated with a workload. A workload summary return object. Returns information about one or more Amazon Lightsail SSL/TLS certificates. To get a summary of a certificate, omit Returns information about one or more Amazon Lightsail SSL/TLS certificates. To get a summary of a certificate, omit The support code.
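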
Include this code in your email to support when you have questions about your Lightsail certificate. This code enables our support team to look up your Lightsail information more easily. Describes the full details of an Amazon Lightsail SSL/TLS certificate. To get a summary of a certificate, use the Describes the full details of an Amazon Lightsail SSL/TLS certificate. To get a summary of a certificate, use the The name for the certificate for which to return information. When omitted, the response includes all of your certificates in the Amazon Web Services Region where the request is made. The token to advance to the next page of results from your request. To get a page token, perform an initial An object that describes certificates. If The cost estimate start time. Constraints: Specified in Coordinated Universal Time (UTC). Specified in the Unix time format. For example, if you wish to use a start time of October 1, 2018, at 8 PM UTC, specify You can convert a human-friendly time to Unix time format using a converter like Epoch converter. The cost estimate start time. Constraints: Specified in Coordinated Universal Time (UTC). Specified in the Unix time format. For example, if you want to use a start time of October 1, 2018, at 8 PM UTC, specify You can convert a human-friendly time to Unix time format using a converter like Epoch converter. The cost estimate end time. Constraints: Specified in Coordinated Universal Time (UTC). Specified in the Unix time format. For example, if you wish to use an end time of October 1, 2018, at 9 PM UTC, specify You can convert a human-friendly time to Unix time format using a converter like Epoch converter. The cost estimate end time. Constraints: Specified in Coordinated Universal Time (UTC). Specified in the Unix time format. For example, if you want to use an end time of October 1, 2018, at 9 PM UTC, specify You can convert a human-friendly time to Unix time format using a converter like Epoch converter. 
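The cost-estimate start and end times above must be given in Unix time, UTC. The conversion from a human-friendly time is a one-liner with the standard library; the October 1, 2018 times are the ones used as examples in the text.

```python
from datetime import datetime, timezone

# Converting human-friendly UTC times to the Unix time format expected by the
# cost-estimate start/end time parameters described above.
start = datetime(2018, 10, 1, 20, 0, tzinfo=timezone.utc)  # Oct 1 2018, 8 PM UTC
end = datetime(2018, 10, 1, 21, 0, tzinfo=timezone.utc)    # Oct 1 2018, 9 PM UTC

start_time = int(start.timestamp())
end_time = int(end.timestamp())
assert start_time == 1538424000  # Unix time for Oct 1 2018, 8 PM UTC
assert end_time == 1538427600    # one hour later
```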
Creates a reference to an Amazon Cognito user pool as an external identity provider (IdP). After you create an identity source, you can use the identities provided by the IdP as proxies for the principal in authorization queries that use the IsAuthorizedWithToken operation. These identities take the form of tokens that contain claims about the user, such as IDs, attributes, and group memberships. Amazon Cognito provides both identity tokens and access tokens, and Verified Permissions can use either or both. Any combination of identity and access tokens results in the same Cedar principal. Verified Permissions automatically translates the information about the identities into the standard Cedar attributes that can be evaluated by your policies. Because the Amazon Cognito identity and access tokens can contain different information, the tokens you choose to use determine which principal attributes are available to access when evaluating Cedar policies. If you delete an Amazon Cognito user pool or user, tokens from that deleted pool or that deleted user continue to be usable until they expire. To reference a user from this identity source in your Cedar policies, use the following syntax. IdentityType::\"<CognitoUserPoolIdentifier>|<CognitoClientId> Where Creates a Cedar policy and saves it in the specified policy store. You can create either a static policy or a policy linked to a policy template. To create a static policy, provide the Cedar policy text in the To create a policy that is dynamically linked to a policy template, specify the policy template ID and the principal and resource to associate with this policy in the Creating a policy causes it to be validated against the schema in the policy store. If the policy doesn't pass validation, the operation fails and the policy isn't stored. Creates a policy store. A policy store is a container for policy resources. Creates a policy template. A template can use placeholders for the principal and resource.
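The two policy shapes described above, static and template-linked, can be sketched as the two definition payloads a CreatePolicy call would take. The dict keys follow the lowercase-camelCase convention of the Verified Permissions API, and the Cedar statement, template ID, and entity names are illustrative assumptions, not values from the source.

```python
# Hedged sketch: the two alternative policy definition shapes for CreatePolicy.
# Statement text, IDs, and entity names are illustrative placeholders.
static_definition = {
    "static": {
        "description": "Allow alice to view one photo",  # hypothetical
        "statement": (
            'permit(principal == User::"alice", '
            'action == Action::"view", '
            'resource == Photo::"vacation.jpg");'
        ),
    }
}

template_linked_definition = {
    "templateLinked": {
        "policyTemplateId": "template-id-placeholder",   # hypothetical ID
        "principal": {"entityType": "User", "entityId": "alice"},
        "resource": {"entityType": "Photo", "entityId": "vacation.jpg"},
    }
}

# Exactly one of the two shapes is supplied per CreatePolicy call:
assert set(static_definition) == {"static"}
assert set(template_linked_definition) == {"templateLinked"}
```

A static policy carries its full Cedar text; a template-linked one supplies only the template ID plus the principal and resource to substitute for the template's placeholders.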
A template must be instantiated into a policy by associating it with specific principals and resources to use for the placeholders. That instantiated policy can then be considered in authorization decisions. The instantiated policy works identically to any other policy, except that it is dynamically linked to the template. If the template changes, then any policies that are linked to that template are immediately updated as well. Deletes an identity source that references an identity provider (IdP) such as Amazon Cognito. After you delete the identity source, you can no longer use tokens for identities from that identity source to represent principals in authorization queries made using IsAuthorizedWithToken operations. Deletes the specified policy from the policy store. This operation is idempotent; if you specify a policy that doesn't exist, the request response returns a successful Deletes the specified policy store. This operation is idempotent. If you specify a policy store that does not exist, the request response will still return a successful HTTP 200 status code. Deletes the specified policy template from the policy store. This operation also deletes any policies that were created from the specified policy template. Those policies are immediately removed from all future API responses, and are asynchronously deleted from the policy store. Retrieves the details about the specified identity source. Retrieves information about the specified policy. Retrieves details about a policy store. Retrieve the details for the specified policy template in the specified policy store. Retrieve the details for the specified schema in the specified policy store. Makes an authorization decision about a service request described in the parameters. The information in the parameters can also define additional context that Verified Permissions can include in the evaluation. The request is evaluated against all matching policies in the specified policy store.
The result of the decision is either Makes an authorization decision about a service request described in the parameters. The principal in this request comes from an external identity source. The information in the parameters can also define additional context that Verified Permissions can include in the evaluation. The request is evaluated against all matching policies in the specified policy store. The result of the decision is either If you delete an Amazon Cognito user pool or user, tokens from that deleted pool or that deleted user continue to be usable until they expire. Returns a paginated list of all of the identity sources defined in the specified policy store. Returns a paginated list of all policies stored in the specified policy store. Returns a paginated list of all policy stores in the calling Amazon Web Services account. Returns a paginated list of all policy templates in the specified policy store. Creates or updates the policy schema in the specified policy store. The schema is used to validate any Cedar policies and policy templates submitted to the policy store. Any changes to the schema validate only policies and templates submitted after the schema change. Existing policies and templates are not re-evaluated against the changed schema. If you later update a policy, then it is evaluated against the new schema at that time. Updates the specified identity source to use a new identity provider (IdP) source, or to change the mapping of identities from the IdP to a different principal entity type. Modifies a Cedar static policy in the specified policy store. You can change only certain elements of the UpdatePolicyDefinition parameter. You can directly update only static policies. To change a template-linked policy, you must update the template instead, using UpdatePolicyTemplate. If policy validation is enabled in the policy store, then updating a static policy causes Verified Permissions to validate the policy against the schema in the policy store.
If the updated static policy doesn't pass validation, the operation fails and the update isn't stored. Modifies the validation setting for a policy store. Updates the specified policy template. You can update only the description and some elements of the policyBody. Changes you make to the policy template content are immediately reflected in authorization decisions that involve all template-linked policies instantiated from this template. You don't have sufficient access to perform this action. The type of an action. The ID of an action. Contains information about an action for a request for which an authorization decision is made. This data type is used as a request parameter to the IsAuthorized and IsAuthorizedWithToken operations. Example: An attribute value of Boolean type. Example: An attribute value of type EntityIdentifier. Example: An attribute value of Long type. Example: An attribute value of String type. Example: An attribute value of Set type. Example: An attribute value of Record type. Example: The value of an attribute. Contains information about the runtime context for a request for which an authorization decision is made. This data type is used as a member of the ContextDefinition structure which is used as a request parameter for the IsAuthorized and IsAuthorizedWithToken operations. The Amazon Resource Name (ARN) of the Amazon Cognito user pool that contains the identities to be authorized. Example: The unique application client IDs that are associated with the specified Amazon Cognito user pool. Example: The configuration for an identity source that represents a connection to an Amazon Cognito user pool used as an identity provider for Verified Permissions. This data type is used as a field that is part of a Configuration structure that is used as a parameter to the Configuration. Example: Contains configuration details of an Amazon Cognito user pool that Verified Permissions can use as a source of authenticated identities as entities.
It specifies the Amazon Resource Name (ARN) of an Amazon Cognito user pool and one or more application client IDs. Example: Contains configuration information used when creating a new identity source. At this time, the only valid member of this structure is an Amazon Cognito user pool configuration. You must specify a This data type is used as a request parameter for the CreateIdentitySource operation. The list of resources referenced with this failed request. The request failed because another request to modify a resource occurred at the same time. A list of attributes that are needed to successfully evaluate an authorization request. Each attribute in this array must include a map of a data type and its value. Example: Contains additional details about the context of the request. Verified Permissions evaluates this information in an authorization request as part of the This data type is used as a request parameter for the IsAuthorized and IsAuthorizedWithToken operations. Example: Specifies a unique, case-sensitive ID that you provide to ensure the idempotency of the request. This lets you safely retry the request without accidentally performing the same operation a second time. Passing the same value to a later call to an operation requires that you also pass the same value for all other parameters. We recommend that you use a UUID type of value. If you don't provide this value, then Amazon Web Services generates a random one for you. If you retry the operation with the same Specifies the ID of the policy store in which you want to store this identity source. Only policies and requests made using this policy store can reference identities from the identity provider configured in the new identity source. Specifies the details required to communicate with the identity provider (IdP) associated with this identity source. At this time, the only valid member of this structure is an Amazon Cognito user pool configuration.
You must specify a Specifies the namespace and data type of the principals generated for identities authenticated by the new identity source. The date and time the identity source was originally created. The unique ID of the new identity source. The date and time the identity source was most recently updated. The ID of the policy store that contains the identity source. Specifies a unique, case-sensitive ID that you provide to ensure the idempotency of the request. This lets you safely retry the request without accidentally performing the same operation a second time. Passing the same value to a later call to an operation requires that you also pass the same value for all other parameters. We recommend that you use a UUID type of value.. If you don't provide this value, then Amazon Web Services generates a random one for you. If you retry the operation with the same Specifies the A structure that specifies the policy type and content to use for the new policy. You must include either a static or a templateLinked element. The policy content must be written in the Cedar policy language. The ID of the policy store that contains the new policy. The unique ID of the new policy. The policy type of the new policy. The principal specified in the new policy's scope. This response element isn't present when The resource specified in the new policy's scope. This response element isn't present when the The date and time the policy was originally created. The date and time the policy was last updated. Specifies a unique, case-sensitive ID that you provide to ensure the idempotency of the request. This lets you safely retry the request without accidentally performing the same operation a second time. Passing the same value to a later call to an operation requires that you also pass the same value for all other parameters. We recommend that you use a UUID type of value.. If you don't provide this value, then Amazon Web Services generates a random one for you. 
If you retry the operation with the same Specifies the validation setting for this policy store. Currently, the only valid and required value is We recommend that you turn on The unique ID of the new policy store. The Amazon Resource Name (ARN) of the new policy store. The date and time the policy store was originally created. The date and time the policy store was last updated. Specifies a unique, case-sensitive ID that you provide to ensure the idempotency of the request. This lets you safely retry the request without accidentally performing the same operation a second time. Passing the same value to a later call to an operation requires that you also pass the same value for all other parameters. We recommend that you use a UUID type of value. If you don't provide this value, then Amazon Web Services generates a random one for you. If you retry the operation with the same The ID of the policy store in which to create the policy template. Specifies a description for the policy template. Specifies the content that you want to use for the new policy template, written in the Cedar policy language. The ID of the policy store that contains the policy template. The unique ID of the new policy template. The date and time the policy template was originally created. The date and time the policy template was most recently updated. Specifies the ID of the policy store that contains the identity source that you want to delete. Specifies the ID of the identity source that you want to delete. Specifies the ID of the policy store that contains the policy that you want to delete. Specifies the ID of the policy that you want to delete. Specifies the ID of the policy store that you want to delete. Specifies the ID of the policy store that contains the policy template that you want to delete. Specifies the ID of the policy template that you want to delete. The ID of a policy that determined the authorization decision.
Example: Contains information about one of the policies that determined an authorization decision. This data type is used as an element in a response parameter for the IsAuthorized and IsAuthorizedWithToken operations. Example: An array of entities that are needed to successfully evaluate an authorization request. Each entity in this array must include an identifier for the entity, the attributes of the entity, and a list of any parent entities. Contains the list of entities to be considered during an authorization request. This includes all principals, resources, and actions required to successfully evaluate the request. This data type is used as a field in the response parameter for the IsAuthorized and IsAuthorizedWithToken operations. The type of an entity. Example: The identifier of an entity. Contains the identifier of an entity, including its ID and type. This data type is used as a request parameter for the IsAuthorized operation, and as a response parameter for the CreatePolicy, GetPolicy, and UpdatePolicy operations. Example: The identifier of the entity. A list of attributes for the entity. The parents in the hierarchy that contains the entity. Contains information about an entity that can be referenced in a Cedar policy. This data type is used as one of the fields in the EntitiesDefinition structure. Used to indicate that a principal or resource is not specified. This can be used to search for policies that are not associated with a specific principal or resource. The identifier of the entity. It can consist of either an EntityType and EntityId, a principal, or a resource. Contains information about a principal or resource that can be referenced in a Cedar policy. This data type is used as part of the PolicyFilter structure that is used as a request parameter for the ListPolicies operation. The error description. Contains a description of an evaluation error.
This data type is used as a request parameter in the IsAuthorized and IsAuthorizedWithToken operations. Specifies the ID of the policy store that contains the identity source you want information about. Specifies the ID of the identity source you want information about. The date and time that the identity source was originally created. A structure that describes the configuration of the identity source. The ID of the identity source. The date and time that the identity source was most recently updated. The ID of the policy store that contains the identity source. The data type of principals generated for identities authenticated by this identity source. Specifies the ID of the policy store that contains the policy that you want information about. Specifies the ID of the policy you want information about. The ID of the policy store that contains the policy that you want information about. The unique ID of the policy that you want information about. The type of the policy. The principal specified in the policy's scope. This element isn't included in the response when The resource specified in the policy's scope. This element isn't included in the response when The definition of the requested policy. The date and time that the policy was originally created. The date and time that the policy was last updated. Specifies the ID of the policy store that you want information about. The ID of the policy store; The Amazon Resource Name (ARN) of the policy store. The current validation settings for the policy store. The date and time that the policy store was originally created. The date and time that the policy store was last updated. Specifies the ID of the policy store that contains the policy template that you want information about. Specifies the ID of the policy template that you want information about. The ID of the policy store that contains the policy template. The ID of the policy template. The description of the policy template. 
The content of the body of the policy template written in the Cedar policy language. The date and time that the policy template was originally created. The date and time that the policy template was most recently updated. Specifies the ID of the policy store that contains the schema. The ID of the policy store that contains the schema. The body of the schema, written in Cedar schema JSON. The date and time that the schema was originally created. The date and time that the schema was most recently updated. The application client IDs associated with the specified Amazon Cognito user pool that are enabled for this identity source. The Amazon Resource Name (ARN) of the Amazon Cognito user pool whose identities are accessible to this Verified Permissions policy store. The well-known URL that points to this user pool's OIDC discovery endpoint. This is a URL string in the following format. This URL replaces the placeholders for both the Amazon Web Services Region and the user pool identifier with those appropriate for this user pool. A string that identifies the type of OIDC service represented by this identity source. At this time, the only valid value is A structure that contains configuration of the identity source. This data type is used as a response parameter for the CreateIdentitySource operation. The Cedar entity type of the principals returned by the identity provider (IdP) associated with this identity source. A structure that defines characteristics of an identity source that you can use to filter. This data type is used as a request parameter for the ListIdentitySources operation. The date and time the identity source was originally created. A structure that contains the details of the associated identity provider (IdP). The unique identifier of the identity source. The date and time the identity source was most recently updated. The identifier of the policy store that contains the identity source.
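The OIDC discovery endpoint described above follows the standard Cognito well-known URL shape, with the Region and user pool identifier substituted in. A minimal sketch of building that URL; the region and pool ID below are illustrative placeholders.

```python
def cognito_discovery_url(region, user_pool_id):
    """Build the well-known OIDC discovery URL for a Cognito user pool,
    following the format described above (Region and pool ID filled in)."""
    return (
        f"https://cognito-idp.{region}.amazonaws.com/"
        f"{user_pool_id}/.well-known/openid-configuration"
    )

# Placeholder region and pool ID, for illustration only.
url = cognito_discovery_url("us-east-1", "us-east-1_EXAMPLE")
assert url.endswith("/.well-known/openid-configuration")
```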
The Cedar entity type of the principals returned from the IdP associated with this identity source. A structure that defines an identity source. This data type is used as a request parameter for the ListIdentitySources operation. The application client IDs associated with the specified Amazon Cognito user pool that are enabled for this identity source. The Amazon Cognito user pool whose identities are accessible to this Verified Permissions policy store. The well-known URL that points to this user pool's OIDC discovery endpoint. This is a URL string in the following format. This URL replaces the placeholders for both the Amazon Web Services Region and the user pool identifier with those appropriate for this user pool. A string that identifies the type of OIDC service represented by this identity source. At this time, the only valid value is A structure that contains configuration of the identity source. This data type is used as a response parameter for the CreateIdentitySource operation. The request failed because of an internal error. Try your request again later. Specifies the ID of the policy store. Policies in this policy store will be used to make an authorization decision for the input. Specifies the principal for which the authorization decision is to be made. Specifies the requested action to be authorized. For example, is the principal authorized to perform this action on the resource? Specifies the resource for which the authorization decision is to be made. Specifies additional context that can be used to make more granular authorization decisions. Specifies the list of entities and their associated attributes that Verified Permissions can examine when evaluating the policies. An authorization decision that indicates if the authorization request should be allowed or denied. The list of determining policies used to make the authorization decision. 
For example, if there are two matching policies, where one is a forbid and the other is a permit, then the forbid policy will be the determining policy. In the case of multiple matching permit policies, there would be multiple determining policies. In the case that no policies match, and hence the response is DENY, there would be no determining policies. Errors that occurred while making an authorization decision, for example, a policy references an entity or entity attribute that does not exist in the slice. Specifies the ID of the policy store. Policies in this policy store will be used to make an authorization decision for the input. Specifies an identity token for the principal to be authorized. This token is provided to you by the identity provider (IdP) associated with the specified identity source. You must specify either an Specifies an access token for the principal to be authorized. This token is provided to you by the identity provider (IdP) associated with the specified identity source. You must specify either an Specifies the requested action to be authorized. Is the specified principal authorized to perform this action on the specified resource? Specifies the resource for which the authorization decision is made. For example, is the principal allowed to perform the action on the resource? Specifies additional context that can be used to make more granular authorization decisions. Specifies the list of entities and their associated attributes that Verified Permissions can examine when evaluating the policies. An authorization decision that indicates if the authorization request should be allowed or denied. The list of determining policies used to make the authorization decision. For example, if there are multiple matching policies, where at least one is a forbid policy, then, because forbid always overrides permit, the forbid policies are the determining policies. 
If all matching policies are permit policies, then those policies are the determining policies. When no policies match and the response is the default DENY, there are no determining policies. Errors that occurred while making an authorization decision. For example, a policy references an entity or entity attribute that does not exist in the slice. Specifies the ID of the policy store that contains the identity sources that you want to list. Specifies that you want to receive the next page of results. Valid only if you received a Specifies the total number of results that you want included on each page of the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the number you specify, the Specifies characteristics of an identity source that you can use to limit the output to matching identity sources. If present, this value indicates that more output is available than is included in the current response. Use this value in the The list of identity sources stored in the specified policy store. Specifies the ID of the policy store you want to list policies from. Specifies that you want to receive the next page of results. Valid only if you received a Specifies the total number of results that you want included on each page of the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the number you specify, the Specifies a filter that limits the response to only policies that match the specified criteria. For example, you list only the policies that reference a specified principal. If present, this value indicates that more output is available than is included in the current response. Use this value in the Lists all policies that are available in the specified policy store. Specifies that you want to receive the next page of results. 
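The determining-policy rule described above (forbid always overrides permit, and when no policies match the default decision is DENY with no determining policies) can be sketched as:

```python
def authorization_decision(matching_policies):
    """Sketch of the determining-policy rule: forbid always overrides
    permit; no match means the default DENY with no determining policies.
    Each policy is a (policy_id, effect) pair, effect in {"permit", "forbid"}.
    """
    forbids = [p for p in matching_policies if p[1] == "forbid"]
    permits = [p for p in matching_policies if p[1] == "permit"]
    if forbids:
        return "DENY", forbids      # the forbid policies are determining
    if permits:
        return "ALLOW", permits     # all matches permit: they are determining
    return "DENY", []               # no match: default deny, none determining

decision, determining = authorization_decision(
    [("policy-1", "permit"), ("policy-2", "forbid")]
)
print(decision, [p[0] for p in determining])  # DENY ['policy-2']
```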
Valid only if you received a Specifies the total number of results that you want included on each page of the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the number you specify, the If present, this value indicates that more output is available than is included in the current response. Use this value in the The list of policy stores in the account. Specifies the ID of the policy store that contains the policy templates you want to list. Specifies that you want to receive the next page of results. Valid only if you received a Specifies the total number of results that you want included on each page of the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the number you specify, the If present, this value indicates that more output is available than is included in the current response. Use this value in the The list of the policy templates in the specified policy store. A structure that describes a static policy. A static policy doesn't use a template or allow placeholders for entities. A structure that describes a policy that was instantiated from a template. The template can specify placeholders for A structure that contains the details for a Cedar policy definition. It includes the policy type, a description, and a policy body. This is a top-level data type used to create a policy. This data type is used as a request parameter for the CreatePolicy operation. This structure must always have either an Information about a static policy that wasn't created with a policy template. Information about a template-linked policy that was created by instantiating a policy template. A structure that describes a policy definition. It must always have either an This data type is used as a response parameter for the GetPolicy operation. 
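The nextToken/maxResults pattern that recurs throughout these List* operations can be sketched with a stub in place of any real service call (the stub and its data are invented for illustration):

```python
def list_all(list_page, max_results=2):
    """Sketch of the nextToken/maxResults pagination pattern: pass no
    token on the first call, then feed each response's nextToken back
    until it comes back absent."""
    items, token = [], None
    while True:
        page = list_page(maxResults=max_results, nextToken=token)
        items.extend(page["items"])
        token = page.get("nextToken")
        if token is None:
            return items

# A stub "service" holding five policy stores, paginated:
def fake_list_policy_stores(maxResults, nextToken):
    data = ["ps-1", "ps-2", "ps-3", "ps-4", "ps-5"]
    start = int(nextToken or 0)
    end = start + maxResults
    page = {"items": data[start:end]}
    if end < len(data):
        page["nextToken"] = str(end)
    return page

print(list_all(fake_list_policy_stores))  # ['ps-1', 'ps-2', 'ps-3', 'ps-4', 'ps-5']
```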
Information about a static policy that wasn't created with a policy template. Information about a template-linked policy that was created by instantiating a policy template. A structure that describes a PolicyDefinition. It will always have either an This data type is used as a response parameter for the CreatePolicy and ListPolicies operations. Filters the output to only policies that reference the specified principal. Filters the output to only policies that reference the specified resource. Filters the output to only policies of the specified type. Filters the output to only template-linked policies that were instantiated from the specified policy template. Contains information about a filter to refine policies returned in a query. This data type is used as a request parameter for the ListPolicies operation. The identifier of the PolicyStore where the policy you want information about is stored. The identifier of the policy you want information about. The type of the policy. This is one of the following values: The principal associated with the policy. The resource associated with the policy. The policy definition of an item in the list of policies returned. The date and time the policy was created. The date and time the policy was most recently updated. Contains information about a policy. This data type is used as a response parameter for the ListPolicies operation. The unique identifier of the policy store. The Amazon Resource Name (ARN) of the policy store. The date and time the policy was created. Contains information about a policy store. This data type is used as a response parameter for the ListPolicyStores operation. The unique identifier of the policy store that contains the template. The unique identifier of the policy template. The description attached to the policy template. The date and time that the policy template was created. The date and time that the policy template was most recently updated. 
Contains details about a policy template. This data type is used as a response parameter for the ListPolicyTemplates operation. Specifies the ID of the policy store in which to place the schema. Specifies the definition of the schema to be stored. The schema definition must be written in Cedar schema JSON. The unique ID of the policy store that contains the schema. Identifies the namespaces of the entities referenced by this schema. The date and time that the schema was originally created. The date and time that the schema was last updated. The unique identifier of the resource involved in a conflict. The type of the resource involved in a conflict. Contains information about a resource conflict. The unique ID of the resource referenced in the failed request. The resource type of the resource referenced in the failed request. The request failed because it references a resource that doesn't exist. A JSON string representation of the schema supported by applications that use this policy store. For more information, see Policy store schema in the Amazon Verified Permissions User Guide. Contains a list of principal types, resource types, and actions that can be specified in policies stored in the same policy store. If the validation mode for the policy store is set to The unique ID of the resource referenced in the failed request. The resource type of the resource referenced in the failed request. The code for the Amazon Web Service that owns the quota. The quota code recognized by the Amazon Web Services Service Quotas service. The request failed because it would cause a service quota to be exceeded. The description of the static policy. The policy content of the static policy, written in the Cedar policy language. Contains information about a static policy. This data type is used as a field that is part of the PolicyDefinitionDetail type. A description of the static policy. The content of the static policy written in the Cedar policy language. 
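The schema definition passed to PutSchema is Cedar schema JSON, whose top-level keys are the namespaces the response reports back. A minimal sketch, with entity and action names invented for illustration:

```python
import json

# A minimal Cedar JSON schema sketch (the PhotoApp namespace, its entity
# types, and its action are made-up examples): top-level keys are
# namespaces, each declaring its entityTypes and actions.
schema = {
    "PhotoApp": {
        "entityTypes": {
            "User": {},
            "Photo": {},
        },
        "actions": {
            "viewPhoto": {
                "appliesTo": {
                    "principalTypes": ["User"],
                    "resourceTypes": ["Photo"],
                }
            }
        },
    }
}

# The definition is submitted as a JSON string; the namespaces reported
# in the response correspond to the schema's top-level keys.
definition = json.dumps(schema)
namespaces = list(json.loads(definition).keys())
print(namespaces)  # ['PhotoApp']
```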
A structure that contains details about a static policy. It includes the description and policy body. This data type is used within a PolicyDefinition structure as part of a request parameter for the CreatePolicy operation. A description of the static policy. A structure that contains details about a static policy. It includes the description and policy statement. This data type is used within a PolicyDefinition structure as part of a request parameter for the CreatePolicy operation. The unique identifier of the policy template used to create this policy. The principal associated with this template-linked policy. Verified Permissions substitutes this principal for the The resource associated with this template-linked policy. Verified Permissions substitutes this resource for the Contains information about a policy created by instantiating a policy template. The unique identifier of the policy template used to create this policy. The principal associated with this template-linked policy. Verified Permissions substitutes this principal for the The resource associated with this template-linked policy. Verified Permissions substitutes this resource for the Contains information about a policy that was created by instantiating a policy template. This The unique identifier of the policy template used to create this policy. The principal associated with this template-linked policy. Verified Permissions substitutes this principal for the The resource associated with this template-linked policy. Verified Permissions substitutes this resource for the Contains information about a policy created by instantiating a policy template. This The code for the Amazon Web Service that owns the quota. The quota code recognized by the Amazon Web Services Service Quotas service. The request failed because it exceeded a throttling quota. The Amazon Resource Name (ARN) of the Amazon Cognito user pool associated with this identity source. 
The client ID of an app client that is configured for the specified Amazon Cognito user pool. Contains configuration details of an Amazon Cognito user pool for use with an identity source. Contains configuration details of an Amazon Cognito user pool. Contains an updated configuration to replace the configuration in an existing identity source. At this time, the only valid member of this structure is an Amazon Cognito user pool configuration. You must specify a Specifies the ID of the policy store that contains the identity source that you want to update. Specifies the ID of the identity source that you want to update. Specifies the details required to communicate with the identity provider (IdP) associated with this identity source. At this time, the only valid member of this structure is an Amazon Cognito user pool configuration. You must specify a Specifies the data type of principals generated for identities authenticated by the identity source. The date and time that the updated identity source was originally created. The ID of the updated identity source. The date and time that the identity source was most recently updated. The ID of the policy store that contains the updated identity source. Contains details about the updates to be applied to a static policy. Contains information about updates to be applied to a policy. This data type is used as a request parameter in the UpdatePolicy operation. Specifies the ID of the policy store that contains the policy that you want to update. Specifies the ID of the policy that you want to update. To find this value, you can use ListPolicies. Specifies the updated policy content that you want to replace on the specified policy. The content must be valid Cedar policy language text. 
You can change only the following elements from the policy definition: The Any conditional clauses, such as You can't change the following elements: Changing from Changing the effect of the policy from The The The ID of the policy store that contains the policy that was updated. The ID of the policy that was updated. The type of the policy that was updated. The principal specified in the policy's scope. This element isn't included in the response when The resource specified in the policy's scope. This element isn't included in the response when The date and time that the policy was originally created. The date and time that the policy was most recently updated. Specifies the ID of the policy store that you want to update. A structure that defines the validation settings that you want to enable for the policy store. The ID of the updated policy store. The Amazon Resource Name (ARN) of the updated policy store. The date and time that the policy store was originally created. The date and time that the policy store was most recently updated. Specifies the ID of the policy store that contains the policy template that you want to update. Specifies the ID of the policy template that you want to update. Specifies a new description to apply to the policy template. Specifies new statement content written in Cedar policy language to replace the current body of the policy template. You can change only the following elements of the policy body: The Any conditional clauses, such as You can't change the following elements: The effect ( The The The ID of the policy store that contains the updated policy template. The ID of the updated policy template. The date and time that the policy template was originally created. The date and time that the policy template was most recently updated. Specifies the description to be added to or replaced on the static policy. Specifies the Cedar policy language text to be added to or replaced on the static policy. 
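The update constraints described here (conditions may change, the policy's effect may not) can be mirrored by a client-side guard. This is a toy check, not a Cedar parser; it only inspects the leading effect keyword, and the sample policies are invented:

```python
def effect_of(policy_text: str) -> str:
    """Toy helper, not a Cedar parser: read the leading effect keyword."""
    head = policy_text.lstrip()
    return "forbid" if head.startswith("forbid") else "permit"

def check_update_allowed(old_policy: str, new_policy: str) -> bool:
    """Mirror of the constraint above: an update may rewrite conditional
    clauses, but may not flip the policy's effect."""
    return effect_of(old_policy) == effect_of(new_policy)

old = 'permit(principal, action, resource) when { context.mfa == true };'
new = 'permit(principal, action, resource) when { context.mfa == false };'
print(check_update_allowed(old, new))   # True: only the condition changed
print(check_update_allowed(old, 'forbid(principal, action, resource);'))  # False
```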
You can change only the following elements from the original content: The Any conditional clauses, such as You can't change the following elements: Changing from The effect ( The The Contains information about an update to a static policy. The list of fields that aren't valid. The request failed because one or more input parameters don't satisfy their constraint requirements. The output is provided as a list of fields and a reason for each field that isn't valid. The possible reasons include the following: UnrecognizedEntityType The policy includes an entity type that isn't found in the schema. UnrecognizedActionId The policy includes an action id that isn't found in the schema. InvalidActionApplication The policy includes an action that, according to the schema, doesn't support the specified principal and resource. UnexpectedType The policy included an operand that isn't a valid type for the specified operation. IncompatibleTypes The types of elements included in a MissingAttribute The policy attempts to access a record or entity attribute that isn't specified in the schema. Test for the existence of the attribute first before attempting to access its value. For more information, see the has (presence of attribute test) operator in the Cedar Policy Language Guide. UnsafeOptionalAttributeAccess The policy attempts to access a record or entity attribute that is optional and isn't guaranteed to be present. Test for the existence of the attribute first before attempting to access its value. For more information, see the has (presence of attribute test) operator in the Cedar Policy Language Guide. ImpossiblePolicy Cedar has determined that a policy condition always evaluates to false. If the policy is always false, it can never apply to any query, and so it can never affect an authorization decision. WrongNumberArguments The policy references an extension type with the wrong number of arguments. 
FunctionArgumentValidationError Cedar couldn't parse the argument passed to an extension type. For example, a string that is to be parsed as an IPv4 address can contain only digits and the period character. The path to the specific element that Verified Permissions found to be not valid. Describes the policy validation error. Details about a field that failed policy validation. The validation mode currently configured for this policy store. The valid values are: OFF – Neither Verified Permissions nor Cedar perform any validation on policies. No validation errors are reported by either service. STRICT – Requires a schema to be present in the policy store. Cedar performs validation on all submitted new or updated static policies and policy templates. Any that fail validation are rejected and Cedar doesn't store them in the policy store. If To submit a static policy or policy template without a schema, you must turn off validation. A structure that contains Cedar policy validation settings for the policy store. The validation mode determines which validation failures that Cedar considers serious enough to block acceptance of a new or edited static policy or policy template. This data type is used as a request parameter in the CreatePolicyStore and UpdatePolicyStore operations. Amazon Verified Permissions is a permissions management service from Amazon Web Services. You can use Verified Permissions to manage permissions for your application, and authorize user access based on those permissions. Using Verified Permissions, application developers can grant access based on information about the users, resources, and requested actions. You can also evaluate additional information like group membership, attributes of the resources, and session context, such as time of request and IP addresses. 
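The OFF and STRICT validation modes described above can be summarized as a small acceptance rule. This is a sketch of the documented behavior, not the service's implementation:

```python
def accept_policy(mode: str, schema_present: bool, validation_errors: list) -> bool:
    """Sketch of the validation-mode behavior: OFF stores policies without
    validation; STRICT requires a schema in the policy store and rejects
    any submission that fails Cedar validation."""
    if mode == "OFF":
        return True
    if mode == "STRICT":
        return schema_present and not validation_errors
    raise ValueError(f"unknown validation mode: {mode}")

# Without a schema, only OFF lets a static policy through:
print(accept_policy("OFF", schema_present=False, validation_errors=[]))     # True
print(accept_policy("STRICT", schema_present=False, validation_errors=[]))  # False
print(accept_policy("STRICT", schema_present=True, validation_errors=[]))   # True
```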
Verified Permissions manages these permissions by letting you create and store authorization policies for your applications, such as consumer-facing web sites and enterprise business systems. Verified Permissions uses Cedar as the policy language to express your permission requirements. Cedar supports both role-based access control (RBAC) and attribute-based access control (ABAC) authorization models. For more information about configuring, administering, and using Amazon Verified Permissions in your applications, see the Amazon Verified Permissions User Guide. For more information about the Cedar policy language, see the Cedar Policy Language Guide. When you write Cedar policies that reference principals, resources and actions, you can define the unique identifiers used for each of those elements. We strongly recommend that you follow these best practices: Use values like universally unique identifiers (UUIDs) for all principal and resource identifiers. For example, if user Where you use a UUID for an entity, we recommend that you follow it with the // comment specifier and the ‘friendly’ name of your entity. This helps to make your policies easier to understand. For example: principal == User::\"a1b2c3d4-e5f6-a1b2-c3d4-EXAMPLE11111\", // alice Do not include personally identifying, confidential, or sensitive information as part of the unique identifier for your principals or resources. These identifiers are included in log entries shared in CloudTrail trails. Several operations return structures that appear similar, but have different purposes. As new functionality is added to the product, the structure used in a parameter of one operation might need to change in a way that wouldn't make sense for the same parameter in a different operation. 
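The identifier best practice above (a UUID as the entity ID, followed by a `//` comment carrying the 'friendly' name) can be sketched as a small helper; the entity type and friendly name below are just examples:

```python
import uuid

def principal_clause(entity_type: str, friendly_name: str) -> str:
    """Build a Cedar principal clause following the best practice above:
    a UUID as the entity ID, with the 'friendly' name appended as a
    Cedar // comment. The UUID is freshly generated for illustration."""
    uid = uuid.uuid4()
    return f'principal == {entity_type}::"{uid}", // {friendly_name}'

clause = principal_clause("User", "alice")
print(clause)  # e.g. principal == User::"a1b2c3d4-...", // alice
```

Keeping the friendly name in a comment, rather than in the identifier itself, means no personal information ends up in the unique IDs that CloudTrail logs.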
To help you understand the purpose of each, the following naming convention is used for the structures: Parameters that end in Parameters that end in Parameters that use neither suffix are used in the mutating (create and update) operations. This is no longer supported, and does not return a value. The next date when the pipeline is scheduled to run. Returns a list of all requested findings. Use to create a scan using code uploaded to an S3 bucket. Generates a pre-signed URL and request headers used to upload a code resource. You can upload your code resource to the URL and add the request headers using any HTTP client. Use to get account level configuration. Returns a list of all findings generated by a particular scan. Returns top level metrics about an account from a specified date, including number of open findings, the categories with most findings, the scans with most open findings, and scans with most open critical findings. Returns details about a scan, including whether or not a scan has completed. Returns metrics about all findings in an account within a specified time range. Returns a list of all the scans in an account. Returns a list of all tags associated with a scan. Use to add one or more tags to an existing scan. Use to remove one or more tags from an existing scan. Use to update account-level configuration with an encryption key. The identifier for the error. Description of the error. The identifier for the resource you don't have access to. The type of resource you don't have access to. You do not have sufficient access to perform this action. The number of closed findings of each severity in an account on the specified date. The date from which the finding metrics were retrieved. The average time it takes to close findings of each severity in days. The number of new findings of each severity in account on the specified date. The number of open findings of each severity in an account as of the specified date. 
A summary of findings metrics in an account. A code associated with the type of error. The finding ID of the finding that was not fetched. Describes the error. The name of the scan that generated the finding. Contains information about the error that caused a finding to fail to be retrieved. A list of finding identifiers. Each identifier consists of a A list of errors for individual findings which were not fetched. Each BatchGetFindingsError contains the A list of all requested findings. The name of the finding category. A finding category is determined by the detector that detected the finding. The number of open findings in the category. Information about a finding category with open findings. The code that contains a vulnerability. The code line number. The line of code where a finding was detected. The identifier for the error. Description of the error. The identifier for the service resource associated with the request. The type of resource associated with the request. The requested operation would cause a conflict with the current state of a service resource associated with the request. Resolve the conflict before retrying this request. The type of analysis you want CodeGuru Security to perform in the scan, either The idempotency token for the request. Amazon CodeGuru Security uses this value to prevent the accidental creation of duplicate scans if there are failures and retries. The identifier for an input resource used to create a scan. The unique name that CodeGuru Security uses to track revisions across multiple scans of the same resource. Only allowed for a The type of scan, either An array of key-value pairs used to tag a scan. A tag is a custom attribute label with two parts: A tag key. For example, An optional tag value field. For example, The identifier for the resource object that contains resources that were scanned. UUID that identifies the individual scan run. The name of the scan. The ARN for the scan name. The current state of the scan. 
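The idempotency token described for scan creation can be illustrated with an in-memory stand-in for the service (the class and its behavior are a sketch of the documented intent, not the real CreateScan implementation):

```python
import uuid

class FakeScanService:
    """In-memory stand-in illustrating how a client token prevents
    accidental duplicate scans when a request is retried."""
    def __init__(self):
        self._by_token = {}

    def create_scan(self, scan_name: str, client_token: str) -> str:
        if client_token in self._by_token:   # retry: return the same scan run
            return self._by_token[client_token]
        run_id = str(uuid.uuid4())           # new scan run
        self._by_token[client_token] = run_id
        return run_id

svc = FakeScanService()
token = str(uuid.uuid4())        # generate once, reuse on retries
first = svc.create_scan("my-scan", token)
retry = svc.create_scan("my-scan", token)
print(first == retry)  # True: the retry did not create a duplicate scan
```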
Returns either The name of the scan that will use the uploaded resource. CodeGuru Security uses the unique scan name to track revisions across multiple scans of the same resource. Use this The identifier for the uploaded code resource. A set of key-value pairs that contain the required headers when uploading your resource. A pre-signed S3 URL. You can upload the code file you want to scan and add the required The KMS key ARN to use for encryption. This must be provided as a header when uploading your code resource. Information about account-level configuration. A list of The last line number of the code snippet where the security vulnerability appears in your code. The name of the file. The path to the resource with the security vulnerability. The first line number of the code snippet where the security vulnerability appears in your code. Information about the location of security vulnerabilities that Amazon CodeGuru Security detected in your code. The time when the finding was created. A description of the finding. The identifier for the detector that detected the finding in your code. A detector is a defined rule based on industry standards and AWS best practices. The name of the detector that identified the security vulnerability in your code. One or more tags or categorizations that are associated with a detector. These tags are defined by type, programming language, or other classification such as maintainability or consistency. The identifier for the component that generated a finding such as AWSCodeGuruSecurity or AWSInspector. The identifier for a finding. An object that contains the details about how to remediate a finding. The resource where Amazon CodeGuru Security detected a finding. The identifier for the rule that generated the finding. The severity of the finding. The status of the finding. A finding status can be open or closed. The title of the finding. The type of finding. The time when the finding was last updated. 
Findings are updated when you remediate them or when the finding code location changes. An object that describes the detected security vulnerability. Information about a finding that was detected in your code. The identifier for a finding. The name of the scan that generated the finding. An object that contains information about a finding and the scan that generated it. The severity of the finding is critical and should be addressed immediately. The severity of the finding is high and should be addressed as a near-term priority. The finding is related to quality or readability improvements and not considered actionable. The severity of the finding is low and does not require action on its own. The severity of the finding is medium and should be addressed as a mid-term priority. The severity of the issue in the code that generated a finding. An The maximum number of results to return in the response. Use this parameter when paginating results. If additional results exist beyond the number you specify, the A token to use for paginating results that are returned in the response. Set the value of this parameter to null for the first request. For subsequent calls, use the The name of the scan you want to retrieve findings from. The status of the findings you want to get. Pass either A list of findings generated by the specified scan. A pagination token. You can use this in future calls to The date you want to retrieve summary metrics from, rounded to the nearest day. The date must be within the past two years, since metrics data is only stored for two years. If a date outside of this range is passed, the response will be empty. The summary metrics from the specified date. UUID that identifies the individual scan run you want to view details about. You retrieve this when you call the The name of the scan you want to view details about. The type of analysis CodeGuru Security performed in the scan, either The time the scan was created. 
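The severity levels described above imply an urgency ordering that a client can use to triage findings; the exact casing of the severity values in API responses is an assumption here, and the findings are made-up examples:

```python
# Severity levels from most to least urgent, per the descriptions above.
# "Info" findings are quality/readability notes and not considered actionable.
SEVERITY_ORDER = ["Critical", "High", "Medium", "Low", "Info"]

def sort_findings(findings):
    """Sort findings so the most urgent severities come first."""
    rank = {s: i for i, s in enumerate(SEVERITY_ORDER)}
    return sorted(findings, key=lambda f: rank[f["severity"]])

findings = [
    {"id": "f-1", "severity": "Low"},
    {"id": "f-2", "severity": "Critical"},
    {"id": "f-3", "severity": "Medium"},
]
print([f["id"] for f in sort_findings(findings)])  # ['f-2', 'f-3', 'f-1']
```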
The number of times a scan has been re-run on a revised resource. UUID that identifies the individual scan run. The name of the scan. The ARN for the scan name. The current state of the scan. Pass either The time when the scan was last updated. Only available for The internal error encountered by the server. Description of the error. The server encountered an internal error and is unable to complete the request. The end date of the interval which you want to retrieve metrics from. The maximum number of results to return in the response. Use this parameter when paginating results. If additional results exist beyond the number you specify, the A token to use for paginating results that are returned in the response. Set the value of this parameter to null for the first request. For subsequent calls, use the The start date of the interval which you want to retrieve metrics from. A list of A pagination token. You can use this in future calls to The maximum number of results to return in the response. Use this parameter when paginating results. If additional results exist beyond the number you specify, the A token to use for paginating results that are returned in the response. Set the value of this parameter to null for the first request. For subsequent calls, use the A pagination token. You can use this in future calls to A list of The ARN of the An array of key-value pairs used to tag an existing scan. A tag is a custom attribute label with two parts: A tag key. For example, An optional tag value field. For example, A list of The date from which the metrics summary information was retrieved. The number of open findings of each severity in an account. A list of A list of Information about summary metrics in an account. The recommended course of action to remediate the finding. The URL address to the recommendation for remediating the finding. Information about the recommended course of action to remediate a finding. 
An object that contains information about the recommended course of action to remediate a finding. A list of Information about how to remediate a finding. The identifier for the resource. The identifier for a section of the resource, such as an AWS Lambda layer. Information about a resource, such as an Amazon S3 bucket or AWS Lambda function, that contains a finding. The identifier for the code file uploaded to the resource where a finding was detected. The identifier for a resource object that contains resources where a finding was detected. The identifier for the error. Description of the error. The identifier for the resource that was not found. The type of resource that was not found. The resource specified in the request was not found. The number of open findings generated by a scan. The name of the scan. Information about a scan with open findings. The time when the scan was created. The identifier for the scan run. The name of the scan. The ARN for the scan name. The state of the scan. A scan can be The time the scan was last updated. A scan is updated when it is re-run. Information about a scan. The suggested code to add to your file. A description of the suggested code fix and why it is being suggested. Information about the suggested code fix to remediate a finding. The ARN of the An array of key-value pairs used to tag an existing scan. A tag is a custom attribute label with two parts: A tag key. For example, An optional tag value field. For example, The identifier for the error. Description of the error. The identifier for the originating quota. The identifier for the originating service. The request was denied due to request throttling. The ARN of the A list of keys for each tag you want to remove from a scan. The KMS key ARN you want to use for encryption. Defaults to service-side encryption if missing. An The identifier for the error. The field that caused the error, if applicable. Description of the error. The reason the request failed validation. 
The input fails to satisfy the specified constraints. Describes the exception. The name of the exception. Information about a validation exception. An object that describes the location of the detected security vulnerability in your code. The identifier for the vulnerability. The number of times the vulnerability appears in your code. One or more URL addresses that contain details about a vulnerability. One or more vulnerabilities that are related to the vulnerability being described. Information about a security vulnerability that Amazon CodeGuru Security detected. This section provides documentation for the Amazon CodeGuru Security API operations. CodeGuru Security is a service that uses program analysis and machine learning to detect security policy violations and vulnerabilities, and recommends ways to address these security risks. By proactively detecting and providing recommendations for addressing security risks, CodeGuru Security improves the overall security of your application code. For more information about CodeGuru Security, see the Amazon CodeGuru Security User Guide. Associate a Source Network to an existing CloudFormation Stack and modify launch templates to use this network. Can be used for reverting to previously deployed CloudFormation stacks. Creates a new ReplicationConfigurationTemplate. Create a new Source Network resource for a provided VPC ID. Deletes a single Replication Configuration Template by ID Delete Source Network resource. Lists all ReplicationConfigurationTemplates, filtered by Source Server IDs. Lists all Source Networks or multiple Source Networks filtered by ID. Disconnects a specific Source Server from Elastic Disaster Recovery. Data replication is stopped immediately. All AWS resources created by Elastic Disaster Recovery for enabling the replication of the Source Server will be terminated / deleted within 90 minutes. You cannot disconnect a Source Server if it has a Recovery Instance. 
If the agent on the Source Server has not been prevented from communicating with the Elastic Disaster Recovery service, then it will receive a command to uninstall itself (within approximately 10 minutes). The following properties of the SourceServer will be changed immediately: dataReplicationInfo.dataReplicationState will be set to DISCONNECTED; The totalStorageBytes property for each of dataReplicationInfo.replicatedDisks will be set to zero; dataReplicationInfo.lagDuration and dataReplicationInfo.etaDateTime will be nullified. Export the Source Network CloudFormation template to an S3 bucket. Starts replication for a stopped Source Server. This action would make the Source Server protected again and restart billing for it. Deploy VPC for the specified Source Network and modify launch templates to use this network. The VPC will be deployed using a dedicated CloudFormation stack. Starts replication for a Source Network. This action would make the Source Network protected. Stops replication for a Source Server. This action would make the Source Server unprotected, delete its existing snapshots and stop billing for it. Stops replication for a Source Network. This action would make the Source Network unprotected. CloudFormation template to associate with a Source Network. The Source Network ID to associate with CloudFormation template. The Source Network association Job. Information about a server's CPU. Copy tags. S3 bucket ARN to export Source Network templates. Launch disposition. Account containing the VPC to protect. Region containing the VPC to protect. A set of tags to be associated with the Source Network resource. Which VPC ID to protect. ID of the created Source Network. ID of the Source Network to delete. A set of filters by which to return Source Networks. Maximum number of Source Networks to retrieve. The token of the next Source Networks to retrieve. Filter Source Networks by account ID containing the protected VPCs.
Filter Source Networks by the region containing the protected VPCs. An array of Source Network IDs that should be returned. An empty array means all Source Networks. A set of filters by which to return Source Networks. An array of Source Networks. The token of the next Source Networks to retrieve. Source Network properties. Properties of resource related to a job event. The Source Network ID to export its CloudFormation template to an S3 bucket. S3 bucket URL where the Source Network CloudFormation template was exported to. The ID of the Job. A list of resources that the Job is acting upon. A list of servers that the Job is acting upon. The ID of a conversion server. Properties of resource related to a job event. A string representing a job error. Copy tags. S3 bucket ARN to export Source Network templates. ID of the Launch Configuration Template. The launch status of a participating resource. The ID of a participating resource. Represents a resource participating in an asynchronous Job. Source Network ID. ID of a resource participating in an asynchronous Job. The date and time the last Source Network recovery was initiated. The ID of the Job that was used to last recover the Source Network. The status of the last recovery status of this Source Network. An object representing the Source Network recovery Lifecycle. Properties of the cloud environment where this Source Server originated from. The ARN of the Source Network. CloudFormation stack name that was deployed for recovering the Source Network. An object containing information regarding the last recovery of the Source Network. ID of the recovered VPC following Source Network recovery. Status of Source Network Replication. Possible values: (a) STOPPED - Source Network is not replicating. (b) IN_PROGRESS - Source Network is being replicated. (c) PROTECTED - Source Network was replicated successfully and is being synchronized for changes. 
(d) ERROR - Source Network replication has failed. Error details in case Source Network replication status is ERROR. Account ID containing the VPC protected by the Source Network. Source Network ID. Region containing the VPC protected by the Source Network. VPC ID protected by the Source Network. A list of tags associated with the Source Network. The ARN of the Source Network. Source Network ID. VPC ID protected by the Source Network. CloudFormation stack name that was deployed for recovering the Source Network. ID of the recovered VPC following Source Network recovery. Properties of Source Network related to a job event. Source cloud properties of the Source Server. ID of the Source Network which is protecting this Source Server's network. The source properties of the Source Server. Don't update existing CloudFormation Stack, recover the network using a new stack. The Source Networks that we want to start a Recovery Job for. The tags to be associated with the Source Network recovery Job. CloudFormation stack name to be used for recovering the network. The ID of the Source Network you want to recover. An object representing the Source Network to recover. The Source Network recovery Job. ID of the Source Network to replicate. Source Network which was requested for replication. ID of the Source Network to stop replication. Source Network which was requested to stop replication. Copy tags. S3 bucket ARN to export Source Network templates. Launch Configuration Template ID. AWS Elastic Disaster Recovery Service. The Amazon S3 bucket and optional folder (object key prefix) where SimSpace Weaver creates the snapshot file. The Amazon S3 bucket and optional folder (object key prefix) where SimSpace Weaver creates the snapshot file. The Amazon S3 bucket must be in the same Amazon Web Services Region as the simulation. The location of the snapshot .zip file in Amazon Simple Storage Service (Amazon S3).
For more information about Amazon S3, see the Amazon Simple Storage Service User Guide. Provide a If you provide a The location of the snapshot .zip file in Amazon Simple Storage Service (Amazon S3). For more information about Amazon S3, see the Amazon Simple Storage Service User Guide. Provide a The Amazon S3 bucket must be in the same Amazon Web Services Region as the simulation. If you provide a Deletes one or more automation rules. Enables the standards specified by the provided For more information, see the Security Standards section of the Security Hub User Guide. Retrieves a list of details for automation rules based on rule Amazon Resource Names (ARNs). Imports security findings generated by a finding provider into Security Hub. This action is requested by the finding provider to import its findings into Security Hub. The Amazon Web Services account that is associated with a finding if you are using the default product ARN or are a partner sending findings from within a customer's Amazon Web Services account. In these cases, the identifier of the account that you are calling An Amazon Web Services account that Security Hub has allow-listed for an official partner integration. In this case, you can call The maximum allowed size for a finding is 240 KB. An error is returned for any finding larger than 240 KB. After a finding is created, Finding providers also should not use Instead, finding providers use Updates one or more automation rules based on rule Amazon Resource Names (ARNs) and input parameters. Creates a custom action target in Security Hub. You can use custom actions on findings and insights in Security Hub to trigger target actions in Amazon CloudWatch Events. Creates an automation rule based on input parameters. Invites other Amazon Web Services accounts to become member accounts for the Security Hub administrator account that the invitation is sent from. This operation is only used to invite accounts that do not belong to an organization.
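A caller-side sketch of the per-finding size limit described above. The JSON-length accounting and the per-request batch cap of 100 findings are assumptions for illustration, not the service's exact rules, and the sample findings are invented:

```python
import json

MAX_FINDING_BYTES = 240 * 1024   # per-finding limit described above (240 KB)
MAX_BATCH_SIZE = 100             # assumed per-request batch cap

def finding_size(finding):
    """Approximate the wire size as the UTF-8 length of the JSON encoding."""
    return len(json.dumps(finding, separators=(",", ":")).encode("utf-8"))

def batch_findings(findings):
    """Drop oversized findings (the service would reject them anyway) and
    chunk the rest into batches no larger than the assumed cap."""
    ok = [f for f in findings if finding_size(f) <= MAX_FINDING_BYTES]
    return [ok[i:i + MAX_BATCH_SIZE] for i in range(0, len(ok), MAX_BATCH_SIZE)]

small = {"Id": "f-1", "Title": "ok"}
huge = {"Id": "f-2", "Description": "x" * (300 * 1024)}  # over 240 KB
batches = batch_findings([small, huge])
print(len(batches), len(batches[0]))  # 1 1 -- the oversized finding was dropped
```

Validating sizes before calling the API avoids having an entire batch partially fail at the service side.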
Organization accounts do not receive invitations. Before you can use this action to invite a member, you must first use the When the account owner enables Security Hub and accepts the invitation to become a member account, the administrator account can view the findings generated from the member account. A list of automation rules and their metadata for the calling account. Provides details about one of the following actions that affects or that was taken on a resource: A remote IP address issued an Amazon Web Services API call A DNS request was received A remote IP address attempted to connect to an EC2 instance A remote IP address attempted a port probe on an EC2 instance Specifies that the rule action should update the Specifies that the automation rule action is an update to a finding field. One or more actions to update finding fields if a finding matches the defined criteria of the rule. The Amazon Resource Name (ARN) of a rule. Whether the rule is active after it is created. If this parameter is equal to An integer ranging from 1 to 1000 that represents the order in which the rule action is applied to findings. Security Hub applies rules with lower values for this parameter first. The name of the rule. A description of the rule. Specifies whether a rule is the last to be applied with respect to a finding that matches the rule criteria. This is useful when a finding matches the criteria for multiple rules, and each rule has different actions. If the value of this field is set to A set of Amazon Web Services Security Finding Format finding field attributes and corresponding expected values that Security Hub uses to filter findings. If a finding matches the conditions specified in this parameter, Security Hub applies the rule action to the finding. One or more actions to update finding fields if a finding matches the defined criteria of the rule. A timestamp that indicates when the rule was created. 
Uses the A timestamp that indicates when the rule was most recently updated. Uses the The principal that created a rule. Defines the configuration of an automation rule. The rule action will update the The rule action will update the The rule action will update the The rule action will update the The rule action will update the A list of findings that are related to a finding. Identifies the finding fields that the automation rule action will update when a finding matches the defined criteria. The Amazon Resource Name (ARN) for a third-party product that generated a finding in Security Hub. The Amazon Web Services account ID in which a finding was generated. The product-specific identifier for a finding. The identifier for the solution-specific component that generated a finding. One or more finding types in the format of namespace/category/classifier that classify a finding. For a list of namespaces, classifiers, and categories, see Types taxonomy for ASFF in the Security Hub User Guide. A timestamp that indicates when the potential security issue captured by a finding was first observed by the security findings product. Uses the A timestamp that indicates when the potential security issue captured by a finding was most recently observed by the security findings product. Uses the A timestamp that indicates when this finding record was created. Uses the A timestamp that indicates when the finding record was most recently updated. Uses the The likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. The level of importance that is assigned to the resources that are associated with a finding. A finding's title. A finding's description. Provides a URL that links to a page about the current finding in the finding product. Provides the name of the product that generated the finding. For control-based findings, the product name is Security Hub. The name of the company for the product that generated the finding. 
For control-based findings, the company is Amazon Web Services. The severity value of the finding. The type of resource that the finding pertains to. The identifier for the given resource type. For Amazon Web Services resources that are identified by Amazon Resource Names (ARNs), this is the ARN. For Amazon Web Services resources that lack ARNs, this is the identifier as defined by the Amazon Web Service that created the resource. For non-Amazon Web Services resources, this is a unique identifier that is associated with the resource. The partition in which the resource that the finding pertains to is located. A partition is a group of Amazon Web Services Regions. Each Amazon Web Services account is scoped to one partition. The Amazon Web Services Region where the resource that a finding pertains to is located. A list of Amazon Web Services tags associated with a resource at the time the finding was processed. Custom fields and values about the resource that a finding pertains to. The result of a security check. This field is only used for findings generated from controls. The security control ID for which a finding was generated. Security control IDs are the same across standards. The unique identifier of a standard in which a control is enabled. This field consists of the resource portion of the Amazon Resource Name (ARN) returned for a standard in the DescribeStandards API response. Provides the veracity of a finding. Provides information about the status of the investigation into a finding. Provides the current state of a finding. The ARN for the product that generated a related finding. The product-generated identifier for a related finding. The text of a user-defined note that's added to a finding. The timestamp of when the note was updated. Uses the date-time format specified in RFC 3339 section 5.6, Internet Date/Time Format. The value cannot contain spaces. For example, The principal that created a note. 
A list of user-defined name and value string pairs added to a finding. The criteria that determine which findings a rule applies to. The Amazon Resource Name (ARN) for the rule. Whether the rule is active after it is created. If this parameter is equal to An integer ranging from 1 to 1000 that represents the order in which the rule action is applied to findings. Security Hub applies rules with lower values for this parameter first. The name of the rule. A description of the rule. Specifies whether a rule is the last to be applied with respect to a finding that matches the rule criteria. This is useful when a finding matches the criteria for multiple rules, and each rule has different actions. If the value of this field is set to A timestamp that indicates when the rule was created. Uses the A timestamp that indicates when the rule was most recently updated. Uses the The principal that created a rule. Metadata for automation rules in the calling account. The response includes rules with a Information about the encryption configuration for X-Ray. A list of Amazon Resource Names (ARNs) for the rules that are to be deleted. A list of properly processed rule ARNs. A list of objects containing A list of rule ARNs to get details for. A list of rule details for the provided rule ARNs. A list of objects containing An array of ARNs for the rules that are to be updated. Optionally, you can also include A list of properly processed rule ARNs. A list of objects containing User-defined tags that help you label the purpose of a rule. Whether the rule is active after it is created. If this parameter is equal to An integer ranging from 1 to 1000 that represents the order in which the rule action is applied to findings. Security Hub applies rules with lower values for this parameter first. The name of the rule. A description of the rule. Specifies whether a rule is the last to be applied with respect to a finding that matches the rule criteria. 
This is useful when a finding matches the criteria for multiple rules, and each rule has different actions. If the value of this field is set to A set of ASFF finding field attributes and corresponding expected values that Security Hub uses to filter findings. If a finding matches the conditions specified in this parameter, Security Hub applies the rule action to the finding. One or more actions to update finding fields if a finding matches the conditions specified in The Amazon Resource Name (ARN) of the automation rule that you created. A token to specify where to start paginating the response. This is the The maximum number of rules to return in the response. This currently ranges from 1 to 100. Metadata for rules in the calling account. The response includes rules with a A pagination token for the response. A list of port ranges. The Amazon Resource Name (ARN) for the unprocessed automation rule. The error code associated with the unprocessed automation rule. An error message describing why a request didn't process a specific rule. A list of objects containing The Amazon Resource Name (ARN) for the rule. Whether the rule is active after it is created. If this parameter is equal to An integer ranging from 1 to 1000 that represents the order in which the rule action is applied to findings. Security Hub applies rules with lower values for this parameter first. A description of the rule. The name of the rule. Specifies whether a rule is the last to be applied with respect to a finding that matches the rule criteria. This is useful when a finding matches the criteria for multiple rules, and each rule has different actions. If the value of this field is set to A set of ASFF finding field attributes and corresponding expected values that Security Hub uses to filter findings. If a finding matches the conditions specified in this parameter, Security Hub applies the rule action to the finding. 
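The ordering semantics described above (rules with lower RuleOrder values are applied first, and a matching terminal rule stops later rules from acting on that finding) can be modeled roughly as follows. The rule dictionaries and the `Criteria`-as-callable shape are a simplified stand-in, not the real automation rule schema:

```python
def apply_rules(finding, rules):
    """Apply enabled rules in ascending RuleOrder; a matching terminal rule
    prevents any later rule from updating this finding."""
    for rule in sorted(rules, key=lambda r: r["RuleOrder"]):
        if rule.get("RuleStatus", "ENABLED") != "ENABLED":
            continue
        if rule["Criteria"](finding):
            finding.update(rule["Actions"])
            if rule.get("IsTerminal", False):
                break
    return finding

rules = [
    {"RuleOrder": 2, "Criteria": lambda f: True,
     "Actions": {"Workflow": "SUPPRESSED"}},
    {"RuleOrder": 1, "Criteria": lambda f: f["Severity"] == "LOW",
     "Actions": {"Note": "auto-triaged"}, "IsTerminal": True},
]
result = apply_rules({"Severity": "LOW"}, rules)
print(result)  # the terminal rule (order 1) wins, so Workflow is never set
```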
One or more actions to update finding fields if a finding matches the conditions specified in Specifies the parameters to update in an existing automation rule. Allocates an Elastic IP address to your Amazon Web Services account. After you allocate the Elastic IP address you can associate it with an instance or network interface. After you release an Elastic IP address, it is released to the IP address pool and can be allocated to a different Amazon Web Services account. You can allocate an Elastic IP address from an address pool owned by Amazon Web Services or from an address pool created from a public IPv4 address range that you have brought to Amazon Web Services for use with your Amazon Web Services resources using bring your own IP addresses (BYOIP). For more information, see Bring Your Own IP Addresses (BYOIP) in the Amazon Elastic Compute Cloud User Guide. [EC2-VPC] If you release an Elastic IP address, you might be able to recover it. You cannot recover an Elastic IP address that you released after it is allocated to another Amazon Web Services account. You cannot recover an Elastic IP address for EC2-Classic. To attempt to recover an Elastic IP address that you released, specify it in this operation. An Elastic IP address is for use either in the EC2-Classic platform or in a VPC. By default, you can allocate 5 Elastic IP addresses for EC2-Classic per Region and 5 Elastic IP addresses for EC2-VPC per Region. For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide. You can allocate a carrier IP address which is a public IP address from a telecommunication carrier, to a network interface which resides in a subnet in a Wavelength Zone (for example an EC2 instance). We are retiring EC2-Classic. We recommend that you migrate from EC2-Classic to a VPC. For more information, see Migrate from EC2-Classic to a VPC in the Amazon Elastic Compute Cloud User Guide. Allocates an Elastic IP address to your Amazon Web Services account. 
After you allocate the Elastic IP address you can associate it with an instance or network interface. After you release an Elastic IP address, it is released to the IP address pool and can be allocated to a different Amazon Web Services account. You can allocate an Elastic IP address from an address pool owned by Amazon Web Services or from an address pool created from a public IPv4 address range that you have brought to Amazon Web Services for use with your Amazon Web Services resources using bring your own IP addresses (BYOIP). For more information, see Bring Your Own IP Addresses (BYOIP) in the Amazon Elastic Compute Cloud User Guide. If you release an Elastic IP address, you might be able to recover it. You cannot recover an Elastic IP address that you released after it is allocated to another Amazon Web Services account. To attempt to recover an Elastic IP address that you released, specify it in this operation. For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide. You can allocate a carrier IP address which is a public IP address from a telecommunication carrier, to a network interface which resides in a subnet in a Wavelength Zone (for example an EC2 instance). Allocate a CIDR from an IPAM pool. In IPAM, an allocation is a CIDR assignment from an IPAM pool to another IPAM pool or to a resource. For more information, see Allocate CIDRs in the Amazon VPC IPAM User Guide. This action creates an allocation with strong consistency. The returned CIDR will not overlap with any other allocations from the same pool. Allocate a CIDR from an IPAM pool. The Region you use should be the IPAM pool locale. The locale is the Amazon Web Services Region where this IPAM pool is available for allocations. In IPAM, an allocation is a CIDR assignment from an IPAM pool to another IPAM pool or to a resource. For more information, see Allocate CIDRs in the Amazon VPC IPAM User Guide. This action creates an allocation with strong consistency. 
The returned CIDR will not overlap with any other allocations from the same pool. Associates an Elastic IP address, or carrier IP address (for instances that are in subnets in Wavelength Zones) with an instance or a network interface. Before you can use an Elastic IP address, you must allocate it to your account. An Elastic IP address is for use in either the EC2-Classic platform or in a VPC. For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide. [EC2-Classic, VPC in an EC2-VPC-only account] If the Elastic IP address is already associated with a different instance, it is disassociated from that instance and associated with the specified instance. If you associate an Elastic IP address with an instance that has an existing Elastic IP address, the existing address is disassociated from the instance, but remains allocated to your account. [VPC in an EC2-Classic account] If you don't specify a private IP address, the Elastic IP address is associated with the primary IP address. If the Elastic IP address is already associated with a different instance or a network interface, you get an error unless you allow reassociation. You cannot associate an Elastic IP address with an instance or network interface that has an existing Elastic IP address. [Subnets in Wavelength Zones] You can associate an IP address from the telecommunication carrier to the instance or network interface. You cannot associate an Elastic IP address with an interface in a different network border group. This is an idempotent operation. If you perform the operation more than once, Amazon EC2 doesn't return an error, and you may be charged for each time the Elastic IP address is remapped to the same instance. For more information, see the Elastic IP Addresses section of Amazon EC2 Pricing. We are retiring EC2-Classic. We recommend that you migrate from EC2-Classic to a VPC. 
For more information, see Migrate from EC2-Classic to a VPC in the Amazon Elastic Compute Cloud User Guide. Associates an Elastic IP address, or carrier IP address (for instances that are in subnets in Wavelength Zones) with an instance or a network interface. Before you can use an Elastic IP address, you must allocate it to your account. If the Elastic IP address is already associated with a different instance, it is disassociated from that instance and associated with the specified instance. If you associate an Elastic IP address with an instance that has an existing Elastic IP address, the existing address is disassociated from the instance, but remains allocated to your account. [Subnets in Wavelength Zones] You can associate an IP address from the telecommunication carrier to the instance or network interface. You cannot associate an Elastic IP address with an interface in a different network border group. This is an idempotent operation. If you perform the operation more than once, Amazon EC2 doesn't return an error, and you may be charged for each time the Elastic IP address is remapped to the same instance. For more information, see the Elastic IP Addresses section of Amazon EC2 Pricing. Creates an Amazon EBS-backed AMI from an Amazon EBS-backed instance that is either running or stopped. By default, when Amazon EC2 creates the new AMI, it reboots the instance so that it can take snapshots of the attached volumes while data is at rest, in order to ensure a consistent state. You can set the If you choose to bypass the shutdown and reboot process by setting the If you customized your instance with instance store volumes or Amazon EBS volumes in addition to the root device volume, the new AMI contains block device mapping information for those volumes. When you launch an instance from this new AMI, the instance automatically launches with those additional volumes. 
For more information, see Create an Amazon EBS-backed Linux AMI in the Amazon Elastic Compute Cloud User Guide. Creates an EC2 Instance Connect Endpoint. An EC2 Instance Connect Endpoint allows you to connect to a resource, without requiring the resource to have a public IPv4 address. For more information, see Connect to your resources without requiring a public IPv4 address using EC2 Instance Connect Endpoint in the Amazon EC2 User Guide. Deletes the specified Amazon FPGA Image (AFI). Deletes the specified EC2 Instance Connect Endpoint. Describes an Elastic IP address transfer. For more information, see Transfer Elastic IP addresses in the Amazon Virtual Private Cloud User Guide. Describes an Elastic IP address transfer. For more information, see Transfer Elastic IP addresses in the Amazon Virtual Private Cloud User Guide. When you transfer an Elastic IP address, there is a two-step handshake between the source and transfer Amazon Web Services accounts. When the source account starts the transfer, the transfer account has seven days to accept the Elastic IP address transfer. During those seven days, the source account can view the pending transfer by using this action. After seven days, the transfer expires and ownership of the Elastic IP address returns to the source account. Accepted transfers are visible to the source account for three days after the transfers have been accepted. Describes the specified Elastic IP addresses or all of your Elastic IP addresses. An Elastic IP address is for use in either the EC2-Classic platform or in a VPC. For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide. We are retiring EC2-Classic. We recommend that you migrate from EC2-Classic to a VPC. For more information, see Migrate from EC2-Classic to a VPC in the Amazon Elastic Compute Cloud User Guide. Describes the specified Elastic IP addresses or all of your Elastic IP addresses. 
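The Elastic IP association behavior described earlier (an address must be allocated to the account before it can be associated, and associating a new address with an instance disassociates the old one while leaving it allocated) can be modeled as a toy state machine. The class and identifiers here are illustrative, not the EC2 API:

```python
class AddressBook:
    """Toy model of the documented Elastic IP semantics: allocation, idempotent
    association, and implicit disassociation of a replaced address."""
    def __init__(self):
        self.allocated = set()
        self.assoc = {}  # allocation_id -> instance_id

    def allocate(self, allocation_id):
        self.allocated.add(allocation_id)

    def associate(self, allocation_id, instance_id):
        if allocation_id not in self.allocated:
            raise ValueError("address must be allocated to the account first")
        # Replacing an instance's existing address disassociates it,
        # but the old address stays in self.allocated (still billed/owned).
        for alloc, inst in list(self.assoc.items()):
            if inst == instance_id and alloc != allocation_id:
                del self.assoc[alloc]
        self.assoc[allocation_id] = instance_id  # idempotent re-association

book = AddressBook()
book.allocate("eipalloc-1")
book.allocate("eipalloc-2")
book.associate("eipalloc-1", "i-123")
book.associate("eipalloc-2", "i-123")  # eipalloc-1 is detached, not released
print(book.assoc, sorted(book.allocated))
```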
Describes the specified attribute of the specified instance. You can specify only one attribute at a time. Valid attribute values are: Describes the specified EC2 Instance Connect Endpoints or all EC2 Instance Connect Endpoints. Describes your Elastic IP addresses that are being moved to the EC2-VPC platform, or that are being restored to the EC2-Classic platform. This request does not return information about any other Elastic IP addresses in your account. This action is deprecated. Describes your Elastic IP addresses that are being moved from or being restored to the EC2-Classic platform. This request does not return information about any other Elastic IP addresses in your account. Disassociates an Elastic IP address from the instance or network interface it's associated with. An Elastic IP address is for use in either the EC2-Classic platform or in a VPC. For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide. We are retiring EC2-Classic. We recommend that you migrate from EC2-Classic to a VPC. For more information, see Migrate from EC2-Classic to a VPC in the Amazon Elastic Compute Cloud User Guide. This is an idempotent operation. If you perform the operation more than once, Amazon EC2 doesn't return an error. Disassociates an Elastic IP address from the instance or network interface it's associated with. This is an idempotent operation. If you perform the operation more than once, Amazon EC2 doesn't return an error. Get a list of all the CIDR allocations in an IPAM pool. If you use this action after AllocateIpamPoolCidr or ReleaseIpamPoolAllocation, note that all EC2 API actions follow an eventual consistency model. Get a list of all the CIDR allocations in an IPAM pool. The Region you use should be the IPAM pool locale. The locale is the Amazon Web Services Region where this IPAM pool is available for allocations. 
If you use this action after AllocateIpamPoolCidr or ReleaseIpamPoolAllocation, note that all EC2 API actions follow an eventual consistency model. Moves an Elastic IP address from the EC2-Classic platform to the EC2-VPC platform. The Elastic IP address must be allocated to your account for more than 24 hours, and it must not be associated with an instance. After the Elastic IP address is moved, it is no longer available for use in the EC2-Classic platform, unless you move it back using the RestoreAddressToClassic request. You cannot move an Elastic IP address that was originally allocated for use in the EC2-VPC platform to the EC2-Classic platform. We are retiring EC2-Classic. We recommend that you migrate from EC2-Classic to a VPC. For more information, see Migrate from EC2-Classic to a VPC in the Amazon Elastic Compute Cloud User Guide. This action is deprecated. Moves an Elastic IP address from the EC2-Classic platform to the EC2-VPC platform. The Elastic IP address must be allocated to your account for more than 24 hours, and it must not be associated with an instance. After the Elastic IP address is moved, it is no longer available for use in the EC2-Classic platform, unless you move it back using the RestoreAddressToClassic request. You cannot move an Elastic IP address that was originally allocated for use in the EC2-VPC platform to the EC2-Classic platform. Releases the specified Elastic IP address. [EC2-Classic, default VPC] Releasing an Elastic IP address automatically disassociates it from any instance that it's associated with. To disassociate an Elastic IP address without releasing it, use DisassociateAddress. We are retiring EC2-Classic. We recommend that you migrate from EC2-Classic to a VPC. For more information, see Migrate from EC2-Classic to a VPC in the Amazon Elastic Compute Cloud User Guide. [Nondefault VPC] You must use DisassociateAddress to disassociate the Elastic IP address before you can release it. 
Otherwise, Amazon EC2 returns an error ( After releasing an Elastic IP address, it is released to the IP address pool. Be sure to update your DNS records and any servers or devices that communicate with the address. If you attempt to release an Elastic IP address that you already released, you'll get an [EC2-VPC] After you release an Elastic IP address for use in a VPC, you might be able to recover it. For more information, see AllocateAddress. For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide. Releases the specified Elastic IP address. [Default VPC] Releasing an Elastic IP address automatically disassociates it from any instance that it's associated with. To disassociate an Elastic IP address without releasing it, use DisassociateAddress. [Nondefault VPC] You must use DisassociateAddress to disassociate the Elastic IP address before you can release it. Otherwise, Amazon EC2 returns an error ( After releasing an Elastic IP address, it is released to the IP address pool. Be sure to update your DNS records and any servers or devices that communicate with the address. If you attempt to release an Elastic IP address that you already released, you'll get an After you release an Elastic IP address, you might be able to recover it. For more information, see AllocateAddress. Release an allocation within an IPAM pool. You can only use this action to release manual allocations. To remove an allocation for a resource without deleting the resource, set its monitored state to false using ModifyIpamResourceCidr. For more information, see Release an allocation in the Amazon VPC IPAM User Guide. All EC2 API actions follow an eventual consistency model. Release an allocation within an IPAM pool. The Region you use should be the IPAM pool locale. The locale is the Amazon Web Services Region where this IPAM pool is available for allocations. You can only use this action to release manual allocations. 
To remove an allocation for a resource without deleting the resource, set its monitored state to false using ModifyIpamResourceCidr. For more information, see Release an allocation in the Amazon VPC IPAM User Guide. All EC2 API actions follow an eventual consistency model. Restores an Elastic IP address that was previously moved to the EC2-VPC platform back to the EC2-Classic platform. You cannot move an Elastic IP address that was originally allocated for use in EC2-VPC. The Elastic IP address must not be associated with an instance or network interface. We are retiring EC2-Classic. We recommend that you migrate from EC2-Classic to a VPC. For more information, see Migrate from EC2-Classic to a VPC in the Amazon Elastic Compute Cloud User Guide. This action is deprecated. Restores an Elastic IP address that was previously moved to the EC2-VPC platform back to the EC2-Classic platform. You cannot move an Elastic IP address that was originally allocated for use in EC2-VPC. The Elastic IP address must not be associated with an instance or network interface. The ID representing the allocation of the address for use with EC2-VPC. The ID representing the allocation of the address. The ID representing the association of the address with an instance in a VPC. The ID representing the association of the address with an instance. Indicates whether this Elastic IP address is for use with instances in EC2-Classic ( The network ( Indicates whether the Elastic IP address is for use with instances in a VPC or instances in EC2-Classic. Default: If the Region supports EC2-Classic, the default is The network ( [EC2-VPC] The Elastic IP address to recover or an IPv4 address from an address pool. The Elastic IP address to recover or an IPv4 address from an address pool. [EC2-VPC] The ID that Amazon Web Services assigns to represent the allocation of the Elastic IP address for use with instances in a VPC. The ID that represents the allocation of the Elastic IP address. 
Indicates whether the Elastic IP address is for use with instances in a VPC ( The network ( The carrier IP address. This option is only available for network interfaces which reside in a subnet in a Wavelength Zone (for example an EC2 instance). The carrier IP address. This option is only available for network interfaces that reside in a subnet in a Wavelength Zone. [EC2-VPC] The allocation ID. This is required for EC2-VPC. The allocation ID. This is required. The ID of the instance. The instance must have exactly one attached network interface. For EC2-VPC, you can specify either the instance ID or the network interface ID, but not both. For EC2-Classic, you must specify an instance ID and the instance must be in the running state. The ID of the instance. The instance must have exactly one attached network interface. You can specify either the instance ID or the network interface ID, but not both. [EC2-Classic] The Elastic IP address to associate with the instance. This is required for EC2-Classic. Deprecated. [EC2-VPC] For a VPC in an EC2-Classic account, specify true to allow an Elastic IP address that is already associated with an instance or network interface to be reassociated with the specified instance or network interface. Otherwise, the operation fails. In a VPC in an EC2-VPC-only account, reassociation is automatic, therefore you can specify false to ensure the operation fails if the Elastic IP address is already associated with another resource. Reassociation is automatic, but you can specify false to ensure the operation fails if the Elastic IP address is already associated with another resource. [EC2-VPC] The ID of the network interface. If the instance has more than one network interface, you must specify a network interface ID. For EC2-VPC, you can specify either the instance ID or the network interface ID, but not both. The ID of the network interface. If the instance has more than one network interface, you must specify a network interface ID. 
You can specify either the instance ID or the network interface ID, but not both. [EC2-VPC] The primary or secondary private IP address to associate with the Elastic IP address. If no private IP address is specified, the Elastic IP address is associated with the primary private IP address. The primary or secondary private IP address to associate with the Elastic IP address. If no private IP address is specified, the Elastic IP address is associated with the primary private IP address. [EC2-VPC] The ID that represents the association of the Elastic IP address with an instance. The ID that represents the association of the Elastic IP address with an instance. Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is The ID of the subnet in which to create the EC2 Instance Connect Endpoint. One or more security groups to associate with the endpoint. If you don't specify a security group, the default security group for your VPC will be associated with the endpoint. Indicates whether your client's IP address is preserved as the source. The value is If If Default: Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. The tags to apply to the EC2 Instance Connect Endpoint during creation. Information about the EC2 Instance Connect Endpoint. Unique, case-sensitive idempotency token provided by the client in the the request. The type of network interface. The default is The only supported values are The type of network interface. The default is The only supported values are Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is The ID of the EC2 Instance Connect Endpoint to delete. Information about the EC2 Instance Connect Endpoint. One or more filters. 
Filter names and values are case-sensitive. One or more filters. Filter names and values are case-sensitive. [EC2-VPC] Information about the allocation IDs. Information about the allocation IDs. Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is The maximum number of items to return for this request. To get the next page of items, make another request with the token returned in the output. For more information, see Pagination. The token returned from a previous paginated request. Pagination continues from the end of the items returned by the previous request. One or more filters. One or more EC2 Instance Connect Endpoint IDs. Information about the EC2 Instance Connect Endpoints. The token to include in another request to get the next page of items. This value is [EC2-VPC] The association ID. Required for EC2-VPC. The association ID. This parameter is required. [EC2-Classic] The Elastic IP address. Required for EC2-Classic. Deprecated. The ID of the Amazon Web Services account that created the EC2 Instance Connect Endpoint. The ID of the EC2 Instance Connect Endpoint. The Amazon Resource Name (ARN) of the EC2 Instance Connect Endpoint. The current state of the EC2 Instance Connect Endpoint. The message for the current state of the EC2 Instance Connect Endpoint. Can include a failure message. The DNS name of the EC2 Instance Connect Endpoint. The ID of the elastic network interface that Amazon EC2 automatically created when creating the EC2 Instance Connect Endpoint. The ID of the VPC in which the EC2 Instance Connect Endpoint was created. The Availability Zone of the EC2 Instance Connect Endpoint. The date and time that the EC2 Instance Connect Endpoint was created. The ID of the subnet in which the EC2 Instance Connect Endpoint was created. Indicates whether your client's IP address is preserved as the source. 
The value is If If Default: The security groups associated with the endpoint. If you didn't specify a security group, the default security group for your VPC is associated with the endpoint. The tags assigned to the EC2 Instance Connect Endpoint. The EC2 Instance Connect Endpoint. Information about the number of instances that can be launched onto the Dedicated Host. The status of the Elastic IP address that's being moved to the EC2-VPC platform, or restored to the EC2-Classic platform. The status of the Elastic IP address that's being moved or restored. Describes the status of a moving Elastic IP address. We are retiring EC2-Classic. We recommend that you migrate from EC2-Classic to a VPC. For more information, see Migrate from EC2-Classic to a VPC in the Amazon Elastic Compute Cloud User Guide. This action is deprecated. Describes the status of a moving Elastic IP address. [EC2-VPC] The allocation ID. Required for EC2-VPC. The allocation ID. This parameter is required. [EC2-Classic] The Elastic IP address. Required for EC2-Classic. Deprecated.ByteBuffer,
* from the beginning to the buffer's limit; or null if the input is null.
diff --git a/utils/src/test/java/software/amazon/awssdk/utils/BinaryUtilsTest.java b/utils/src/test/java/software/amazon/awssdk/utils/BinaryUtilsTest.java
index 5f255d347adc..4e416ea9e3b6 100644
--- a/utils/src/test/java/software/amazon/awssdk/utils/BinaryUtilsTest.java
+++ b/utils/src/test/java/software/amazon/awssdk/utils/BinaryUtilsTest.java
@@ -16,9 +16,11 @@
package software.amazon.awssdk.utils;
import static org.assertj.core.api.Assertions.assertThat;
+import static org.junit.jupiter.api.Assertions.assertArrayEquals;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertNull;
+import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;
import java.nio.ByteBuffer;
@@ -32,13 +34,11 @@ public class BinaryUtilsTest {
public void testHex() {
{
String hex = BinaryUtils.toHex(new byte[] {0});
- System.out.println(hex);
String hex2 = Base16Lower.encodeAsString(new byte[] {0});
assertEquals(hex, hex2);
}
{
String hex = BinaryUtils.toHex(new byte[] {-1});
- System.out.println(hex);
String hex2 = Base16Lower.encodeAsString(new byte[] {-1});
assertEquals(hex, hex2);
}
@@ -169,7 +169,7 @@ public void testCopyRemainingBytesFrom_nullBuffer() {
@Test
public void testCopyRemainingBytesFrom_noRemainingBytes() {
ByteBuffer bb = ByteBuffer.allocate(1);
- bb.put(new byte[]{1});
+ bb.put(new byte[] {1});
bb.flip();
bb.get();
@@ -180,7 +180,7 @@ public void testCopyRemainingBytesFrom_noRemainingBytes() {
@Test
public void testCopyRemainingBytesFrom_fullBuffer() {
ByteBuffer bb = ByteBuffer.allocate(4);
- bb.put(new byte[]{1, 2, 3, 4});
+ bb.put(new byte[] {1, 2, 3, 4});
bb.flip();
byte[] copy = BinaryUtils.copyRemainingBytesFrom(bb);
@@ -191,7 +191,7 @@ public void testCopyRemainingBytesFrom_fullBuffer() {
@Test
public void testCopyRemainingBytesFrom_partiallyReadBuffer() {
ByteBuffer bb = ByteBuffer.allocate(4);
- bb.put(new byte[]{1, 2, 3, 4});
+ bb.put(new byte[] {1, 2, 3, 4});
bb.flip();
bb.get();
@@ -201,4 +201,137 @@ public void testCopyRemainingBytesFrom_partiallyReadBuffer() {
assertThat(bb).isEqualTo(ByteBuffer.wrap(copy));
assertThat(copy).hasSize(2);
}
+
+ @Test
+ public void testImmutableCopyOfByteBuffer() {
+ ByteBuffer sourceBuffer = ByteBuffer.allocate(4);
+ byte[] originalBytesInSource = {1, 2, 3, 4};
+ sourceBuffer.put(originalBytesInSource);
+ sourceBuffer.flip();
+
+ ByteBuffer immutableCopy = BinaryUtils.immutableCopyOf(sourceBuffer);
+
+ byte[] bytesInSourceAfterCopy = {-1, -2, -3, -4};
+ sourceBuffer.put(bytesInSourceAfterCopy);
+ sourceBuffer.flip();
+
+ assertTrue(immutableCopy.isReadOnly());
+ byte[] fromImmutableCopy = new byte[originalBytesInSource.length];
+ immutableCopy.get(fromImmutableCopy);
+ assertArrayEquals(originalBytesInSource, fromImmutableCopy);
+
+ assertEquals(0, sourceBuffer.position());
+ byte[] fromSource = new byte[bytesInSourceAfterCopy.length];
+ sourceBuffer.get(fromSource);
+ assertArrayEquals(bytesInSourceAfterCopy, fromSource);
+ }
+
+ @Test
+ public void testImmutableCopyOfByteBuffer_nullBuffer() {
+ assertNull(BinaryUtils.immutableCopyOf(null));
+ }
+
+ @Test
+ public void testImmutableCopyOfByteBuffer_partiallyReadBuffer() {
+ ByteBuffer sourceBuffer = ByteBuffer.allocate(4);
+ byte[] bytes = {1, 2, 3, 4};
+ sourceBuffer.put(bytes);
+ sourceBuffer.position(2);
+
+ ByteBuffer immutableCopy = BinaryUtils.immutableCopyOf(sourceBuffer);
+
+ assertEquals(sourceBuffer.position(), immutableCopy.position());
+ immutableCopy.rewind();
+ byte[] fromImmutableCopy = new byte[bytes.length];
+ immutableCopy.get(fromImmutableCopy);
+ assertArrayEquals(bytes, fromImmutableCopy);
+ }
+
+ @Test
+ public void testImmutableCopyOfRemainingByteBuffer() {
+ ByteBuffer sourceBuffer = ByteBuffer.allocate(4);
+ byte[] originalBytesInSource = {1, 2, 3, 4};
+ sourceBuffer.put(originalBytesInSource);
+ sourceBuffer.flip();
+
+ ByteBuffer immutableCopy = BinaryUtils.immutableCopyOfRemaining(sourceBuffer);
+
+ byte[] bytesInSourceAfterCopy = {-1, -2, -3, -4};
+ sourceBuffer.put(bytesInSourceAfterCopy);
+ sourceBuffer.flip();
+
+ assertTrue(immutableCopy.isReadOnly());
+ byte[] fromImmutableCopy = new byte[originalBytesInSource.length];
+ immutableCopy.get(fromImmutableCopy);
+ assertArrayEquals(originalBytesInSource, fromImmutableCopy);
+
+ assertEquals(0, sourceBuffer.position());
+ byte[] fromSource = new byte[bytesInSourceAfterCopy.length];
+ sourceBuffer.get(fromSource);
+ assertArrayEquals(bytesInSourceAfterCopy, fromSource);
+ }
+
+ @Test
+ public void testImmutableCopyOfByteBufferRemaining_nullBuffer() {
+ assertNull(BinaryUtils.immutableCopyOfRemaining(null));
+ }
+
+ @Test
+ public void testImmutableCopyOfByteBufferRemaining_partiallyReadBuffer() {
+ ByteBuffer sourceBuffer = ByteBuffer.allocate(4);
+ byte[] bytes = {1, 2, 3, 4};
+ sourceBuffer.put(bytes);
+ sourceBuffer.position(2);
+
+ ByteBuffer immutableCopy = BinaryUtils.immutableCopyOfRemaining(sourceBuffer);
+
+ assertEquals(2, immutableCopy.capacity());
+ assertEquals(2, immutableCopy.remaining());
+ assertEquals(0, immutableCopy.position());
+ assertEquals((byte) 3, immutableCopy.get());
+ assertEquals((byte) 4, immutableCopy.get());
+ }
+
+ @Test
+ public void testToNonDirectBuffer() {
+ ByteBuffer bb = ByteBuffer.allocateDirect(4);
+ byte[] expected = {1, 2, 3, 4};
+ bb.put(expected);
+ bb.flip();
+
+ ByteBuffer nonDirectBuffer = BinaryUtils.toNonDirectBuffer(bb);
+
+ assertFalse(nonDirectBuffer.isDirect());
+ byte[] bytes = new byte[expected.length];
+ nonDirectBuffer.get(bytes);
+ assertArrayEquals(expected, bytes);
+ }
+
+ @Test
+ public void testToNonDirectBuffer_nullBuffer() {
+ assertNull(BinaryUtils.toNonDirectBuffer(null));
+ }
+
+ @Test
+ public void testToNonDirectBuffer_partiallyReadBuffer() {
+ ByteBuffer sourceBuffer = ByteBuffer.allocateDirect(4);
+ byte[] bytes = {1, 2, 3, 4};
+ sourceBuffer.put(bytes);
+ sourceBuffer.position(2);
+
+ ByteBuffer nonDirectBuffer = BinaryUtils.toNonDirectBuffer(sourceBuffer);
+
+ assertEquals(sourceBuffer.position(), nonDirectBuffer.position());
+ nonDirectBuffer.rewind();
+ byte[] fromNonDirectBuffer = new byte[bytes.length];
+ nonDirectBuffer.get(fromNonDirectBuffer);
+ assertArrayEquals(bytes, fromNonDirectBuffer);
+ }
+
+ @Test
+ public void testToNonDirectBuffer_nonDirectBuffer() {
+ ByteBuffer nonDirectBuffer = ByteBuffer.allocate(0);
+ assertThrows(IllegalArgumentException.class, () -> BinaryUtils.toNonDirectBuffer(nonDirectBuffer));
+ }
+
}
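The new tests above pin down the copy semantics of `BinaryUtils.immutableCopyOf`/`immutableCopyOfRemaining`: a deep, read-only, non-direct copy whose contents survive later writes to the source, taken without disturbing the source buffer's position. A minimal self-contained sketch of that contract (the method body here is illustrative, not the SDK's actual implementation):

```java
import java.nio.ByteBuffer;

public class ImmutableCopySketch {

    /**
     * Deep-copies the remaining bytes of a buffer into a new read-only
     * buffer, leaving the source buffer's position untouched. Mirrors the
     * behavior the new tests assert for immutableCopyOfRemaining.
     */
    public static ByteBuffer immutableCopyOfRemaining(ByteBuffer source) {
        if (source == null) {
            return null;
        }
        ByteBuffer copy = ByteBuffer.allocate(source.remaining());
        // Read through a duplicate so the source's position is not modified.
        copy.put(source.duplicate());
        copy.flip();
        return copy.asReadOnlyBuffer();
    }

    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});
        src.position(2);

        ByteBuffer copy = immutableCopyOfRemaining(src);

        // Only bytes 3 and 4 remain; the copy is rewound and read-only,
        // and the source position is unchanged.
        System.out.println(copy.isReadOnly());             // true
        System.out.println(copy.remaining());              // 2
        System.out.println(copy.get() + " " + copy.get()); // 3 4
        System.out.println(src.position());                // 2
    }
}
```

Copying through `duplicate()` is what keeps the source position stable, matching the `assertEquals(sourceBuffer.position(), …)` checks in the tests.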
From 8aa76b407fdde36a5f3c4a801e5b22ab3ef2e06f Mon Sep 17 00:00:00 2001
From: AWS <>
Date: Fri, 9 Jun 2023 18:07:06 +0000
Subject: [PATCH 075/317] Amazon Connect Service Update: This release adds
search APIs for Prompts, Quick Connects and Hours of Operations, which can be
used to search for those resources within a Connect Instance.
---
.../feature-AmazonConnectService-1e94a24.json | 6 +
.../codegen-resources/paginators-1.json | 27 ++
.../codegen-resources/service-2.json | 298 +++++++++++++++++-
3 files changed, 326 insertions(+), 5 deletions(-)
create mode 100644 .changes/next-release/feature-AmazonConnectService-1e94a24.json
diff --git a/.changes/next-release/feature-AmazonConnectService-1e94a24.json b/.changes/next-release/feature-AmazonConnectService-1e94a24.json
new file mode 100644
index 000000000000..0742339c56e2
--- /dev/null
+++ b/.changes/next-release/feature-AmazonConnectService-1e94a24.json
@@ -0,0 +1,6 @@
+{
+ "type": "feature",
+ "category": "Amazon Connect Service",
+ "contributor": "",
+ "description": "This release adds search APIs for Prompts, Quick Connects and Hours of Operations, which can be used to search for those resources within a Connect Instance."
+}
diff --git a/services/connect/src/main/resources/codegen-resources/paginators-1.json b/services/connect/src/main/resources/codegen-resources/paginators-1.json
index 230d2e1b39b2..e6c58c5cc27a 100644
--- a/services/connect/src/main/resources/codegen-resources/paginators-1.json
+++ b/services/connect/src/main/resources/codegen-resources/paginators-1.json
@@ -228,6 +228,24 @@
"output_token": "NextToken",
"result_key": "AvailableNumbersList"
},
+ "SearchHoursOfOperations": {
+ "input_token": "NextToken",
+ "limit_key": "MaxResults",
+ "non_aggregate_keys": [
+ "ApproximateTotalCount"
+ ],
+ "output_token": "NextToken",
+ "result_key": "HoursOfOperations"
+ },
+ "SearchPrompts": {
+ "input_token": "NextToken",
+ "limit_key": "MaxResults",
+ "non_aggregate_keys": [
+ "ApproximateTotalCount"
+ ],
+ "output_token": "NextToken",
+ "result_key": "Prompts"
+ },
"SearchQueues": {
"input_token": "NextToken",
"limit_key": "MaxResults",
@@ -237,6 +255,15 @@
"output_token": "NextToken",
"result_key": "Queues"
},
+ "SearchQuickConnects": {
+ "input_token": "NextToken",
+ "limit_key": "MaxResults",
+ "non_aggregate_keys": [
+ "ApproximateTotalCount"
+ ],
+ "output_token": "NextToken",
+ "result_key": "QuickConnects"
+ },
"SearchRoutingProfiles": {
"input_token": "NextToken",
"limit_key": "MaxResults",
diff --git a/services/connect/src/main/resources/codegen-resources/service-2.json b/services/connect/src/main/resources/codegen-resources/service-2.json
index d3ce50dd21ba..af72e81e7ac4 100644
--- a/services/connect/src/main/resources/codegen-resources/service-2.json
+++ b/services/connect/src/main/resources/codegen-resources/service-2.json
@@ -2272,6 +2272,40 @@
],
"documentation":"[Garbled model documentation: fused strings from the new Connect search APIs (searchable FieldName values such as name, description, timezone, and resourceID, with contains queries of 2-25 characters on name and description), ACM-PCA notes on KeyStorageSecurityStandard and ValidityNotBefore, and the Rekognition AssociateFaces description covering UserMatchThreshold, AssociatedFace, UnsuccessfulFaceAssociations, and UserStatus.]"
+ },
"CompareFaces":{
"name":"CompareFaces",
"http":{
@@ -171,6 +192,27 @@
],
"documentation":"[Garbled model documentation: fused Rekognition strings covering CreateStreamProcessor (FaceSearch and ConnectedHome settings, RegionsOfInterest), CreateUser, DeleteStreamProcessor, DeleteUser, DetectText, DisassociateFaces, ListUsers, SearchFacesByImage, SearchUsers/SearchUsersByImage, ClientRequestToken idempotency notes, and PutProjectPolicy.]"
+ "documentation":"
"
}
From 06c969af757e758cacf07b2e503b93b43751a737 Mon Sep 17 00:00:00 2001
From: AWS <>
Date: Mon, 12 Jun 2023 18:07:19 +0000
Subject: [PATCH 082/317] Amazon DynamoDB Update: Documentation updates for
DynamoDB
---
.changes/next-release/feature-AmazonDynamoDB-981ae1b.json | 6 ++++++
.../resources/codegen-resources/dynamodb/service-2.json | 8 ++++----
2 files changed, 10 insertions(+), 4 deletions(-)
create mode 100644 .changes/next-release/feature-AmazonDynamoDB-981ae1b.json
diff --git a/.changes/next-release/feature-AmazonDynamoDB-981ae1b.json b/.changes/next-release/feature-AmazonDynamoDB-981ae1b.json
new file mode 100644
index 000000000000..688659934481
--- /dev/null
+++ b/.changes/next-release/feature-AmazonDynamoDB-981ae1b.json
@@ -0,0 +1,6 @@
+{
+ "type": "feature",
+ "category": "Amazon DynamoDB",
+ "contributor": "",
+ "description": "Documentation updates for DynamoDB"
+}
diff --git a/services/dynamodb/src/main/resources/codegen-resources/dynamodb/service-2.json b/services/dynamodb/src/main/resources/codegen-resources/dynamodb/service-2.json
index 6ced961e524a..503507b44955 100644
--- a/services/dynamodb/src/main/resources/codegen-resources/dynamodb/service-2.json
+++ b/services/dynamodb/src/main/resources/codegen-resources/dynamodb/service-2.json
@@ -41,7 +41,7 @@
{"shape":"RequestLimitExceeded"},
{"shape":"InternalServerError"}
],
- "documentation":"BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.BatchGetItem returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.BatchGetItem returns a ValidationException with the message \"Too many items requested for the BatchGetItem call.\"UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.BatchGetItem returns a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.BatchGetItem may retrieve items in parallel.ProjectionExpression parameter.
+ "documentation":"BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.BatchGetItem returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, more than 1MB per partition is requested, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.BatchGetItem returns a ValidationException with the message \"Too many items requested for the BatchGetItem call.\"UnprocessedKeys value so you can get the next page of results. 
If desired, your application can include its own logic to assemble the pages of results into one dataset.BatchGetItem returns a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.BatchGetItem may retrieve items in parallel.ProjectionExpression parameter.CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime. CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime. ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide.PAY_PER_REQUEST the value is set to 0.ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide.PAY_PER_REQUEST the value is set to 0.ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide.PAY_PER_REQUEST the value is set to 0.ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide.PAY_PER_REQUEST the value is set to 0.UpdateTable operation.Scratch_1 deployment type.CreateDataRepositoryAssociation isn't supported on Amazon File Cache resources. To create a DRA on Amazon File Cache, use the CreateFileCache operation.scratch_1 deployment type.CreateDataRepositoryAssociation isn't supported on Amazon File Cache resources. To create a DRA on Amazon File Cache, use the CreateFileCache operation.Scratch_1 deployment type.scratch_1 deployment type.AssociationIds values are provided in the request, or if filters are used in the request. 
Data repository associations are supported on Amazon File Cache resources and all Amazon FSx for Lustre file systems excluding Scratch_1 deployment types.file-system-id filter with the ID of the file system) or caches (use the file-cache-id filter with the ID of the cache), or data repository associations for a specific repository type (use the data-repository-type filter with a value of S3 or NFS). If you don't use filters, the response returns all data repository associations owned by your Amazon Web Services account in the Amazon Web Services Region of the endpoint that you're calling.MaxResults parameter to limit the number of data repository associations returned in a response. If more data repository associations remain, a NextToken value is returned in the response. In this case, send a later request with the NextToken request parameter set to the value of NextToken from the last response.AssociationIds values are provided in the request, or if filters are used in the request. Data repository associations are supported on Amazon File Cache resources and all FSx for Lustre 2.12 and newer file systems, excluding scratch_1 deployment type.file-system-id filter with the ID of the file system) or caches (use the file-cache-id filter with the ID of the cache), or data repository associations for a specific repository type (use the data-repository-type filter with a value of S3 or NFS). If you don't use filters, the response returns all data repository associations owned by your Amazon Web Services account in the Amazon Web Services Region of the endpoint that you're calling.MaxResults parameter to limit the number of data repository associations returned in a response. If more data repository associations remain, a NextToken value is returned in the response. In this case, send a later request with the NextToken request parameter set to the value of NextToken from the last response.Scratch_1 deployment type.scratch_1 deployment type.
- SINGLE_AZ_1- (Default) Creates file systems with throughput capacities of 64 - 4,096 MB/s. Single_AZ_1 is available in all Amazon Web Services Regions where Amazon FSx for OpenZFS is available, except US West (Oregon).SINGLE_AZ_2- Creates file systems with throughput capacities of 160 - 10,240 MB/s using an NVMe L2ARC cache. Single_AZ_2 is available only in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Amazon Web Services Regions.
+ SINGLE_AZ_1- (Default) Creates file systems with throughput capacities of 64 - 4,096 MBps. Single_AZ_1 is available in all Amazon Web Services Regions where Amazon FSx for OpenZFS is available, except US West (Oregon).SINGLE_AZ_2- Creates file systems with throughput capacities of 160 - 10,240 MBps using an NVMe L2ARC cache. Single_AZ_2 is available only in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Amazon Web Services Regions.
- SINGLE_AZ_1, valid values are 64, 128, 256, 512, 1024, 2048, 3072, or 4096 MB/s.SINGLE_AZ_2, valid values are 160, 320, 640, 1280, 2560, 3840, 5120, 7680, or 10240 MB/s.
+ SINGLE_AZ_1, valid values are 64, 128, 256, 512, 1024, 2048, 3072, or 4096 MBps.SINGLE_AZ_2, valid values are 160, 320, 640, 1280, 2560, 3840, 5120, 7680, or 10240 MBps.
CreateDataRepositoryAssociation UpdateDataRepositoryAssociation DescribeDataRepositoryAssociations Scratch_1 deployment types.
CreateDataRepositoryAssociation UpdateDataRepositoryAssociation DescribeDataRepositoryAssociations scratch_1 deployment type.AUTOMATIC) or was provisioned by the customer (USER_PROVISIONED).AUTOMATIC setting of SSD IOPS of 3 IOPS per GB of storage capacity, , or if it using a USER_PROVISIONED value.fsxadmin user account to access the NetApp ONTAP CLI and REST API. The password value is always redacted in the response.OrganizationalUnitDistinguishedName.OrganizationalUnitDistinguishedName.fsxadmin user.fsxadmin user by entering a new password. You use the fsxadmin user to access the NetApp ONTAP CLI and REST API to manage your file system resources. For more information, see Managing resources using NetApp Applicaton.AUTOMATIC or USER_PROVISIONED), and in the case of USER_PROVISIONED IOPS, the total number of SSD IOPS provisioned.AUTOMATIC or USER_PROVISIONED), and in the case of USER_PROVISIONED IOPS, the total number of SSD IOPS provisioned. For more information, see Updating SSD storage capacity and IOPS.
SCRATCH_2, PERSISTENT_1, and PERSISTENT_2 SSD deployment types, valid values are in multiples of 2400 GiB. The value must be greater than the current storage capacity.PERSISTENT HDD file systems, valid values are multiples of 6000 GiB for 12-MBps throughput per TiB file systems and multiples of 1800 GiB for 40-MBps throughput per TiB file systems. The values must be greater than the current storage capacity.SCRATCH_1 file systems, you can't increase the storage capacity.
SCRATCH_2, PERSISTENT_1, and PERSISTENT_2 SSD deployment types, valid values are in multiples of 2400 GiB. The value must be greater than the current storage capacity.PERSISTENT HDD file systems, valid values are multiples of 6000 GiB for 12-MBps throughput per TiB file systems and multiples of 1800 GiB for 40-MBps throughput per TiB file systems. The values must be greater than the current storage capacity.SCRATCH_1 file systems, you can't increase the storage capacity.UpdateFileSystem operation.NetBiosName to which an SVM is joined.
"
+ "documentation":"image/png, image/jpeg, image/* text/csv;header=present
"
},
"CompressionType":{
"shape":"CompressionType",
@@ -13807,7 +13807,7 @@
"members":{
"PipelineName":{
"shape":"PipelineNameOrArn",
- "documentation":"image/png, image/jpeg, or image/*. The default value is image/*.text/csv;header=present or x-application/vnd.amazon+parquet. The default value is text/csv;header=present.InstanceType. Choose an instance count larger than 1 for distributed training algorithms. See SageMaker distributed training jobs for more information.InstanceType. Choose an instance count larger than 1 for distributed training algorithms. See Step 2: Launch a SageMaker Distributed Training Job Using the SageMaker Python SDK for more information.NextToken is provided in the response. To resume pagination, provide the NextToken value in the as part of a subsequent call. The default value is 10.NextToken is provided in the response. To resume pagination, provide the NextToken value in the as part of a subsequent call. The default value is 10.NextToken is provided in the response. To resume pagination, provide the NextToken value in the as part of a subsequent call. The default value is 10.NextToken is provided in the response. To resume pagination, provide the NextToken value in the as part of a subsequent call. The default value is 10.NextToken is provided in the response. To resume pagination, provide the NextToken value in the as part of a subsequent call. The default value is 10.NextToken is provided in the response. To resume pagination, provide the NextToken value in the as part of a subsequent call. The default value is 10.Component or ComponentChild. Use for the workflow feature in Amplify Studio that allows you to bind events and actions to components. ActionParameters defines the action that is performed when an event occurs on the component.belongsTo field on the related data model. 
@index directive is supported for a hasMany data relationship.CodegenJobAsset to use for the code generation job.DataStore.CodegenGenericDataModel.CodegenGenericDataEnum.CodegenGenericDataNonModel.ReactStartCodegenJobData object.ConnectionProperties for the outbound connection.CreateOutboundConnection operation.search-imdb-movies-oopcnjfn6ugo.eu-west-1.es.amazonaws.com or doc-imdb-movies-oopcnjfn6u.eu-west-1.es.amazonaws.com.
",
+ "enum":[
+ "ENABLED",
+ "DISABLED"
+ ]
+ },
"SlotList":{
"type":"list",
"member":{"shape":"Long"}
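The updated List* documentation above repeatedly describes the same pagination contract: when more items exist than the page size, the response carries a NextToken, and the caller passes it back to resume. A hedged sketch of that loop, with an illustrative offset-based token rather than any service's real token format:

```python
# Hedged sketch of the NextToken pagination contract; function and token
# encoding are illustrative assumptions, not a real AWS API.
def list_items(items, max_results=10, next_token=None):
    """Return one page plus a NextToken when more items remain."""
    start = int(next_token) if next_token else 0
    page = items[start:start + max_results]
    more = start + max_results < len(items)
    return {"Items": page, "NextToken": str(start + max_results) if more else None}

data = list(range(25))
collected, token = [], None
while True:
    resp = list_items(data, max_results=10, next_token=token)
    collected.extend(resp["Items"])
    token = resp["NextToken"]
    if token is None:  # no token means the last page was returned
        break
assert collected == data  # pages of 10, 10, and 5 reassembled in order
```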
From 6292bddfb1922cff205da32948c897bd9c861234 Mon Sep 17 00:00:00 2001
From: AWS <>
Date: Mon, 12 Jun 2023 18:07:31 +0000
Subject: [PATCH 087/317] Amazon DynamoDB Streams Update: Documentation updates
for DynamoDB Streams
---
...feature-AmazonDynamoDBStreams-cd31874.json | 6 +
.../dynamodbstreams/endpoint-rule-set.json | 577 ++++---
.../dynamodbstreams/endpoint-tests.json | 1499 ++---------------
.../dynamodbstreams/service-2.json | 10 +-
4 files changed, 555 insertions(+), 1537 deletions(-)
create mode 100644 .changes/next-release/feature-AmazonDynamoDBStreams-cd31874.json
diff --git a/.changes/next-release/feature-AmazonDynamoDBStreams-cd31874.json b/.changes/next-release/feature-AmazonDynamoDBStreams-cd31874.json
new file mode 100644
index 000000000000..6ec28e0ef2c9
--- /dev/null
+++ b/.changes/next-release/feature-AmazonDynamoDBStreams-cd31874.json
@@ -0,0 +1,6 @@
+{
+ "type": "feature",
+ "category": "Amazon DynamoDB Streams",
+ "contributor": "",
+ "description": "Documentation updates for DynamoDB Streams"
+}
diff --git a/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/endpoint-rule-set.json b/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/endpoint-rule-set.json
index d086a70a8612..911bf62628e8 100644
--- a/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/endpoint-rule-set.json
+++ b/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/endpoint-rule-set.json
@@ -3,7 +3,7 @@
"parameters": {
"Region": {
"builtIn": "AWS::Region",
- "required": true,
+ "required": false,
"documentation": "The AWS region used to dispatch the request.",
"type": "String"
},
@@ -32,13 +32,12 @@
{
"conditions": [
{
- "fn": "aws.partition",
+ "fn": "isSet",
"argv": [
{
- "ref": "Region"
+ "ref": "Endpoint"
}
- ],
- "assign": "PartitionResult"
+ ]
}
],
"type": "tree",
@@ -46,14 +45,20 @@
{
"conditions": [
{
- "fn": "isSet",
+ "fn": "booleanEquals",
"argv": [
{
- "ref": "Endpoint"
- }
+ "ref": "UseFIPS"
+ },
+ true
]
}
],
+ "error": "Invalid Configuration: FIPS and custom endpoint are not supported",
+ "type": "error"
+ },
+ {
+ "conditions": [],
"type": "tree",
"rules": [
{
@@ -62,67 +67,42 @@
"fn": "booleanEquals",
"argv": [
{
- "ref": "UseFIPS"
+ "ref": "UseDualStack"
},
true
]
}
],
- "error": "Invalid Configuration: FIPS and custom endpoint are not supported",
+ "error": "Invalid Configuration: Dualstack and custom endpoint are not supported",
"type": "error"
},
{
"conditions": [],
- "type": "tree",
- "rules": [
- {
- "conditions": [
- {
- "fn": "booleanEquals",
- "argv": [
- {
- "ref": "UseDualStack"
- },
- true
- ]
- }
- ],
- "error": "Invalid Configuration: Dualstack and custom endpoint are not supported",
- "type": "error"
+ "endpoint": {
+ "url": {
+ "ref": "Endpoint"
},
- {
- "conditions": [],
- "endpoint": {
- "url": {
- "ref": "Endpoint"
- },
- "properties": {},
- "headers": {}
- },
- "type": "endpoint"
- }
- ]
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
}
]
- },
+ }
+ ]
+ },
+ {
+ "conditions": [],
+ "type": "tree",
+ "rules": [
{
"conditions": [
{
- "fn": "booleanEquals",
- "argv": [
- {
- "ref": "UseFIPS"
- },
- true
- ]
- },
- {
- "fn": "booleanEquals",
+ "fn": "isSet",
"argv": [
{
- "ref": "UseDualStack"
- },
- true
+ "ref": "Region"
+ }
]
}
],
@@ -131,94 +111,321 @@
{
"conditions": [
{
- "fn": "booleanEquals",
+ "fn": "aws.partition",
"argv": [
- true,
{
- "fn": "getAttr",
+ "ref": "Region"
+ }
+ ],
+ "assign": "PartitionResult"
+ }
+ ],
+ "type": "tree",
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "booleanEquals",
"argv": [
{
- "ref": "PartitionResult"
+ "ref": "UseFIPS"
},
- "supportsFIPS"
+ true
+ ]
+ },
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseDualStack"
+ },
+ true
]
}
+ ],
+ "type": "tree",
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ true,
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "supportsFIPS"
+ ]
+ }
+ ]
+ },
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ true,
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "supportsDualStack"
+ ]
+ }
+ ]
+ }
+ ],
+ "type": "tree",
+ "rules": [
+ {
+ "conditions": [],
+ "type": "tree",
+ "rules": [
+ {
+ "conditions": [],
+ "endpoint": {
+ "url": "https://streams.dynamodb-fips.{Region}.{PartitionResult#dualStackDnsSuffix}",
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "conditions": [],
+ "error": "FIPS and DualStack are enabled, but this partition does not support one or both",
+ "type": "error"
+ }
]
},
{
- "fn": "booleanEquals",
- "argv": [
- true,
+ "conditions": [
{
- "fn": "getAttr",
+ "fn": "booleanEquals",
"argv": [
{
- "ref": "PartitionResult"
+ "ref": "UseFIPS"
},
- "supportsDualStack"
+ true
+ ]
+ }
+ ],
+ "type": "tree",
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ true,
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "supportsFIPS"
+ ]
+ }
+ ]
+ }
+ ],
+ "type": "tree",
+ "rules": [
+ {
+ "conditions": [],
+ "type": "tree",
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ "aws-us-gov",
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ }
+ ]
+ }
+ ],
+ "endpoint": {
+ "url": "https://streams.dynamodb.{Region}.amazonaws.com",
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
+ },
+ {
+ "conditions": [],
+ "endpoint": {
+ "url": "https://streams.dynamodb-fips.{Region}.{PartitionResult#dnsSuffix}",
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
+ }
+ ]
+ }
]
+ },
+ {
+ "conditions": [],
+ "error": "FIPS is enabled but this partition does not support FIPS",
+ "type": "error"
}
]
- }
- ],
- "type": "tree",
- "rules": [
- {
- "conditions": [],
- "endpoint": {
- "url": "https://streams.dynamodb-fips.{Region}.{PartitionResult#dualStackDnsSuffix}",
- "properties": {},
- "headers": {}
- },
- "type": "endpoint"
- }
- ]
- },
- {
- "conditions": [],
- "error": "FIPS and DualStack are enabled, but this partition does not support one or both",
- "type": "error"
- }
- ]
- },
- {
- "conditions": [
- {
- "fn": "booleanEquals",
- "argv": [
- {
- "ref": "UseFIPS"
},
- true
- ]
- }
- ],
- "type": "tree",
- "rules": [
- {
- "conditions": [
{
- "fn": "booleanEquals",
- "argv": [
- true,
+ "conditions": [
{
- "fn": "getAttr",
+ "fn": "booleanEquals",
"argv": [
{
- "ref": "PartitionResult"
+ "ref": "UseDualStack"
},
- "supportsFIPS"
+ true
+ ]
+ }
+ ],
+ "type": "tree",
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ true,
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "supportsDualStack"
+ ]
+ }
+ ]
+ }
+ ],
+ "type": "tree",
+ "rules": [
+ {
+ "conditions": [],
+ "type": "tree",
+ "rules": [
+ {
+ "conditions": [],
+ "endpoint": {
+ "url": "https://streams.dynamodb.{Region}.{PartitionResult#dualStackDnsSuffix}",
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
+ }
+ ]
+ }
]
+ },
+ {
+ "conditions": [],
+ "error": "DualStack is enabled but this partition does not support DualStack",
+ "type": "error"
}
]
- }
- ],
- "type": "tree",
- "rules": [
+ },
{
"conditions": [],
"type": "tree",
"rules": [
+ {
+ "conditions": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ {
+ "ref": "Region"
+ },
+ "local"
+ ]
+ }
+ ],
+ "endpoint": {
+ "url": "http://localhost:8000",
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingName": "dynamodb",
+ "signingRegion": "us-east-1"
+ }
+ ]
+ },
+ "headers": {}
+ },
+ "type": "endpoint"
+ },
+ {
+ "conditions": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ "aws",
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ }
+ ]
+ }
+ ],
+ "endpoint": {
+ "url": "https://streams.dynamodb.{Region}.amazonaws.com",
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
+ },
+ {
+ "conditions": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ "aws-cn",
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ }
+ ]
+ }
+ ],
+ "endpoint": {
+ "url": "https://streams.dynamodb.{Region}.amazonaws.com.cn",
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
+ },
{
"conditions": [
{
@@ -238,125 +445,81 @@
}
],
"endpoint": {
- "url": "https://streams.dynamodb.{Region}.{PartitionResult#dnsSuffix}",
+ "url": "https://streams.dynamodb.{Region}.amazonaws.com",
"properties": {},
"headers": {}
},
"type": "endpoint"
},
{
- "conditions": [],
+ "conditions": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ "aws-iso",
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ }
+ ]
+ }
+ ],
"endpoint": {
- "url": "https://streams.dynamodb-fips.{Region}.{PartitionResult#dnsSuffix}",
+ "url": "https://streams.dynamodb.{Region}.c2s.ic.gov",
"properties": {},
"headers": {}
},
"type": "endpoint"
- }
- ]
- }
- ]
- },
- {
- "conditions": [],
- "error": "FIPS is enabled but this partition does not support FIPS",
- "type": "error"
- }
- ]
- },
- {
- "conditions": [
- {
- "fn": "booleanEquals",
- "argv": [
- {
- "ref": "UseDualStack"
- },
- true
- ]
- }
- ],
- "type": "tree",
- "rules": [
- {
- "conditions": [
- {
- "fn": "booleanEquals",
- "argv": [
- true,
+ },
{
- "fn": "getAttr",
- "argv": [
+ "conditions": [
{
- "ref": "PartitionResult"
- },
- "supportsDualStack"
- ]
+ "fn": "stringEquals",
+ "argv": [
+ "aws-iso-b",
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ }
+ ]
+ }
+ ],
+ "endpoint": {
+ "url": "https://streams.dynamodb.{Region}.sc2s.sgov.gov",
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
+ },
+ {
+ "conditions": [],
+ "endpoint": {
+ "url": "https://streams.dynamodb.{Region}.{PartitionResult#dnsSuffix}",
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
}
]
}
- ],
- "type": "tree",
- "rules": [
- {
- "conditions": [],
- "endpoint": {
- "url": "https://streams.dynamodb.{Region}.{PartitionResult#dualStackDnsSuffix}",
- "properties": {},
- "headers": {}
- },
- "type": "endpoint"
- }
]
- },
- {
- "conditions": [],
- "error": "DualStack is enabled but this partition does not support DualStack",
- "type": "error"
}
]
},
{
"conditions": [],
- "type": "tree",
- "rules": [
- {
- "conditions": [
- {
- "fn": "stringEquals",
- "argv": [
- {
- "ref": "Region"
- },
- "local"
- ]
- }
- ],
- "endpoint": {
- "url": "http://localhost:8000",
- "properties": {
- "authSchemes": [
- {
- "name": "sigv4",
- "signingRegion": "us-east-1",
- "signingName": "dynamodb"
- }
- ]
- },
- "headers": {}
- },
- "type": "endpoint"
- },
- {
- "conditions": [],
- "endpoint": {
- "url": "https://streams.dynamodb.{Region}.{PartitionResult#dnsSuffix}",
- "properties": {},
- "headers": {}
- },
- "type": "endpoint"
- }
- ]
+ "error": "Invalid Configuration: Missing Region",
+ "type": "error"
}
]
}
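The rewritten endpoint-rule-set.json above encodes a decision tree: a custom Endpoint wins but rejects FIPS/dualstack flags, a missing Region is an error, the "local" region maps to localhost:8000, and otherwise the URL is assembled per partition and flags. A hedged Python sketch of that tree; the partition lookup is approximated with region-prefix checks (an assumption, not the real aws.partition function), and only the aws, aws-cn, and aws-us-gov behaviors visible in the rule set are modeled:

```python
# Hedged sketch of the DynamoDB Streams endpoint decision tree shown in
# the rule set above. Partition detection via prefixes is an assumption.
def resolve(region=None, use_fips=False, use_dualstack=False, endpoint=None):
    if endpoint is not None:
        if use_fips:
            raise ValueError("Invalid Configuration: FIPS and custom endpoint are not supported")
        if use_dualstack:
            raise ValueError("Invalid Configuration: Dualstack and custom endpoint are not supported")
        return endpoint
    if region is None:
        raise ValueError("Invalid Configuration: Missing Region")
    if region == "local":
        # DynamoDB Local, signed as us-east-1 per the rule set.
        return "http://localhost:8000"
    if use_fips and use_dualstack:
        return f"https://streams.dynamodb-fips.{region}.api.aws"
    if use_fips:
        if region.startswith("us-gov-"):
            # Quirk in the rule set: GovCloud FIPS keeps the standard hostname.
            return f"https://streams.dynamodb.{region}.amazonaws.com"
        return f"https://streams.dynamodb-fips.{region}.amazonaws.com"
    if use_dualstack:
        return f"https://streams.dynamodb.{region}.api.aws"
    if region.startswith("cn-"):
        return f"https://streams.dynamodb.{region}.amazonaws.com.cn"
    return f"https://streams.dynamodb.{region}.amazonaws.com"

assert resolve("us-east-1") == "https://streams.dynamodb.us-east-1.amazonaws.com"
assert resolve("local") == "http://localhost:8000"
```

This mirrors why the endpoint-tests.json changes below shrink: the new rule set centralizes validation (custom endpoint + FIPS/dualstack, missing Region) instead of enumerating per-region branches.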
diff --git a/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/endpoint-tests.json b/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/endpoint-tests.json
index d24464c47857..8fa93e555fbe 100644
--- a/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/endpoint-tests.json
+++ b/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/endpoint-tests.json
@@ -1,1028 +1,31 @@
{
"testCases": [
{
- "documentation": "For region ap-south-2 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-south-2.api.aws"
- }
- },
- "params": {
- "Region": "ap-south-2",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-south-2 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-south-2.amazonaws.com"
- }
- },
- "params": {
- "Region": "ap-south-2",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region ap-south-2 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.ap-south-2.api.aws"
- }
- },
- "params": {
- "Region": "ap-south-2",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-south-2 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.ap-south-2.amazonaws.com"
- }
- },
- "params": {
- "Region": "ap-south-2",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region ap-south-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-south-1.api.aws"
- }
- },
- "params": {
- "Region": "ap-south-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-south-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-south-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "ap-south-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region ap-south-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.ap-south-1.api.aws"
- }
- },
- "params": {
- "Region": "ap-south-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-south-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.ap-south-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "ap-south-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-south-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-south-1.api.aws"
- }
- },
- "params": {
- "Region": "eu-south-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-south-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-south-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-south-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-south-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-south-1.api.aws"
- }
- },
- "params": {
- "Region": "eu-south-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-south-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-south-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-south-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-south-2 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-south-2.api.aws"
- }
- },
- "params": {
- "Region": "eu-south-2",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-south-2 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-south-2.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-south-2",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-south-2 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-south-2.api.aws"
- }
- },
- "params": {
- "Region": "eu-south-2",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-south-2 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-south-2.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-south-2",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.us-gov-east-1.api.aws"
- }
- },
- "params": {
- "Region": "us-gov-east-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.us-gov-east-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "us-gov-east-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.us-gov-east-1.api.aws"
- }
- },
- "params": {
- "Region": "us-gov-east-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.us-gov-east-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "us-gov-east-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region me-central-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.me-central-1.api.aws"
- }
- },
- "params": {
- "Region": "me-central-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region me-central-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.me-central-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "me-central-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region me-central-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.me-central-1.api.aws"
- }
- },
- "params": {
- "Region": "me-central-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region me-central-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.me-central-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "me-central-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region ca-central-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ca-central-1.api.aws"
- }
- },
- "params": {
- "Region": "ca-central-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ca-central-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ca-central-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "ca-central-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region ca-central-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.ca-central-1.api.aws"
- }
- },
- "params": {
- "Region": "ca-central-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ca-central-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.ca-central-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "ca-central-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-central-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-central-1.api.aws"
- }
- },
- "params": {
- "Region": "eu-central-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-central-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-central-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-central-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-central-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-central-1.api.aws"
- }
- },
- "params": {
- "Region": "eu-central-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-central-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-central-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-central-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region us-iso-west-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "error": "FIPS and DualStack are enabled, but this partition does not support one or both"
- },
- "params": {
- "Region": "us-iso-west-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region us-iso-west-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.us-iso-west-1.c2s.ic.gov"
- }
- },
- "params": {
- "Region": "us-iso-west-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region us-iso-west-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "error": "DualStack is enabled but this partition does not support DualStack"
- },
- "params": {
- "Region": "us-iso-west-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region us-iso-west-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.us-iso-west-1.c2s.ic.gov"
- }
- },
- "params": {
- "Region": "us-iso-west-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-central-2 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-central-2.api.aws"
- }
- },
- "params": {
- "Region": "eu-central-2",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-central-2 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-central-2.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-central-2",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-central-2 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-central-2.api.aws"
- }
- },
- "params": {
- "Region": "eu-central-2",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-central-2 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-central-2.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-central-2",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region us-west-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.us-west-1.api.aws"
- }
- },
- "params": {
- "Region": "us-west-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region us-west-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.us-west-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "us-west-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region us-west-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.us-west-1.api.aws"
- }
- },
- "params": {
- "Region": "us-west-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region us-west-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.us-west-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "us-west-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region us-west-2 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.us-west-2.api.aws"
- }
- },
- "params": {
- "Region": "us-west-2",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region us-west-2 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.us-west-2.amazonaws.com"
- }
- },
- "params": {
- "Region": "us-west-2",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region us-west-2 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.us-west-2.api.aws"
- }
- },
- "params": {
- "Region": "us-west-2",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region us-west-2 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.us-west-2.amazonaws.com"
- }
- },
- "params": {
- "Region": "us-west-2",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region af-south-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.af-south-1.api.aws"
- }
- },
- "params": {
- "Region": "af-south-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region af-south-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.af-south-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "af-south-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region af-south-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.af-south-1.api.aws"
- }
- },
- "params": {
- "Region": "af-south-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region af-south-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.af-south-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "af-south-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-north-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-north-1.api.aws"
- }
- },
- "params": {
- "Region": "eu-north-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-north-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-north-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-north-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-north-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-north-1.api.aws"
- }
- },
- "params": {
- "Region": "eu-north-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-north-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-north-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-north-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-west-3 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-west-3.api.aws"
- }
- },
- "params": {
- "Region": "eu-west-3",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-west-3 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-west-3.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-west-3",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-west-3 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-west-3.api.aws"
- }
- },
- "params": {
- "Region": "eu-west-3",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-west-3 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-west-3.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-west-3",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-west-2 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-west-2.api.aws"
- }
- },
- "params": {
- "Region": "eu-west-2",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-west-2 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-west-2.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-west-2",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-west-2 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-west-2.api.aws"
- }
- },
- "params": {
- "Region": "eu-west-2",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-west-2 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-west-2.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-west-2",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-west-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-west-1.api.aws"
- }
- },
- "params": {
- "Region": "eu-west-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-west-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.eu-west-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-west-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region eu-west-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-west-1.api.aws"
- }
- },
- "params": {
- "Region": "eu-west-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region eu-west-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.eu-west-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "eu-west-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region ap-northeast-3 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-northeast-3.api.aws"
- }
- },
- "params": {
- "Region": "ap-northeast-3",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-northeast-3 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-northeast-3.amazonaws.com"
- }
- },
- "params": {
- "Region": "ap-northeast-3",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region ap-northeast-3 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.ap-northeast-3.api.aws"
- }
- },
- "params": {
- "Region": "ap-northeast-3",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-northeast-3 with FIPS disabled and DualStack disabled",
+ "documentation": "For region af-south-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.ap-northeast-3.amazonaws.com"
+ "url": "https://streams.dynamodb.af-south-1.amazonaws.com"
}
},
"params": {
- "Region": "ap-northeast-3",
+ "Region": "af-south-1",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region ap-northeast-2 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-northeast-2.api.aws"
- }
- },
- "params": {
- "Region": "ap-northeast-2",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-northeast-2 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-northeast-2.amazonaws.com"
- }
- },
- "params": {
- "Region": "ap-northeast-2",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region ap-northeast-2 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.ap-northeast-2.api.aws"
- }
- },
- "params": {
- "Region": "ap-northeast-2",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-northeast-2 with FIPS disabled and DualStack disabled",
+ "documentation": "For region ap-east-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.ap-northeast-2.amazonaws.com"
+ "url": "https://streams.dynamodb.ap-east-1.amazonaws.com"
}
},
"params": {
- "Region": "ap-northeast-2",
+ "Region": "ap-east-1",
"UseFIPS": false,
"UseDualStack": false
}
},
- {
- "documentation": "For region ap-northeast-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-northeast-1.api.aws"
- }
- },
- "params": {
- "Region": "ap-northeast-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-northeast-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-northeast-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "ap-northeast-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region ap-northeast-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.ap-northeast-1.api.aws"
- }
- },
- "params": {
- "Region": "ap-northeast-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
{
"documentation": "For region ap-northeast-1 with FIPS disabled and DualStack disabled",
"expect": {
@@ -1037,673 +40,513 @@
}
},
{
- "documentation": "For region me-south-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.me-south-1.api.aws"
- }
- },
- "params": {
- "Region": "me-south-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region me-south-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.me-south-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "me-south-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region me-south-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.me-south-1.api.aws"
- }
- },
- "params": {
- "Region": "me-south-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region me-south-1 with FIPS disabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.me-south-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "me-south-1",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region sa-east-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.sa-east-1.api.aws"
- }
- },
- "params": {
- "Region": "sa-east-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region sa-east-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.sa-east-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "sa-east-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region sa-east-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.sa-east-1.api.aws"
- }
- },
- "params": {
- "Region": "sa-east-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region sa-east-1 with FIPS disabled and DualStack disabled",
+ "documentation": "For region ap-northeast-2 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.sa-east-1.amazonaws.com"
+ "url": "https://streams.dynamodb.ap-northeast-2.amazonaws.com"
}
},
"params": {
- "Region": "sa-east-1",
+ "Region": "ap-northeast-2",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region ap-east-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-east-1.api.aws"
- }
- },
- "params": {
- "Region": "ap-east-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-east-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-east-1.amazonaws.com"
- }
- },
- "params": {
- "Region": "ap-east-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region ap-east-1 with FIPS disabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.ap-east-1.api.aws"
- }
- },
- "params": {
- "Region": "ap-east-1",
- "UseFIPS": false,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-east-1 with FIPS disabled and DualStack disabled",
+ "documentation": "For region ap-northeast-3 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.ap-east-1.amazonaws.com"
+ "url": "https://streams.dynamodb.ap-northeast-3.amazonaws.com"
}
},
"params": {
- "Region": "ap-east-1",
+ "Region": "ap-northeast-3",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region cn-north-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.cn-north-1.api.amazonwebservices.com.cn"
- }
- },
- "params": {
- "Region": "cn-north-1",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region cn-north-1 with FIPS enabled and DualStack disabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.cn-north-1.amazonaws.com.cn"
- }
- },
- "params": {
- "Region": "cn-north-1",
- "UseFIPS": true,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region cn-north-1 with FIPS disabled and DualStack enabled",
+ "documentation": "For region ap-south-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.cn-north-1.api.amazonwebservices.com.cn"
+ "url": "https://streams.dynamodb.ap-south-1.amazonaws.com"
}
},
"params": {
- "Region": "cn-north-1",
+ "Region": "ap-south-1",
"UseFIPS": false,
- "UseDualStack": true
+ "UseDualStack": false
}
},
{
- "documentation": "For region cn-north-1 with FIPS disabled and DualStack disabled",
+ "documentation": "For region ap-southeast-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.cn-north-1.amazonaws.com.cn"
+ "url": "https://streams.dynamodb.ap-southeast-1.amazonaws.com"
}
},
"params": {
- "Region": "cn-north-1",
+ "Region": "ap-southeast-1",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack enabled",
+ "documentation": "For region ap-southeast-2 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.us-gov-west-1.api.aws"
+ "url": "https://streams.dynamodb.ap-southeast-2.amazonaws.com"
}
},
"params": {
- "Region": "us-gov-west-1",
- "UseFIPS": true,
- "UseDualStack": true
+ "Region": "ap-southeast-2",
+ "UseFIPS": false,
+ "UseDualStack": false
}
},
{
- "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack disabled",
+ "documentation": "For region ap-southeast-3 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.us-gov-west-1.amazonaws.com"
+ "url": "https://streams.dynamodb.ap-southeast-3.amazonaws.com"
}
},
"params": {
- "Region": "us-gov-west-1",
- "UseFIPS": true,
+ "Region": "ap-southeast-3",
+ "UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack enabled",
+ "documentation": "For region ca-central-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.us-gov-west-1.api.aws"
+ "url": "https://streams.dynamodb.ca-central-1.amazonaws.com"
}
},
"params": {
- "Region": "us-gov-west-1",
+ "Region": "ca-central-1",
"UseFIPS": false,
- "UseDualStack": true
+ "UseDualStack": false
}
},
{
- "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack disabled",
+ "documentation": "For region eu-central-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.us-gov-west-1.amazonaws.com"
+ "url": "https://streams.dynamodb.eu-central-1.amazonaws.com"
}
},
"params": {
- "Region": "us-gov-west-1",
+ "Region": "eu-central-1",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-1 with FIPS enabled and DualStack enabled",
+ "documentation": "For region eu-north-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.ap-southeast-1.api.aws"
+ "url": "https://streams.dynamodb.eu-north-1.amazonaws.com"
}
},
"params": {
- "Region": "ap-southeast-1",
- "UseFIPS": true,
- "UseDualStack": true
+ "Region": "eu-north-1",
+ "UseFIPS": false,
+ "UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-1 with FIPS enabled and DualStack disabled",
+ "documentation": "For region eu-south-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.ap-southeast-1.amazonaws.com"
+ "url": "https://streams.dynamodb.eu-south-1.amazonaws.com"
}
},
"params": {
- "Region": "ap-southeast-1",
- "UseFIPS": true,
+ "Region": "eu-south-1",
+ "UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-1 with FIPS disabled and DualStack enabled",
+ "documentation": "For region eu-west-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.ap-southeast-1.api.aws"
+ "url": "https://streams.dynamodb.eu-west-1.amazonaws.com"
}
},
"params": {
- "Region": "ap-southeast-1",
+ "Region": "eu-west-1",
"UseFIPS": false,
- "UseDualStack": true
+ "UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-1 with FIPS disabled and DualStack disabled",
+ "documentation": "For region eu-west-2 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.ap-southeast-1.amazonaws.com"
+ "url": "https://streams.dynamodb.eu-west-2.amazonaws.com"
}
},
"params": {
- "Region": "ap-southeast-1",
+ "Region": "eu-west-2",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-2 with FIPS enabled and DualStack enabled",
+ "documentation": "For region eu-west-3 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.ap-southeast-2.api.aws"
+ "url": "https://streams.dynamodb.eu-west-3.amazonaws.com"
}
},
"params": {
- "Region": "ap-southeast-2",
- "UseFIPS": true,
- "UseDualStack": true
+ "Region": "eu-west-3",
+ "UseFIPS": false,
+ "UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-2 with FIPS enabled and DualStack disabled",
+ "documentation": "For region local with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.ap-southeast-2.amazonaws.com"
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingName": "dynamodb",
+ "signingRegion": "us-east-1"
+ }
+ ]
+ },
+ "url": "http://localhost:8000"
}
},
"params": {
- "Region": "ap-southeast-2",
- "UseFIPS": true,
+ "Region": "local",
+ "UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-2 with FIPS disabled and DualStack enabled",
+ "documentation": "For region me-south-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.ap-southeast-2.api.aws"
+ "url": "https://streams.dynamodb.me-south-1.amazonaws.com"
}
},
"params": {
- "Region": "ap-southeast-2",
+ "Region": "me-south-1",
"UseFIPS": false,
- "UseDualStack": true
+ "UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-2 with FIPS disabled and DualStack disabled",
+ "documentation": "For region sa-east-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.ap-southeast-2.amazonaws.com"
+ "url": "https://streams.dynamodb.sa-east-1.amazonaws.com"
}
},
"params": {
- "Region": "ap-southeast-2",
+ "Region": "sa-east-1",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack enabled",
+ "documentation": "For region us-east-1 with FIPS disabled and DualStack disabled",
"expect": {
- "error": "FIPS and DualStack are enabled, but this partition does not support one or both"
+ "endpoint": {
+ "url": "https://streams.dynamodb.us-east-1.amazonaws.com"
+ }
},
"params": {
- "Region": "us-iso-east-1",
- "UseFIPS": true,
- "UseDualStack": true
+ "Region": "us-east-1",
+ "UseFIPS": false,
+ "UseDualStack": false
}
},
{
- "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack disabled",
+ "documentation": "For region us-east-2 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.us-iso-east-1.c2s.ic.gov"
+ "url": "https://streams.dynamodb.us-east-2.amazonaws.com"
}
},
"params": {
- "Region": "us-iso-east-1",
- "UseFIPS": true,
+ "Region": "us-east-2",
+ "UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack enabled",
+ "documentation": "For region us-west-1 with FIPS disabled and DualStack disabled",
"expect": {
- "error": "DualStack is enabled but this partition does not support DualStack"
+ "endpoint": {
+ "url": "https://streams.dynamodb.us-west-1.amazonaws.com"
+ }
},
"params": {
- "Region": "us-iso-east-1",
+ "Region": "us-west-1",
"UseFIPS": false,
- "UseDualStack": true
+ "UseDualStack": false
}
},
{
- "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack disabled",
+ "documentation": "For region us-west-2 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.us-iso-east-1.c2s.ic.gov"
+ "url": "https://streams.dynamodb.us-west-2.amazonaws.com"
}
},
"params": {
- "Region": "us-iso-east-1",
+ "Region": "us-west-2",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-3 with FIPS enabled and DualStack enabled",
+ "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.ap-southeast-3.api.aws"
+ "url": "https://streams.dynamodb-fips.us-east-1.api.aws"
}
},
"params": {
- "Region": "ap-southeast-3",
+ "Region": "us-east-1",
"UseFIPS": true,
"UseDualStack": true
}
},
{
- "documentation": "For region ap-southeast-3 with FIPS enabled and DualStack disabled",
+ "documentation": "For region us-east-1 with FIPS enabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.ap-southeast-3.amazonaws.com"
+ "url": "https://streams.dynamodb-fips.us-east-1.amazonaws.com"
}
},
"params": {
- "Region": "ap-southeast-3",
+ "Region": "us-east-1",
"UseFIPS": true,
"UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-3 with FIPS disabled and DualStack enabled",
+ "documentation": "For region us-east-1 with FIPS disabled and DualStack enabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.ap-southeast-3.api.aws"
+ "url": "https://streams.dynamodb.us-east-1.api.aws"
}
},
"params": {
- "Region": "ap-southeast-3",
+ "Region": "us-east-1",
"UseFIPS": false,
"UseDualStack": true
}
},
{
- "documentation": "For region ap-southeast-3 with FIPS disabled and DualStack disabled",
+ "documentation": "For region cn-north-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.ap-southeast-3.amazonaws.com"
+ "url": "https://streams.dynamodb.cn-north-1.amazonaws.com.cn"
}
},
"params": {
- "Region": "ap-southeast-3",
+ "Region": "cn-north-1",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-4 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.ap-southeast-4.api.aws"
- }
- },
- "params": {
- "Region": "ap-southeast-4",
- "UseFIPS": true,
- "UseDualStack": true
- }
- },
- {
- "documentation": "For region ap-southeast-4 with FIPS enabled and DualStack disabled",
+ "documentation": "For region cn-northwest-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.ap-southeast-4.amazonaws.com"
+ "url": "https://streams.dynamodb.cn-northwest-1.amazonaws.com.cn"
}
},
"params": {
- "Region": "ap-southeast-4",
- "UseFIPS": true,
+ "Region": "cn-northwest-1",
+ "UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region ap-southeast-4 with FIPS disabled and DualStack enabled",
+ "documentation": "For region cn-north-1 with FIPS enabled and DualStack enabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.ap-southeast-4.api.aws"
+ "url": "https://streams.dynamodb-fips.cn-north-1.api.amazonwebservices.com.cn"
}
},
"params": {
- "Region": "ap-southeast-4",
- "UseFIPS": false,
+ "Region": "cn-north-1",
+ "UseFIPS": true,
"UseDualStack": true
}
},
{
- "documentation": "For region ap-southeast-4 with FIPS disabled and DualStack disabled",
+ "documentation": "For region cn-north-1 with FIPS enabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.ap-southeast-4.amazonaws.com"
+ "url": "https://streams.dynamodb-fips.cn-north-1.amazonaws.com.cn"
}
},
"params": {
- "Region": "ap-southeast-4",
- "UseFIPS": false,
+ "Region": "cn-north-1",
+ "UseFIPS": true,
"UseDualStack": false
}
},
{
- "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled",
+ "documentation": "For region cn-north-1 with FIPS disabled and DualStack enabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.us-east-1.api.aws"
+ "url": "https://streams.dynamodb.cn-north-1.api.amazonwebservices.com.cn"
}
},
"params": {
- "Region": "us-east-1",
- "UseFIPS": true,
+ "Region": "cn-north-1",
+ "UseFIPS": false,
"UseDualStack": true
}
},
{
- "documentation": "For region us-east-1 with FIPS enabled and DualStack disabled",
+ "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.us-east-1.amazonaws.com"
+ "url": "https://streams.dynamodb.us-gov-east-1.amazonaws.com"
}
},
"params": {
- "Region": "us-east-1",
- "UseFIPS": true,
+ "Region": "us-gov-east-1",
+ "UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region us-east-1 with FIPS disabled and DualStack enabled",
+ "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.us-east-1.api.aws"
+ "url": "https://streams.dynamodb.us-gov-east-1.amazonaws.com"
}
},
"params": {
- "Region": "us-east-1",
- "UseFIPS": false,
- "UseDualStack": true
+ "Region": "us-gov-east-1",
+ "UseFIPS": true,
+ "UseDualStack": false
}
},
{
- "documentation": "For region us-east-1 with FIPS disabled and DualStack disabled",
+ "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.us-east-1.amazonaws.com"
+ "url": "https://streams.dynamodb.us-gov-west-1.amazonaws.com"
}
},
"params": {
- "Region": "us-east-1",
+ "Region": "us-gov-west-1",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region us-east-2 with FIPS enabled and DualStack enabled",
+ "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.us-east-2.api.aws"
+ "url": "https://streams.dynamodb.us-gov-west-1.amazonaws.com"
}
},
"params": {
- "Region": "us-east-2",
+ "Region": "us-gov-west-1",
"UseFIPS": true,
- "UseDualStack": true
+ "UseDualStack": false
}
},
{
- "documentation": "For region us-east-2 with FIPS enabled and DualStack disabled",
+ "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack enabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.us-east-2.amazonaws.com"
+ "url": "https://streams.dynamodb-fips.us-gov-east-1.api.aws"
}
},
"params": {
- "Region": "us-east-2",
+ "Region": "us-gov-east-1",
"UseFIPS": true,
- "UseDualStack": false
+ "UseDualStack": true
}
},
{
- "documentation": "For region us-east-2 with FIPS disabled and DualStack enabled",
+ "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack enabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.us-east-2.api.aws"
+ "url": "https://streams.dynamodb.us-gov-east-1.api.aws"
}
},
"params": {
- "Region": "us-east-2",
+ "Region": "us-gov-east-1",
"UseFIPS": false,
"UseDualStack": true
}
},
{
- "documentation": "For region us-east-2 with FIPS disabled and DualStack disabled",
+ "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.us-east-2.amazonaws.com"
+ "url": "https://streams.dynamodb.us-iso-east-1.c2s.ic.gov"
}
},
"params": {
- "Region": "us-east-2",
+ "Region": "us-iso-east-1",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region cn-northwest-1 with FIPS enabled and DualStack enabled",
+ "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack enabled",
"expect": {
- "endpoint": {
- "url": "https://streams.dynamodb-fips.cn-northwest-1.api.amazonwebservices.com.cn"
- }
+ "error": "FIPS and DualStack are enabled, but this partition does not support one or both"
},
"params": {
- "Region": "cn-northwest-1",
+ "Region": "us-iso-east-1",
"UseFIPS": true,
"UseDualStack": true
}
},
{
- "documentation": "For region cn-northwest-1 with FIPS enabled and DualStack disabled",
+ "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb-fips.cn-northwest-1.amazonaws.com.cn"
+ "url": "https://streams.dynamodb-fips.us-iso-east-1.c2s.ic.gov"
}
},
"params": {
- "Region": "cn-northwest-1",
+ "Region": "us-iso-east-1",
"UseFIPS": true,
"UseDualStack": false
}
},
{
- "documentation": "For region cn-northwest-1 with FIPS disabled and DualStack enabled",
+ "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack enabled",
"expect": {
- "endpoint": {
- "url": "https://streams.dynamodb.cn-northwest-1.api.amazonwebservices.com.cn"
- }
+ "error": "DualStack is enabled but this partition does not support DualStack"
},
"params": {
- "Region": "cn-northwest-1",
+ "Region": "us-iso-east-1",
"UseFIPS": false,
"UseDualStack": true
}
},
{
- "documentation": "For region cn-northwest-1 with FIPS disabled and DualStack disabled",
+ "documentation": "For region us-isob-east-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.cn-northwest-1.amazonaws.com.cn"
+ "url": "https://streams.dynamodb.us-isob-east-1.sc2s.sgov.gov"
}
},
"params": {
- "Region": "cn-northwest-1",
+ "Region": "us-isob-east-1",
"UseFIPS": false,
"UseDualStack": false
}
@@ -1744,27 +587,27 @@
}
},
{
- "documentation": "For region us-isob-east-1 with FIPS disabled and DualStack disabled",
+ "documentation": "For custom endpoint with region set and fips disabled and dualstack disabled",
"expect": {
"endpoint": {
- "url": "https://streams.dynamodb.us-isob-east-1.sc2s.sgov.gov"
+ "url": "https://example.com"
}
},
"params": {
- "Region": "us-isob-east-1",
+ "Region": "us-east-1",
"UseFIPS": false,
- "UseDualStack": false
+ "UseDualStack": false,
+ "Endpoint": "https://example.com"
}
},
{
- "documentation": "For custom endpoint with fips disabled and dualstack disabled",
+ "documentation": "For custom endpoint with region not set and fips disabled and dualstack disabled",
"expect": {
"endpoint": {
"url": "https://example.com"
}
},
"params": {
- "Region": "us-east-1",
"UseFIPS": false,
"UseDualStack": false,
"Endpoint": "https://example.com"
@@ -1793,6 +636,12 @@
"UseDualStack": true,
"Endpoint": "https://example.com"
}
+ },
+ {
+ "documentation": "Missing region",
+ "expect": {
+ "error": "Invalid Configuration: Missing Region"
+ }
}
],
"version": "1.0"
diff --git a/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/service-2.json b/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/service-2.json
index 9b65e8fcf831..098679799516 100644
--- a/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/service-2.json
+++ b/services/dynamodb/src/main/resources/codegen-resources/dynamodbstreams/service-2.json
@@ -314,7 +314,7 @@
"documentation":"CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime. CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime. aws:dynamodb.aws:dynamodb.LatestStreamLabel is not a unique identifier for the stream, because it is possible that a stream from another table might have the same timestamp. However, the combination of the following three elements is guaranteed to be unique:
"
+ "documentation":"StreamLabel LatestStreamLabel is not a unique identifier for the stream, because it is possible that a stream from another table might have the same timestamp. However, the combination of the following three elements is guaranteed to be unique:
"
}
},
"documentation":"StreamLabel LatestStreamLabel is not a unique identifier for the stream, because it is possible that a stream from another table might have the same timestamp. However, the combination of the following three elements is guaranteed to be unique:
"
+ "documentation":"StreamLabel LatestStreamLabel is not a unique identifier for the stream, because it is possible that a stream from another table might have the same timestamp. However, the combination of the following three elements is guaranteed to be unique:
"
},
"StreamStatus":{
"shape":"StreamStatus",
@@ -563,7 +563,7 @@
"members":{
"ApproximateCreationDateTime":{
"shape":"Date",
- "documentation":"StreamLabel
Andy Kiesler
💻
Martin
💻
+
Paulo Lieuthier
💻
From a7e35d9dbbba033b17837e20d3e226ed9ed36b1a Mon Sep 17 00:00:00 2001
From: AWS <>
Date: Tue, 13 Jun 2023 18:06:46 +0000
Subject: [PATCH 092/317] AWS CloudTrail Update: This feature allows users to
view dashboards for CloudTrail Lake event data stores.
---
.../feature-AWSCloudTrail-296267d.json | 6 +++
.../codegen-resources/service-2.json | 53 ++++++++++++++-----
2 files changed, 47 insertions(+), 12 deletions(-)
create mode 100644 .changes/next-release/feature-AWSCloudTrail-296267d.json
diff --git a/.changes/next-release/feature-AWSCloudTrail-296267d.json b/.changes/next-release/feature-AWSCloudTrail-296267d.json
new file mode 100644
index 000000000000..7fbf087a3cb1
--- /dev/null
+++ b/.changes/next-release/feature-AWSCloudTrail-296267d.json
@@ -0,0 +1,6 @@
+{
+ "type": "feature",
+ "category": "AWS CloudTrail",
+ "contributor": "",
+ "description": "This feature allows users to view dashboards for CloudTrail Lake event data stores."
+}
diff --git a/services/cloudtrail/src/main/resources/codegen-resources/service-2.json b/services/cloudtrail/src/main/resources/codegen-resources/service-2.json
index f413d344ea0c..9ded7eeb1e5f 100644
--- a/services/cloudtrail/src/main/resources/codegen-resources/service-2.json
+++ b/services/cloudtrail/src/main/resources/codegen-resources/service-2.json
@@ -287,7 +287,7 @@
{"shape":"UnsupportedOperationException"},
{"shape":"NoManagementAccountSLRExistsException"}
],
- "documentation":"
EventDataStore, and a value for QueryID.QueryID or a QueryAlias. Specifying the QueryAlias parameter returns information about the last query run for the alias.QueryID value returned by the StartQuery operation, and an ARN for EventDataStore.QueryID value returned by the StartQuery operation.QueryStatement parameter provides your SQL query, enclosed in single quotation marks. Use the optional DeliveryS3Uri parameter to deliver the query results to an S3 bucket.QueryStatement parameter to provide your SQL query, enclosed in single quotation marks. Use the optional DeliveryS3Uri parameter to deliver the query results to an S3 bucket.StartQuery requires you specify either the QueryStatement parameter, or a QueryAlias and any QueryParameters. In the current release, the QueryAlias and QueryParameters parameters are used only for the queries that populate the CloudTrail Lake dashboards.EventDataStore value is an ARN or the ID portion of the ARN. Other parameters are optional, but at least one optional parameter must be specified, or CloudTrail throws an error. RetentionPeriod is in days, and valid values are integers between 90 and 2557. By default, TerminationProtection is enabled.AdvancedEventSelectors includes or excludes management and data events in your event data store. For more information about AdvancedEventSelectors, see PutEventSelectorsRequest$AdvancedEventSelectors. AdvancedEventSelectors includes events of that type in your event data store.EventDataStore value is an ARN or the ID portion of the ARN. Other parameters are optional, but at least one optional parameter must be specified, or CloudTrail throws an error. RetentionPeriod is in days, and valid values are integers between 90 and 2557. By default, TerminationProtection is enabled.AdvancedEventSelectors includes or excludes management and data events in your event data store. 
For more information about AdvancedEventSelectors, see AdvancedEventSelectors.AdvancedEventSelectors includes events of that type in your event data store.readOnly, eventCategory, eventSource (for management events), eventName, resources.type, and resources.ARN. eventCategory.
"
+ "documentation":"readOnly - Optional. Can be set to Equals a value of true or false. If you do not add this field, CloudTrail logs both read and write events. A value of true logs only read events. A value of false logs only write events.eventSource - For filtering management events only. This can be set only to NotEquals kms.amazonaws.com.eventName - Can use any operator. You can use it to filter in or filter out any data event logged to CloudTrail, such as PutBucket or GetSnapshotBlock. You can have multiple values for this field, separated by commas.eventCategory - This is required and must be set to Equals.
Management or Data. ConfigurationItem. Evidence. ActivityAuditLog. resources.type - This field is required for CloudTrail data events. resources.type can only use the Equals operator, and the value can be one of the following:
AWS::DynamoDB::Table AWS::Lambda::Function AWS::S3::Object AWS::CloudTrail::Channel AWS::Cognito::IdentityPool AWS::DynamoDB::Stream AWS::EC2::Snapshot AWS::FinSpace::Environment AWS::Glue::Table AWS::GuardDuty::Detector AWS::KendraRanking::ExecutionPlan AWS::ManagedBlockchain::Node AWS::SageMaker::ExperimentTrialComponent AWS::SageMaker::FeatureGroup AWS::S3::AccessPoint AWS::S3ObjectLambda::AccessPoint AWS::S3Outposts::Object resources.type field per selector. To log data events on more than one resource type, add another selector.resources.ARN - You can use any operator with resources.ARN, but if you use Equals or NotEquals, the value must exactly match the ARN of a valid resource of the type you've specified in the template as the value of resources.type. For example, if resources.type equals AWS::S3::Object, the ARN must be in one of the following formats. To log all data events for all objects in a specific S3 bucket, use the StartsWith operator, and include only the bucket ARN as the matching value.
arn:<partition>:s3:::<bucket_name>/ arn:<partition>:s3:::<bucket_name>/<object_path>/ AWS::DynamoDB::Table, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:dynamodb:<region>:<account_ID>:table/<table_name> AWS::Lambda::Function, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:lambda:<region>:<account_ID>:function:<function_name> AWS::CloudTrail::Channel, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:cloudtrail:<region>:<account_ID>:channel/<channel_UUID> AWS::Cognito::IdentityPool, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:cognito-identity:<region>:<account_ID>:identitypool/<identity_pool_ID> resources.type equals AWS::DynamoDB::Stream, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:dynamodb:<region>:<account_ID>:table/<table_name>/stream/<date_time> resources.type equals AWS::EC2::Snapshot, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:ec2:<region>::snapshot/<snapshot_ID> resources.type equals AWS::FinSpace::Environment, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:finspace:<region>:<account_ID>:environment/<environment_ID> resources.type equals AWS::Glue::Table, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:glue:<region>:<account_ID>:table/<database_name>/<table_name> resources.type equals AWS::GuardDuty::Detector, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:guardduty:<region>:<account_ID>:detector/<detector_ID> resources.type equals AWS::KendraRanking::ExecutionPlan, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:kendra-ranking:<region>:<account_ID>:rescore-execution-plan/<rescore_execution_plan_ID> resources.type equals AWS::ManagedBlockchain::Node, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:managedblockchain:<region>:<account_ID>:nodes/<node_ID> resources.type equals AWS::SageMaker::ExperimentTrialComponent, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:sagemaker:<region>:<account_ID>:experiment-trial-component/<experiment_trial_component_name> resources.type equals AWS::SageMaker::FeatureGroup, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:sagemaker:<region>:<account_ID>:feature-group/<feature_group_name> resources.type equals AWS::S3::AccessPoint, and the operator is set to Equals or NotEquals, the ARN must be in one of the following formats. To log events on all objects in an S3 access point, we recommend that you use only the access point ARN, don’t include the object path, and use the StartsWith or NotStartsWith operators.
arn:<partition>:s3:<region>:<account_ID>:accesspoint/<access_point_name> arn:<partition>:s3:<region>:<account_ID>:accesspoint/<access_point_name>/object/<object_path> resources.type equals AWS::S3ObjectLambda::AccessPoint, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:s3-object-lambda:<region>:<account_ID>:accesspoint/<access_point_name> resources.type equals AWS::S3Outposts::Object, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:s3-outposts:<region>:<account_ID>:<object_path> readOnly, eventCategory, eventSource (for management events), eventName, resources.type, and resources.ARN. eventCategory.
"
},
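The selector grammar described above (operators on `readOnly`, `eventCategory`, `resources.type`, and `resources.ARN`) can be illustrated with a minimal `AdvancedEventSelectors` fragment. This is a hedged sketch assembled from the fields named in the documentation; the bucket name is a hypothetical placeholder, and it uses the `StartsWith` form the text recommends for logging all objects in one bucket:

```json
{
  "AdvancedEventSelectors": [
    {
      "Name": "Log write data events for one S3 bucket",
      "FieldSelectors": [
        { "Field": "eventCategory", "Equals": ["Data"] },
        { "Field": "resources.type", "Equals": ["AWS::S3::Object"] },
        { "Field": "readOnly", "Equals": ["false"] },
        { "Field": "resources.ARN", "StartsWith": ["arn:aws:s3:::amzn-example-bucket/"] }
      ]
    }
  ]
}
```

Per the documentation above, `eventCategory` and (for data events) `resources.type` are required, `resources.type` accepts only the `Equals` operator, and omitting `readOnly` would log both read and write events.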
"Equals":{
"shape":"Operator",
@@ -1517,7 +1518,7 @@
},
"SnsTopicName":{
"shape":"String",
- "documentation":"readOnly - Optional. Can be set to Equals a value of true or false. If you do not add this field, CloudTrail logs both read and write events. A value of true logs only read events. A value of false logs only write events.eventSource - For filtering management events only. This can be set only to NotEquals kms.amazonaws.com.eventName - Can use any operator. You can use it to filter in or filter out any data event logged to CloudTrail, such as PutBucket or GetSnapshotBlock. You can have multiple values for this field, separated by commas.eventCategory - This is required and must be set to Equals.
Management or Data. ConfigurationItem. Evidence. ActivityAuditLog. resources.type - This field is required for CloudTrail data events. resources.type can only use the Equals operator, and the value can be one of the following:
AWS::DynamoDB::Table AWS::Lambda::Function AWS::S3::Object AWS::CloudTrail::Channel AWS::CodeWhisperer::Profile AWS::Cognito::IdentityPool AWS::DynamoDB::Stream AWS::EC2::Snapshot AWS::EMRWAL::Workspace AWS::FinSpace::Environment AWS::Glue::Table AWS::GuardDuty::Detector AWS::KendraRanking::ExecutionPlan AWS::ManagedBlockchain::Node AWS::SageMaker::ExperimentTrialComponent AWS::SageMaker::FeatureGroup AWS::S3::AccessPoint AWS::S3ObjectLambda::AccessPoint AWS::S3Outposts::Object resources.type field per selector. To log data events on more than one resource type, add another selector.resources.ARN - You can use any operator with resources.ARN, but if you use Equals or NotEquals, the value must exactly match the ARN of a valid resource of the type you've specified in the template as the value of resources.type. For example, if resources.type equals AWS::S3::Object, the ARN must be in one of the following formats. To log all data events for all objects in a specific S3 bucket, use the StartsWith operator, and include only the bucket ARN as the matching value.
arn:<partition>:s3:::<bucket_name>/ arn:<partition>:s3:::<bucket_name>/<object_path>/ AWS::DynamoDB::Table, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:dynamodb:<region>:<account_ID>:table/<table_name> AWS::Lambda::Function, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:lambda:<region>:<account_ID>:function:<function_name> AWS::CloudTrail::Channel, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:cloudtrail:<region>:<account_ID>:channel/<channel_UUID> AWS::CodeWhisperer::Profile, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:codewhisperer:<region>:<account_ID>:profile/<profile_ID> AWS::Cognito::IdentityPool, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:cognito-identity:<region>:<account_ID>:identitypool/<identity_pool_ID> resources.type equals AWS::DynamoDB::Stream, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:dynamodb:<region>:<account_ID>:table/<table_name>/stream/<date_time> resources.type equals AWS::EC2::Snapshot, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:ec2:<region>::snapshot/<snapshot_ID> resources.type equals AWS::EMRWAL::Workspace, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:emrwal:<region>::workspace/<workspace_name> resources.type equals AWS::FinSpace::Environment, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:finspace:<region>:<account_ID>:environment/<environment_ID> resources.type equals AWS::Glue::Table, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:glue:<region>:<account_ID>:table/<database_name>/<table_name> resources.type equals AWS::GuardDuty::Detector, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:guardduty:<region>:<account_ID>:detector/<detector_ID> resources.type equals AWS::KendraRanking::ExecutionPlan, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:kendra-ranking:<region>:<account_ID>:rescore-execution-plan/<rescore_execution_plan_ID> resources.type equals AWS::ManagedBlockchain::Node, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:managedblockchain:<region>:<account_ID>:nodes/<node_ID> resources.type equals AWS::SageMaker::ExperimentTrialComponent, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:sagemaker:<region>:<account_ID>:experiment-trial-component/<experiment_trial_component_name> resources.type equals AWS::SageMaker::FeatureGroup, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:sagemaker:<region>:<account_ID>:feature-group/<feature_group_name> resources.type equals AWS::S3::AccessPoint, and the operator is set to Equals or NotEquals, the ARN must be in one of the following formats. To log events on all objects in an S3 access point, we recommend that you use only the access point ARN, don’t include the object path, and use the StartsWith or NotStartsWith operators.
arn:<partition>:s3:<region>:<account_ID>:accesspoint/<access_point_name> arn:<partition>:s3:<region>:<account_ID>:accesspoint/<access_point_name>/object/<object_path> resources.type equals AWS::S3ObjectLambda::AccessPoint, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:s3-object-lambda:<region>:<account_ID>:accesspoint/<access_point_name> resources.type equals AWS::S3Outposts::Object, and the operator is set to Equals or NotEquals, the ARN must be in the following format:
arn:<partition>:s3-outposts:<region>:<account_ID>:<object_path> SnsTopicARN.
AWS::DynamoDB::Table AWS::Lambda::Function AWS::S3::Object
"
+ "documentation":"AWS::CloudTrail::Channel AWS::Cognito::IdentityPool AWS::DynamoDB::Stream AWS::EC2::Snapshot AWS::FinSpace::Environment AWS::Glue::Table AWS::GuardDuty::Detector AWS::KendraRanking::ExecutionPlan AWS::ManagedBlockchain::Node AWS::SageMaker::ExperimentTrialComponent AWS::SageMaker::FeatureGroup AWS::S3::AccessPoint AWS::S3ObjectLambda::AccessPoint AWS::S3Outposts::Object
AWS::DynamoDB::Table AWS::Lambda::Function AWS::S3::Object
"
},
"Values":{
"shape":"DataResourceValues",
@@ -1689,7 +1690,6 @@
},
"DescribeQueryRequest":{
"type":"structure",
- "required":["QueryId"],
"members":{
"EventDataStore":{
"shape":"EventDataStoreArn",
@@ -1700,6 +1700,10 @@
"QueryId":{
"shape":"UUID",
"documentation":"AWS::CloudTrail::Channel AWS::CodeWhisperer::Profile AWS::Cognito::IdentityPool AWS::DynamoDB::Stream AWS::EC2::Snapshot AWS::EMRWAL::Workspace AWS::FinSpace::Environment AWS::Glue::Table AWS::GuardDuty::Detector AWS::KendraRanking::ExecutionPlan AWS::ManagedBlockchain::Node AWS::SageMaker::ExperimentTrialComponent AWS::SageMaker::FeatureGroup AWS::S3::AccessPoint AWS::S3ObjectLambda::AccessPoint AWS::S3Outposts::Object QueryAlias. SnsTopicARN.SnsTopicARN.
POST requests.https://example.com/web/signup, you would provide the path /web/signup.GET text/html requests.https://example.com/web/register, you would provide the path /web/register.AWSManagedRulesACFPRuleSet. This configuration is used in ManagedRuleGroupConfig. AWSManagedRulesATPRuleSet. This configuration is used in ManagedRuleGroupConfig.
"
+ }
+ },
+ "documentation":"{ \"form\": { \"primaryaddressline1\": \"THE_ADDRESS1\", \"primaryaddressline2\": \"THE_ADDRESS2\", \"primaryaddressline3\": \"THE_ADDRESS3\" } }, the address field idenfiers are /form/primaryaddressline1, /form/primaryaddressline2, and /form/primaryaddressline3.primaryaddressline1, primaryaddressline2, and primaryaddressline3, the address fields identifiers are primaryaddressline1, primaryaddressline2, and primaryaddressline3. RequestInspectionACFP data type.
"
+ }
+ },
+ "documentation":"{ \"form\": { \"email\": \"THE_EMAIL\" } }, the email field specification is /form/email.email1, the email field specification is email1.RequestInspectionACFP data type. host:user-agent:accept:authorization:referer.FieldToMatch type that you want to inspect, with additional specifications as needed, according to the type. You specify a single request component in FieldToMatch for each rule statement that requires it. To inspect more than one component of the web request, create a separate rule statement for each component.QueryString field to match: \"FieldToMatch\": { \"QueryString\": {} } Method field to match specification: \"FieldToMatch\": { \"Method\": { \"Name\": \"DELETE\" } } AWSManagedRulesATPRuleSet. This is only populated if you are using a rule group in your web ACL that integrates with your applications in this way. For more information, see WAF client application integration in the WAF Developer Guide.AWSManagedRulesATPRuleSet and the account creation fraud prevention managed rule group AWSManagedRulesACFPRuleSet. This is only populated if you are using a rule group in your web ACL that integrates with your applications in this way. For more information, see WAF client application integration in the WAF Developer Guide.
"
}
},
- "documentation":"CONTINUE - Inspect the available headers normally, according to the rule inspection criteria. MATCH - Treat the web request as matching the rule statement. WAF applies the rule action to the request.NO_MATCH - Treat the web request as not matching the rule statement.host:user-agent:accept:authorization:referer.SingleHeader field, the HEADER field in the logs will be REDACTED. UriPath, QueryString, SingleHeader, Method, and JsonBody.SingleHeader field, the HEADER field in the logs will be REDACTED for all rules that use the SingleHeader FieldToMatch setting. FieldToMatch setting, so the SingleHeader redaction doesn't apply to rules that use the Headers FieldToMatch.UriPath, QueryString, SingleHeader, and Method.AWSManagedRulesATPRuleSet RequestInspection. AWSManagedRulesATPRuleSet or AWSManagedRulesACFPRuleSet. AWSManagedRulesATPRuleSet RequestInspection. AWSManagedRulesATPRuleSet or AWSManagedRulesACFPRuleSet. AWSManagedRulesATPRuleSet RequestInspection. AWSManagedRulesATPRuleSet or AWSManagedRulesACFPRuleSet. AWSManagedRulesATPRuleSet. Use this to provide login request information to the rule group. For web ACLs that protect CloudFront distributions, use this to also provide the information about how your distribution responds to login requests. ManagedRuleGroupConfig and provides additional feature configuration. AWSManagedRulesACFPRuleSet. Use this to provide account creation request information to the rule group. For web ACLs that protect CloudFront distributions, use this to also provide the information about how your distribution responds to account creation requests. AWSManagedRulesATPRuleSet configuration object for the account takeover prevention managed rule group, to provide information such as the sign-in page of your application and the type of content to accept or reject from the client. AWSManagedRulesBotControlRuleSet configuration object to configure the protection level that you want the Bot Control rule group to use.
AWSManagedRulesACFPRuleSet configuration object to configure the account creation fraud prevention managed rule group. The configuration includes the registration and sign-up pages of your application and the locations in the account creation request payload of data, such as the user email and phone number fields. AWSManagedRulesATPRuleSet configuration object to configure the account takeover prevention managed rule group. The configuration includes the sign-in page of your application and the locations in the login request payload of data such as the username and password. AWSManagedRulesBotControlRuleSet configuration object to configure the protection level that you want the Bot Control rule group to use. AWSManagedRulesATPRuleSet configuration object for the account takeover prevention managed rule group, to provide information such as the sign-in page of your application and the type of content to accept or reject from the client. AWSManagedRulesBotControlRuleSet configuration object to configure the protection level that you want the Bot Control rule group to use.
"
},
"RuleActionOverrides":{
"shape":"RuleActionOverrides",
"documentation":"AWSManagedRulesACFPRuleSet configuration object to configure the account creation fraud prevention managed rule group. The configuration includes the registration and sign-up pages of your application and the locations in the account creation request payload of data, such as the user email and phone number fields. AWSManagedRulesATPRuleSet configuration object to configure the account takeover prevention managed rule group. The configuration includes the sign-in page of your application and the locations in the login request payload of data such as the username and password. AWSManagedRulesBotControlRuleSet configuration object to configure the protection level that you want the Bot Control rule group to use. Count and then monitor the resulting count metrics to understand how the rule group would handle your web traffic. You can also permanently override some or all actions, to modify how the rule group manages your web traffic.ManagedRuleGroupStatement, for example for use inside a NotStatement or OrStatement. It can only be referenced as a top-level statement within a rule.AWSManagedRulesBotControlRuleSet or the WAF Fraud Control account takeover prevention (ATP) managed rule group AWSManagedRulesATPRuleSet. For more information, see WAF Pricing.ManagedRuleGroupStatement, for example for use inside a NotStatement or OrStatement. It can only be referenced as a top-level statement within a rule.AWSManagedRulesBotControlRuleSet, the WAF Fraud Control account takeover prevention (ATP) managed rule group AWSManagedRulesATPRuleSet, or the WAF Fraud Control account creation fraud prevention (ACFP) managed rule group AWSManagedRulesACFPRuleSet. For more information, see WAF Pricing./form/password.
"
}
},
- "documentation":"{ \"form\": { \"password\": \"THE_PASSWORD\" } }, the password field specification is /form/password.password1, the password field specification is password1.AWSManagedRulesATPRuleSet RequestInspection configuration.RequestInspection and RequestInspectionACFP data types.
"
+ }
+ },
+ "documentation":"{ \"form\": { \"primaryphoneline1\": \"THE_PHONE1\", \"primaryphoneline2\": \"THE_PHONE2\", \"primaryphoneline3\": \"THE_PHONE3\" } }, the phone number field identifiers are /form/primaryphoneline1, /form/primaryphoneline2, and /form/primaryphoneline3.primaryphoneline1, primaryphoneline2, and primaryphoneline3, the phone number field identifiers are primaryphoneline1, primaryphoneline2, and primaryphoneline3. RequestInspectionACFP data type.
"
+ "documentation":"{ \"login\": { \"username\": \"THE_USERNAME\", \"password\": \"THE_PASSWORD\" } }, the username field specification is /login/username and the password field specification is /login/password.username1 and password1, the username field specification is username1 and the password field specification is password1.
"
},
"PasswordField":{
"shape":"PasswordField",
- "documentation":"{ \"form\": { \"username\": \"THE_USERNAME\" } }, the username field specification is /form/username. username1, the username field specification is username1
"
+ "documentation":"{ \"login\": { \"username\": \"THE_USERNAME\", \"password\": \"THE_PASSWORD\" } }, the username field specification is /login/username and the password field specification is /login/password.username1 and password1, the username field specification is username1 and the password field specification is password1.
"
}
},
"documentation":"{ \"form\": { \"password\": \"THE_PASSWORD\" } }, the password field specification is /form/password.password1, the password field specification is password1.AWSManagedRulesATPRuleSet configuration in ManagedRuleGroupConfig.
"
+ },
+ "PasswordField":{
+ "shape":"PasswordField",
+ "documentation":"{ \"form\": { \"username\": \"THE_USERNAME\" } }, the username field specification is /form/username. username1, the username field specification is username1
"
+ },
+ "EmailField":{
+ "shape":"EmailField",
+ "documentation":"{ \"form\": { \"password\": \"THE_PASSWORD\" } }, the password field specification is /form/password.password1, the password field specification is password1.
"
+ },
+ "PhoneNumberFields":{
+ "shape":"PhoneNumberFields",
+ "documentation":"{ \"form\": { \"email\": \"THE_EMAIL\" } }, the email field specification is /form/email.email1, the email field specification is email1.
"
+ },
+ "AddressFields":{
+ "shape":"AddressFields",
+ "documentation":"{ \"form\": { \"primaryphoneline1\": \"THE_PHONE1\", \"primaryphoneline2\": \"THE_PHONE2\", \"primaryphoneline3\": \"THE_PHONE3\" } }, the phone number field identifiers are /form/primaryphoneline1, /form/primaryphoneline2, and /form/primaryphoneline3.primaryphoneline1, primaryphoneline2, and primaryphoneline3, the phone number field identifiers are primaryphoneline1, primaryphoneline2, and primaryphoneline3.
"
+ }
+ },
+ "documentation":"{ \"form\": { \"primaryaddressline1\": \"THE_ADDRESS1\", \"primaryaddressline2\": \"THE_ADDRESS2\", \"primaryaddressline3\": \"THE_ADDRESS3\" } }, the address field identifiers are /form/primaryaddressline1, /form/primaryaddressline2, and /form/primaryaddressline3.primaryaddressline1, primaryaddressline2, and primaryaddressline3, the address field identifiers are primaryaddressline1, primaryaddressline2, and primaryaddressline3. AWSManagedRulesACFPRuleSet configuration in ManagedRuleGroupConfig.AWSManagedRulesATPRuleSet configuration in ManagedRuleGroupConfig.AWSManagedRulesATPRuleSet and AWSManagedRulesACFPRuleSet configurations in ManagedRuleGroupConfig.Header or StatusCode. You can't configure more than one component for inspection. If you don't configure any of the response inspection options, response inspection is disabled. \"SuccessStrings\": [ \"Login successful\", \"Welcome to our site!\" ] \"SuccessStrings\": [ \"Login successful\" ] and \"SuccessStrings\": [ \"Account creation successful\", \"Welcome to our site!\" ] \"FailureStrings\": [ \"Login failed\" ] \"FailureStrings\": [ \"Request failed\" ] ResponseInspection configuration for AWSManagedRulesATPRuleSet. ResponseInspection configuration for AWSManagedRulesATPRuleSet and AWSManagedRulesACFPRuleSet. \"Name\": [ \"LoginResult\" ] \"Name\": [ \"RequestResult\" ] \"SuccessValues\": [ \"LoginPassed\", \"Successful login\" ] \"SuccessValues\": [ \"LoginPassed\", \"Successful login\" ] and \"SuccessValues\": [ \"AccountCreated\", \"Successful account creation\" ] \"FailureValues\": [ \"LoginFailed\", \"Failed login\" ] \"FailureValues\": [ \"LoginFailed\", \"Failed login\" ] and \"FailureValues\": [ \"AccountCreationFailed\" ] ResponseInspection configuration for AWSManagedRulesATPRuleSet. ResponseInspection configuration for AWSManagedRulesATPRuleSet and AWSManagedRulesACFPRuleSet. 
\"Identifier\": [ \"/login/success\" ] \"Identifier\": [ \"/login/success\" ] and \"Identifier\": [ \"/sign-up/success\" ] \"SuccessValues\": [ \"True\", \"Succeeded\" ] \"SuccessValues\": [ \"True\", \"Succeeded\" ] \"FailureValues\": [ \"False\", \"Failed\" ] \"FailureValues\": [ \"False\", \"Failed\" ] ResponseInspection configuration for AWSManagedRulesATPRuleSet. ResponseInspection configuration for AWSManagedRulesATPRuleSet and AWSManagedRulesACFPRuleSet. \"SuccessCodes\": [ 200, 201 ] \"SuccessCodes\": [ 200, 201 ] \"FailureCodes\": [ 400, 404 ] \"FailureCodes\": [ 400, 404 ] ResponseInspection configuration for AWSManagedRulesATPRuleSet. ResponseInspection configuration for AWSManagedRulesATPRuleSet and AWSManagedRulesACFPRuleSet. ManagedRuleGroupStatement, for example for use inside a NotStatement or OrStatement. It can only be referenced as a top-level statement within a rule.AWSManagedRulesBotControlRuleSet or the WAF Fraud Control account takeover prevention (ATP) managed rule group AWSManagedRulesATPRuleSet. For more information, see WAF Pricing.ManagedRuleGroupStatement, for example for use inside a NotStatement or OrStatement. It can only be referenced as a top-level statement within a rule.AWSManagedRulesBotControlRuleSet, the WAF Fraud Control account takeover prevention (ATP) managed rule group AWSManagedRulesATPRuleSet, or the WAF Fraud Control account creation fraud prevention (ACFP) managed rule group AWSManagedRulesACFPRuleSet. For more information, see WAF Pricing./form/username.
"
}
},
- "documentation":"{ \"form\": { \"username\": \"THE_USERNAME\" } }, the username field specification is /form/username. username1, the username field specification is username1 AWSManagedRulesATPRuleSet RequestInspection configuration.RequestInspection and RequestInspectionACFP data types. wellarchitected) cannot be removed from a workload.DELETE /tags/WorkloadArn?tagKeys=key1&tagKeys=key2 DELETE /tags/WorkloadArn?tagKeys=key1&tagKeys=key2 includeCertificateDetails from your request. The response will include only the certificate Amazon Resource Name (ARN), certificate name, domain name, and tags.includeCertificateDetails from your request. The response will include only the certificate Amazon Resource Name (ARN), certificate name, domain name, and tags.GetCertificates action and ommit includeCertificateDetails from your request. The response will include only the certificate Amazon Resource Name (ARN), certificate name, domain name, and tags.GetCertificates action and omit includeCertificateDetails from your request. The response will include only the certificate Amazon Resource Name (ARN), certificate name, domain name, and tags.GetCertificates request. If your results are paginated, the response will return a next page token that you can specify as the page token in a subsequent request.NextPageToken is returned there are more results available. The value of NextPageToken is a unique pagination token for each page. Make the call again using the returned token to retrieve the next page. Keep all other arguments unchanged.
1538424000 as the start time.
1538424000 as the start time.
1538427600 as the end time.
1538427600 as the end time.IdentityType is the string that you provide to the PrincipalEntityType parameter for this operation. The CognitoUserPoolId and CognitoClientId are defined by the Amazon Cognito user pool.
StaticPolicy section of the PolicyDefinition.templateLinked section of the PolicyDefinition. If the policy template is ever updated, any policies linked to the policy template automatically use the updated template.HTTP 200 status code.Allow or Deny, along with a list of the policies that resulted in the decision.Allow or Deny, along with a list of the policies that resulted in the decision.{ \"actionId\": \"<action name>\", \"actionType\": \"Action\" } {\"boolean\": true} \"entityIdentifier\": { \"entityId\": \"<id>\", \"entityType\": \"<entity type>\"} {\"long\": 0} {\"string\": \"abc\"} {\"set\": [ {} ] } {\"record\": { \"keyName\": {} } } \"UserPoolArn\": \"cognito-idp:us-east-1:123456789012:userpool/us-east-1_1a2b3c4d5\" \"ClientIds\": [\"&ExampleCogClientId;\"] \"CognitoUserPoolConfiguration\":{\"UserPoolArn\":\"cognito-idp:us-east-1:123456789012:userpool/us-east-1_1a2b3c4d5\",\"ClientIds\": [\"a1b2c3d4e5f6g7h8i9j0kalbmc\"]} \"configuration\":{\"cognitoUserPoolConfiguration\":{\"userPoolArn\":\"cognito-idp:us-east-1:123456789012:userpool/us-east-1_1a2b3c4d5\",\"clientIds\": [\"a1b2c3d4e5f6g7h8i9j0kalbmc\"]}} userPoolArn, and optionally, a ClientId.\"Context\":{\"<KeyName1>\":{\"boolean\":true},\"<KeyName2>\":{\"long\":1234}} when and unless clauses in a policy.\"context\":{\"Context\":{\"<KeyName1>\":{\"boolean\":true},\"<KeyName2>\":{\"long\":1234}}} ClientToken, but with different parameters, the retry fails with an IdempotentParameterMismatch error.UserPoolArn, and optionally, a ClientId.ClientToken, but with different parameters, the retry fails with an IdempotentParameterMismatch error.PolicyStoreId of the policy store you want to store the policy in.principal isn't specified in the policy content.resource isn't specified in the policy content.ClientToken, but with different parameters, the retry fails with an IdempotentParameterMismatch error.Mode.STRICT mode only after you define a schema. 
If a schema doesn't exist, then STRICT mode causes any policy to fail validation, and Verified Permissions rejects the policy. You can turn off validation by using the UpdatePolicyStore. Then, when you have a schema defined, use UpdatePolicyStore again to turn validation back on.ClientToken, but with different parameters, the retry fails with an IdempotentParameterMismatch error.\"policyId\":\"SPEXAMPLEabcdefg111111\" \"determiningPolicies\":[{\"policyId\":\"SPEXAMPLEabcdefg111111\"}] \"entityType\":\"typeName\" \"entityId\":\"identifier\" {\"entityId\":\"string\",\"entityType\":\"string\"} { \"id\": { \"entityType\": \"Photo\", \"entityId\": \"VacationPhoto94.jpg\" }, \"Attributes\": {}, \"Parents\": [ { \"entityType\": \"Album\", \"entityId\": \"alice_folder\" } ] } Principal isn't present in the policy content.Resource isn't present in the policy content.https://cognito-idp.<region>.amazonaws.com/<user-pool-id>/.well-known/openid-configuration cognito.https://cognito-idp.<region>.amazonaws.com/<user-pool-id>/.well-known/openid-configuration cognito.AccessToken or an IdentityToken, but not both.AccessToken or an IdentityToken, but not both.NextToken response in the previous request. If you did, it indicates that more output is available. Set this parameter to the value provided by the previous call's NextToken response to request the next page of results.NextToken response element is returned with a value (not null). Include the specified value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that the service might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null. 
This indicates that this is the last page of results.NextToken response in the previous request. If you did, it indicates that more output is available. Set this parameter to the value provided by the previous call's NextToken response to request the next page of results.NextToken response element is returned with a value (not null). Include the specified value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that the service might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null. This indicates that this is the last page of results.NextToken response in the previous request. If you did, it indicates that more output is available. Set this parameter to the value provided by the previous call's NextToken response to request the next page of results.NextToken response element is returned with a value (not null). Include the specified value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that the service might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null. This indicates that this is the last page of results.NextToken response in the previous request. If you did, it indicates that more output is available. 
Set this parameter to the value provided by the previous call's NextToken response to request the next page of results.NextToken response element is returned with a value (not null). Include the specified value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that the service might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null. This indicates that this is the last page of results.principal and resource. When you use CreatePolicy to create a policy from a template, you specify the exact principal and resource to use for the instantiated policy.static or a templateLinked element.static or a templateLinked element.StaticPolicy or a TemplateLinkedPolicy element.
"
+ },
+ "principal":{
+ "shape":"EntityIdentifier",
+ "documentation":"static templateLinked STRICT, then policies that can't be validated by this schema are rejected by Verified Permissions and can't be stored in the policy store.?principal placeholder in the policy template when it evaluates an authorization request.?resource placeholder in the policy template when it evaluates an authorization request.?principal placeholder in the policy template when it evaluates an authorization request.?resource placeholder in the policy template when it evaluates an authorization request.?principal placeholder in the policy template when it evaluates an authorization request.?resource placeholder in the policy template when it evaluates an authorization request.userPoolArn, and optionally, a ClientId.userPoolArn, and optionally, a ClientId.
action referenced by the policy.when or unless clauses.
"
+ }
+ }
+ },
+ "UpdatePolicyOutput":{
+ "type":"structure",
+ "required":[
+ "policyStoreId",
+ "policyId",
+ "policyType",
+ "createdDate",
+ "lastUpdatedDate"
+ ],
+ "members":{
+ "policyStoreId":{
+ "shape":"PolicyStoreId",
+ "documentation":"static to templateLinked.permit or forbid.principal referenced by the policy.resource referenced by the policy.Principal isn't present in the policy content.Resource isn't present in the policy content.
action referenced by the policy template.when or unless clauses.
"
+ }
+ }
+ },
+ "UpdatePolicyTemplateOutput":{
+ "type":"structure",
+ "required":[
+ "policyStoreId",
+ "policyTemplateId",
+ "createdDate",
+ "lastUpdatedDate"
+ ],
+ "members":{
+ "policyStoreId":{
+ "shape":"PolicyStoreId",
+ "documentation":"permit or forbid) of the policy template.principal referenced by the policy template.resource referenced by the policy template.
action referenced by the policy.when or unless clauses.
StaticPolicy to TemplateLinkedPolicy.permit or forbid) of the policy.principal referenced by the policy.resource referenced by the policy.
",
+ "exception":true
+ },
+ "ValidationExceptionField":{
+ "type":"structure",
+ "required":[
+ "path",
+ "message"
+ ],
+ "members":{
+ "path":{
+ "shape":"String",
+ "documentation":"set, or the types of expressions used in an if...then...else clause aren't compatible in this context.
Mode=STRICT and the policy store doesn't contain a schema, Verified Permissions rejects all static policies and policy templates because there is no schema to validate against.
jane leaves the company, and you later let someone else use the name jane, then that new user automatically gets access to everything granted by policies that still reference User::\"jane\". Cedar can't distinguish between the new user and the old. This applies to both principal and resource identifiers. Always use identifiers that are guaranteed unique and never reused to ensure that you don't unintentionally grant access because of the presence of an old identifier in a policy.
"
+}
diff --git a/services/verifiedpermissions/src/main/resources/codegen-resources/waiters-2.json b/services/verifiedpermissions/src/main/resources/codegen-resources/waiters-2.json
new file mode 100644
index 000000000000..13f60ee66be6
--- /dev/null
+++ b/services/verifiedpermissions/src/main/resources/codegen-resources/waiters-2.json
@@ -0,0 +1,5 @@
+{
+ "version": 2,
+ "waiters": {
+ }
+}
From fcb0cf7c5f57bd9c1cdb16f35d220b3179ea8fab Mon Sep 17 00:00:00 2001
From: AWS <>
Date: Tue, 13 Jun 2023 18:06:59 +0000
Subject: [PATCH 097/317] EC2 Image Builder Update: Change the Image Builder
ImagePipeline dateNextRun field to more accurately describe the data.
---
.../feature-EC2ImageBuilder-4e0962f.json | 6 ++
.../codegen-resources/endpoint-tests.json | 102 +++++++++---------
.../codegen-resources/service-2.json | 2 +-
3 files changed, 58 insertions(+), 52 deletions(-)
create mode 100644 .changes/next-release/feature-EC2ImageBuilder-4e0962f.json
diff --git a/.changes/next-release/feature-EC2ImageBuilder-4e0962f.json b/.changes/next-release/feature-EC2ImageBuilder-4e0962f.json
new file mode 100644
index 000000000000..467e358a8570
--- /dev/null
+++ b/.changes/next-release/feature-EC2ImageBuilder-4e0962f.json
@@ -0,0 +1,6 @@
+{
+ "type": "feature",
+ "category": "EC2 Image Builder",
+ "contributor": "",
+ "description": "Change the Image Builder ImagePipeline dateNextRun field to more accurately describe the data."
+}
diff --git a/services/imagebuilder/src/main/resources/codegen-resources/endpoint-tests.json b/services/imagebuilder/src/main/resources/codegen-resources/endpoint-tests.json
index ed29f944ecff..bcfa0a4ab2f7 100644
--- a/services/imagebuilder/src/main/resources/codegen-resources/endpoint-tests.json
+++ b/services/imagebuilder/src/main/resources/codegen-resources/endpoint-tests.json
@@ -8,9 +8,9 @@
}
},
"params": {
- "UseDualStack": true,
+ "Region": "us-east-1",
"UseFIPS": true,
- "Region": "us-east-1"
+ "UseDualStack": true
}
},
{
@@ -21,9 +21,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "us-east-1",
"UseFIPS": true,
- "Region": "us-east-1"
+ "UseDualStack": false
}
},
{
@@ -34,9 +34,9 @@
}
},
"params": {
- "UseDualStack": true,
+ "Region": "us-east-1",
"UseFIPS": false,
- "Region": "us-east-1"
+ "UseDualStack": true
}
},
{
@@ -47,9 +47,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "us-east-1",
"UseFIPS": false,
- "Region": "us-east-1"
+ "UseDualStack": false
}
},
{
@@ -60,9 +60,9 @@
}
},
"params": {
- "UseDualStack": true,
+ "Region": "cn-north-1",
"UseFIPS": true,
- "Region": "cn-north-1"
+ "UseDualStack": true
}
},
{
@@ -73,9 +73,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "cn-north-1",
"UseFIPS": true,
- "Region": "cn-north-1"
+ "UseDualStack": false
}
},
{
@@ -86,9 +86,9 @@
}
},
"params": {
- "UseDualStack": true,
+ "Region": "cn-north-1",
"UseFIPS": false,
- "Region": "cn-north-1"
+ "UseDualStack": true
}
},
{
@@ -99,9 +99,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "cn-north-1",
"UseFIPS": false,
- "Region": "cn-north-1"
+ "UseDualStack": false
}
},
{
@@ -112,9 +112,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "us-gov-east-1",
"UseFIPS": false,
- "Region": "us-gov-east-1"
+ "UseDualStack": false
}
},
{
@@ -125,9 +125,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "us-gov-east-1",
"UseFIPS": true,
- "Region": "us-gov-east-1"
+ "UseDualStack": false
}
},
{
@@ -138,9 +138,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "us-gov-west-1",
"UseFIPS": false,
- "Region": "us-gov-west-1"
+ "UseDualStack": false
}
},
{
@@ -151,9 +151,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "us-gov-west-1",
"UseFIPS": true,
- "Region": "us-gov-west-1"
+ "UseDualStack": false
}
},
{
@@ -164,9 +164,9 @@
}
},
"params": {
- "UseDualStack": true,
+ "Region": "us-gov-east-1",
"UseFIPS": true,
- "Region": "us-gov-east-1"
+ "UseDualStack": true
}
},
{
@@ -177,9 +177,9 @@
}
},
"params": {
- "UseDualStack": true,
+ "Region": "us-gov-east-1",
"UseFIPS": false,
- "Region": "us-gov-east-1"
+ "UseDualStack": true
}
},
{
@@ -188,9 +188,9 @@
"error": "FIPS and DualStack are enabled, but this partition does not support one or both"
},
"params": {
- "UseDualStack": true,
+ "Region": "us-iso-east-1",
"UseFIPS": true,
- "Region": "us-iso-east-1"
+ "UseDualStack": true
}
},
{
@@ -201,9 +201,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "us-iso-east-1",
"UseFIPS": true,
- "Region": "us-iso-east-1"
+ "UseDualStack": false
}
},
{
@@ -212,9 +212,9 @@
"error": "DualStack is enabled but this partition does not support DualStack"
},
"params": {
- "UseDualStack": true,
+ "Region": "us-iso-east-1",
"UseFIPS": false,
- "Region": "us-iso-east-1"
+ "UseDualStack": true
}
},
{
@@ -225,9 +225,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "us-iso-east-1",
"UseFIPS": false,
- "Region": "us-iso-east-1"
+ "UseDualStack": false
}
},
{
@@ -236,9 +236,9 @@
"error": "FIPS and DualStack are enabled, but this partition does not support one or both"
},
"params": {
- "UseDualStack": true,
+ "Region": "us-isob-east-1",
"UseFIPS": true,
- "Region": "us-isob-east-1"
+ "UseDualStack": true
}
},
{
@@ -249,9 +249,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "us-isob-east-1",
"UseFIPS": true,
- "Region": "us-isob-east-1"
+ "UseDualStack": false
}
},
{
@@ -260,9 +260,9 @@
"error": "DualStack is enabled but this partition does not support DualStack"
},
"params": {
- "UseDualStack": true,
+ "Region": "us-isob-east-1",
"UseFIPS": false,
- "Region": "us-isob-east-1"
+ "UseDualStack": true
}
},
{
@@ -273,9 +273,9 @@
}
},
"params": {
- "UseDualStack": false,
+ "Region": "us-isob-east-1",
"UseFIPS": false,
- "Region": "us-isob-east-1"
+ "UseDualStack": false
}
},
{
@@ -286,9 +286,9 @@
}
},
"params": {
- "UseDualStack": false,
- "UseFIPS": false,
"Region": "us-east-1",
+ "UseFIPS": false,
+ "UseDualStack": false,
"Endpoint": "https://example.com"
}
},
@@ -300,8 +300,8 @@
}
},
"params": {
- "UseDualStack": false,
"UseFIPS": false,
+ "UseDualStack": false,
"Endpoint": "https://example.com"
}
},
@@ -311,9 +311,9 @@
"error": "Invalid Configuration: FIPS and custom endpoint are not supported"
},
"params": {
- "UseDualStack": false,
- "UseFIPS": true,
"Region": "us-east-1",
+ "UseFIPS": true,
+ "UseDualStack": false,
"Endpoint": "https://example.com"
}
},
@@ -323,9 +323,9 @@
"error": "Invalid Configuration: Dualstack and custom endpoint are not supported"
},
"params": {
- "UseDualStack": true,
- "UseFIPS": false,
"Region": "us-east-1",
+ "UseFIPS": false,
+ "UseDualStack": true,
"Endpoint": "https://example.com"
}
},
diff --git a/services/imagebuilder/src/main/resources/codegen-resources/service-2.json b/services/imagebuilder/src/main/resources/codegen-resources/service-2.json
index dee0502afd3c..3f1b55a9e8e0 100644
--- a/services/imagebuilder/src/main/resources/codegen-resources/service-2.json
+++ b/services/imagebuilder/src/main/resources/codegen-resources/service-2.json
@@ -3501,7 +3501,7 @@
},
"dateNextRun":{
"shape":"DateTime",
- "documentation":"Detail are used in Get operations.Item are used in List operations.scanName and a findingId. You retrieve the findingId when you call GetFindings.scanName, findingId, errorCode and error message.Security or All. The Security type only generates findings related to security. The All type generates both security findings and quality findings. Defaults to Security type if missing.STANDARD scan type. If not specified, it will be auto generated. Standard or Express. Defaults to Standard type if missing.Express scans run on limited resources and use a limited set of detectors to analyze your code in near-real time. Standard scans have standard resource limits and use the full set of detectors to analyze your code.
"
+ }
+ }
+ },
+ "CreateScanResponse":{
+ "type":"structure",
+ "required":[
+ "resourceId",
+ "runId",
+ "scanName",
+ "scanState"
+ ],
+ "members":{
+ "resourceId":{
+ "shape":"ResourceId",
+ "documentation":"CostCenter, Environment, or Secret. Tag keys are case sensitive.111122223333, Production, or a team name. Omitting the tag value is the same as using an empty string. Tag values are case sensitive.InProgress, Successful, or Failed.scanName when you call CreateScan on the code resource you upload to this URL.requestHeaders using any HTTP client.CodeLine objects that describe where the security vulnerability appears in your code.EncryptionConfig object that contains the KMS key ARN to use for encryption. By default, CodeGuru Security uses an AWS-managed key for encryption. To specify your own key, call UpdateAccountConfiguration.nextToken element is returned in the response. Use nextToken in a subsequent request to retrieve additional results.nextToken value returned from the previous request to continue listing results after the first page.Open, Closed, or All.GetFindings to continue listing results after the current page. CreateScan operation. Defaults to the latest scan run if missing.Security or All. The Security type only generates findings related to security. The All type generates both security findings and quality findings.InProgress, Successful, or Failed.STANDARD scan types.nextToken element is returned in the response. Use nextToken in a subsequent request to retrieve additional results.nextToken value returned from the previous request to continue listing results after the first page.AccountFindingsMetric objects retrieved from the specified time interval.ListFindingMetrics to continue listing results after the current page. nextToken element is returned in the response. Use nextToken in a subsequent request to retrieve additional results.nextToken value returned from the previous request to continue listing results after the first page.ListScans to continue listing results after the current page.ScanSummary objects with information about all scans in an account.ScanName object. You can retrieve this ARN by calling ListScans or GetScan.
"
+ }
+ }
+ },
+ "Long":{
+ "type":"long",
+ "box":true
+ },
+ "MetricsSummary":{
+ "type":"structure",
+ "members":{
+ "categoriesWithMostFindings":{
+ "shape":"CategoriesWithMostFindings",
+ "documentation":"CostCenter, Environment, or Secret. Tag keys are case sensitive.111122223333, Production, or a team name. Omitting the tag value is the same as using an empty string. Tag values are case sensitive.CategoryWithFindingNum objects for the top 5 finding categories with the most open findings in an account.ScanNameWithFindingNum objects for the top 3 scans with the most number of open findings in an account.ScanNameWithFindingNum objects for the top 3 scans with the most number of open critical findings in an account.SuggestedFix objects. Each object contains information about a suggested code fix to remediate the finding.In Progress, Complete, or Failed. ScanName object. You can retrieve this ARN by calling ListScans or GetScan.
"
+ }
+ }
+ },
+ "TagResourceResponse":{
+ "type":"structure",
+ "members":{
+ }
+ },
+ "TagValue":{
+ "type":"string",
+ "max":256,
+ "min":0
+ },
+ "ThrottlingException":{
+ "type":"structure",
+ "required":[
+ "errorCode",
+ "message"
+ ],
+ "members":{
+ "errorCode":{
+ "shape":"String",
+ "documentation":"CostCenter, Environment, or Secret. Tag keys are case sensitive.111122223333, Production, or a team name. Omitting the tag value is the same as using an empty string. Tag values are case sensitive.ScanName object. You can retrieve this ARN by calling ListScans or GetScan.EncryptionConfig object that contains the KMS key ARN to use for encryption.SnapshotS3Location to start your simulation from a snapshot.SnapshotS3Location then you can't provide a SchemaS3Location.SnapshotS3Location to start your simulation from a snapshot.SnapshotS3Location then you can't provide a SchemaS3Location.StandardsArn. To obtain the ARN for a standard, use the DescribeStandards operation.BatchImportFindings must be called by one of the following:
BatchImportFindings from needs to be the same as the AwsAccountId attribute for the finding.BatchImportFindings from the allow-listed account and send findings from different customer accounts in the same batch.BatchImportFindings cannot be used to update the following finding fields and objects, which Security Hub customers use to manage their investigation workflow.
Note UserDefinedFields VerificationState Workflow BatchImportFindings to update the following attributes.
Confidence Criticality RelatedFindings Severity Types FindingProviderFields to provide values for these attributes.CreateMembers action to create the member account in Security Hub.
"
},
+ "ActionList":{
+ "type":"list",
+ "member":{"shape":"AutomationRulesAction"},
+ "max":1,
+ "min":1
+ },
"ActionLocalIpDetails":{
"type":"structure",
"members":{
@@ -1363,6 +1455,309 @@
"DEFAULT"
]
},
+ "AutomationRulesAction":{
+ "type":"structure",
+ "members":{
+ "Type":{
+ "shape":"AutomationRulesActionType",
+ "documentation":"Types finding field. The Types finding field provides one or more finding types in the format of namespace/category/classifier that classify a finding. For more information, see Types taxonomy for ASFF in the Security Hub User Guide. >ENABLED, Security Hub will apply the rule to findings and finding updates after the rule is created. true for a rule, Security Hub applies the rule action to a finding that matches the rule criteria and won't evaluate other rules for the finding.
The default value of this field is false. date-time format specified in RFC 3339 section 5.6, Internet Date/Time Format. The value cannot contain spaces. For example, 2020-03-22T13:22:13.933Z.date-time format specified in RFC 3339 section 5.6, Internet Date/Time Format. The value cannot contain spaces. For example, 2020-03-22T13:22:13.933Z.VerificationState field of a finding. Confidence field of a finding. Criticality field of a finding. Types field of a finding. UserDefinedFields field of a finding. date-time format specified in RFC 3339 section 5.6, Internet Date/Time Format. The value cannot contain spaces. For example, 2020-03-22T13:22:13.933Z.date-time format specified in RFC 3339 section 5.6, Internet Date/Time Format. The value cannot contain spaces. For example, 2020-03-22T13:22:13.933Z.date-time format specified in RFC 3339 section 5.6, Internet Date/Time Format. The value cannot contain spaces. For example, 2020-03-22T13:22:13.933Z.date-time format specified in RFC 3339 section 5.6, Internet Date/Time Format. The value cannot contain spaces. For example, 2020-03-22T13:22:13.933Z.Confidence is scored on a 0–100 basis using a ratio scale. A value of 0 means 0 percent confidence, and a value of 100 means 100 percent confidence. For example, a data exfiltration detection based on a statistical deviation of network traffic has low confidence because an actual exfiltration hasn't been verified. For more information, see Confidence in the Security Hub User Guide. Criticality is scored on a 0–100 basis, using a ratio scale that supports only full integers. A score of 0 means that the underlying resources have no criticality, and a score of 100 is reserved for the most critical resources. For more information, see Criticality in the Security Hub User Guide.2020-03-22T13:22:13.933Z. ENABLED, Security Hub will apply the rule to findings and finding updates after the rule is created. 
To change the value of this parameter after creating a rule, use BatchUpdateAutomationRules. true for a rule, Security Hub applies the rule action to a finding that matches the rule criteria and won't evaluate other rules for the finding.
The default value of this field is false. date-time format specified in RFC 3339 section 5.6, Internet Date/Time Format. The value cannot contain spaces. For example, 2020-03-22T13:22:13.933Z.date-time format specified in RFC 3339 section 5.6, Internet Date/Time Format. The value cannot contain spaces. For example, 2020-03-22T13:22:13.933Z.RuleStatus of ENABLED and DISABLED. RuleArn, ErrorCode, and ErrorMessage. This parameter tells you which automation rules the request didn't delete and why. RuleArn, ErrorCode, and ErrorMessage. This parameter tells you which automation rules the request didn't retrieve and why. RuleStatus and RuleOrder. RuleArn, ErrorCode, and ErrorMessage. This parameter tells you which automation rules the request didn't update and why. Enabled, Security Hub will apply the rule to findings and finding updates after the rule is created. To change the value of this parameter after creating a rule, use BatchUpdateAutomationRules. true for a rule, Security Hub applies the rule action to a finding that matches the rule criteria and won't evaluate other rules for the finding. The default value of this field is false. Criteria. NextToken from a previously truncated response. On your first call to the ListAutomationRules API, set the value of this parameter to NULL. RuleStatus of ENABLED and DISABLED. RuleArn, ErrorCode, and ErrorMessage. This parameter tells you which automation rules the request didn't process and why. ENABLED, Security Hub will apply the rule to findings and finding updates after the rule is created. To change the value of this parameter after creating a rule, use BatchUpdateAutomationRules. true for a rule, Security Hub applies the rule action to a finding that matches the rule criteria and won't evaluate other rules for the finding.
The default value of this field is false. Criteria. NoReboot parameter to true in the API request, or use the --no-reboot option in the CLI to prevent Amazon EC2 from shutting down and rebooting the instance.NoReboot parameter to true in the API request, or by using the --no-reboot option in the CLI, we can't guarantee the file system integrity of the created image.instanceType | kernel | ramdisk | userData | disableApiTermination | instanceInitiatedShutdownBehavior | rootDeviceName | blockDeviceMapping | productCodes | sourceDestCheck | groupSet | ebsOptimized | sriovNetSupport InvalidIPAddress.InUse).AuthFailure error if the address is already allocated to another Amazon Web Services account.InvalidIPAddress.InUse).AuthFailure error if the address is already allocated to another Amazon Web Services account.standard) or instances in a VPC (vpc).vpc).standard. Otherwise, the default is vpc.vpc).vpc) or instances in EC2-Classic (standard).vpc).DryRunOperation. Otherwise, it is UnauthorizedOperation.true or false.
true, your client's IP address is used when you connect to a resource.false, the elastic network interface IP address is used when you connect to a resource.true interface.efa and trunk.interface.interface, efa, and trunk.DryRunOperation. Otherwise, it is UnauthorizedOperation.
",
+ "documentation":"allocation-id - [EC2-VPC] The allocation ID for the address.association-id - [EC2-VPC] The association ID for the address.domain - Indicates whether the address is for use in EC2-Classic (standard) or in a VPC (vpc).instance-id - The ID of the instance the address is associated with, if any.network-border-group - A unique set of Availability Zones, Local Zones, or Wavelength Zones from where Amazon Web Services advertises IP addresses. network-interface-id - [EC2-VPC] The ID of the network interface that the address is associated with, if any.network-interface-owner-id - The Amazon Web Services account ID of the owner.private-ip-address - [EC2-VPC] The private IP address associated with the Elastic IP address.public-ip - The Elastic IP address, or the carrier IP address.tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
",
"locationName":"Filter"
},
"PublicIps":{
@@ -18038,7 +18140,7 @@
},
"AllocationIds":{
"shape":"AllocationIdList",
- "documentation":"allocation-id - The allocation ID for the address.association-id - The association ID for the address.instance-id - The ID of the instance the address is associated with, if any.network-border-group - A unique set of Availability Zones, Local Zones, or Wavelength Zones from where Amazon Web Services advertises IP addresses. network-interface-id - The ID of the network interface that the address is associated with, if any.network-interface-owner-id - The Amazon Web Services account ID of the owner.private-ip-address - The private IP address associated with the Elastic IP address.public-ip - The Elastic IP address, or the carrier IP address.tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.DryRunOperation. Otherwise, it is UnauthorizedOperation.
",
+ "locationName":"Filter"
+ },
+ "InstanceConnectEndpointIds":{
+ "shape":"ValueStringList",
+ "documentation":"instance-connect-endpoint-id - The ID of the EC2 Instance Connect Endpoint.state - The state of the EC2 Instance Connect Endpoint (create-in-progress | create-complete | create-failed | delete-in-progress | delete-complete | delete-failed).subnet-id - The ID of the subnet in which the EC2 Instance Connect Endpoint was created.tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.tag-value - The value of a tag assigned to the resource. Use this filter to find all resources that have a tag with a specific value, regardless of tag key.vpc-id - The ID of the VPC in which the EC2 Instance Connect Endpoint was created.null when there are no more items to return.true or false.
true, your client's IP address is used when you connect to a resource.false, the elastic network interface IP address is used when you connect to a resource.true
Creates an EFS access point. An access point is an application-specific view into an EFS file system that applies an operating system user and group, and a file system path, to any file system request made through the access point. The operating system user and group override any identity information provided by the NFS client. The file system path is exposed as the access point's root directory. Applications using the access point can only access data in the application's own directory and any subdirectories. To learn more, see Mounting a file system using EFS access points.
If multiple requests to create access points on the same file system are sent in quick succession, and the file system is near the limit of 1000 access points, you may experience a throttling response for these requests. This is to ensure that the file system does not exceed the stated access point limit.
This operation requires permissions for the elasticfilesystem:CreateAccessPoint action.
Creates an EFS access point. An access point is an application-specific view into an EFS file system that applies an operating system user and group, and a file system path, to any file system request made through the access point. The operating system user and group override any identity information provided by the NFS client. The file system path is exposed as the access point's root directory. Applications using the access point can only access data in the application's own directory and any subdirectories. To learn more, see Mounting a file system using EFS access points.
If multiple requests to create access points on the same file system are sent in quick succession, and the file system is near the limit of 1,000 access points, you may experience a throttling response for these requests. This is to ensure that the file system does not exceed the stated access point limit.
This operation requires permissions for the elasticfilesystem:CreateAccessPoint action.
Access points can be tagged on creation. If tags are specified in the creation action, IAM performs additional authorization on the elasticfilesystem:TagResource action to verify if users have permissions to create tags. Therefore, you must grant explicit permissions to use the elasticfilesystem:TagResource action. For more information, see Granting permissions to tag resources during creation.
Creates a new, empty file system. The operation requires a creation token in the request that Amazon EFS uses to ensure idempotent creation (calling the operation with same creation token has no effect). If a file system does not currently exist that is owned by the caller's Amazon Web Services account with the specified creation token, this operation does the following:
Creates a new, empty file system. The file system will have an Amazon EFS assigned ID, and an initial lifecycle state creating.
Returns with the description of the created file system.
Otherwise, this operation returns a FileSystemAlreadyExists error with the ID of the existing file system.
For basic use cases, you can use a randomly generated UUID for the creation token.
The idempotent operation allows you to retry a CreateFileSystem call without risk of creating an extra file system. This can happen when an initial call fails in a way that leaves it uncertain whether or not a file system was actually created. An example might be that a transport level timeout occurred or your connection was reset. As long as you use the same creation token, if the initial call had succeeded in creating a file system, the client can learn of its existence from the FileSystemAlreadyExists error.
For more information, see Creating a file system in the Amazon EFS User Guide.
The CreateFileSystem call returns while the file system's lifecycle state is still creating. You can check the file system creation status by calling the DescribeFileSystems operation, which among other things returns the file system state.
This operation accepts an optional PerformanceMode parameter that you choose for your file system. We recommend generalPurpose performance mode for most file systems. File systems using the maxIO performance mode can scale to higher levels of aggregate throughput and operations per second with a tradeoff of slightly higher latencies for most file operations. The performance mode can't be changed after the file system has been created. For more information, see Amazon EFS performance modes.
You can set the throughput mode for the file system using the ThroughputMode parameter.
After the file system is fully created, Amazon EFS sets its lifecycle state to available, at which point you can create one or more mount targets for the file system in your VPC. For more information, see CreateMountTarget. You mount your Amazon EFS file system on an EC2 instance in your VPC by using the mount target. For more information, see Amazon EFS: How it Works.
This operation requires permissions for the elasticfilesystem:CreateFileSystem action.
Creates a new, empty file system. The operation requires a creation token in the request that Amazon EFS uses to ensure idempotent creation (calling the operation with same creation token has no effect). If a file system does not currently exist that is owned by the caller's Amazon Web Services account with the specified creation token, this operation does the following:
Creates a new, empty file system. The file system will have an Amazon EFS assigned ID, and an initial lifecycle state creating.
Returns with the description of the created file system.
Otherwise, this operation returns a FileSystemAlreadyExists error with the ID of the existing file system.
For basic use cases, you can use a randomly generated UUID for the creation token.
The idempotent operation allows you to retry a CreateFileSystem call without risk of creating an extra file system. This can happen when an initial call fails in a way that leaves it uncertain whether or not a file system was actually created. An example might be that a transport level timeout occurred or your connection was reset. As long as you use the same creation token, if the initial call had succeeded in creating a file system, the client can learn of its existence from the FileSystemAlreadyExists error.
For more information, see Creating a file system in the Amazon EFS User Guide.
The CreateFileSystem call returns while the file system's lifecycle state is still creating. You can check the file system creation status by calling the DescribeFileSystems operation, which among other things returns the file system state.
This operation accepts an optional PerformanceMode parameter that you choose for your file system. We recommend generalPurpose performance mode for most file systems. File systems using the maxIO performance mode can scale to higher levels of aggregate throughput and operations per second with a tradeoff of slightly higher latencies for most file operations. The performance mode can't be changed after the file system has been created. For more information, see Amazon EFS performance modes.
You can set the throughput mode for the file system using the ThroughputMode parameter.
After the file system is fully created, Amazon EFS sets its lifecycle state to available, at which point you can create one or more mount targets for the file system in your VPC. For more information, see CreateMountTarget. You mount your Amazon EFS file system on an EC2 instance in your VPC by using the mount target. For more information, see Amazon EFS: How it Works.
This operation requires permissions for the elasticfilesystem:CreateFileSystem action.
File systems can be tagged on creation. If tags are specified in the creation action, IAM performs additional authorization on the elasticfilesystem:TagResource action to verify if users have permissions to create tags. Therefore, you must grant explicit permissions to use the elasticfilesystem:TagResource action. For more information, see Granting permissions to tag resources during creation.
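The creation-token idempotency described above can be modeled in a few lines. This is a toy in-memory stand-in, not the EFS service; the class and ID format are made up for illustration.

```python
import uuid

class FileSystemAlreadyExists(Exception):
    def __init__(self, file_system_id: str):
        super().__init__(file_system_id)
        self.file_system_id = file_system_id  # the error reveals the existing ID

class ToyEfs:
    """In-memory model of CreateFileSystem idempotency: at most one
    file system ever exists per creation token."""
    def __init__(self):
        self._by_token = {}

    def create_file_system(self, creation_token: str) -> str:
        if creation_token in self._by_token:
            raise FileSystemAlreadyExists(self._by_token[creation_token])
        fs_id = f"fs-{uuid.uuid4().hex[:8]}"
        self._by_token[creation_token] = fs_id
        return fs_id

efs = ToyEfs()
token = str(uuid.uuid4())             # a random UUID suffices for basic use cases
fs_id = efs.create_file_system(token)
try:
    efs.create_file_system(token)     # a retry cannot create a second file system
except FileSystemAlreadyExists as err:
    assert err.file_system_id == fs_id
```

This is why retrying CreateFileSystem with the same token after an ambiguous failure (timeout, reset connection) is safe: either it succeeds, or the FileSystemAlreadyExists error tells you the ID of the file system the earlier call created.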
Describes the status of the destination Amazon EFS file system. If the status is ERROR, the destination file system in the replication configuration is in a failed state and is unrecoverable. To access the file system data, restore a backup of the failed file system to a new file system.
Describes the status of the destination Amazon EFS file system.
The Paused state occurs as a result of opting out of the source or destination Region after the replication configuration was created. To resume replication for the file system, you need to again opt in to the Amazon Web Services Region. For more information, see Managing Amazon Web Services Regions in the Amazon Web Services General Reference Guide.
The Error state occurs when either the source or the destination file system (or both) is in a failed state and is unrecoverable. For more information, see Monitoring replication status in the Amazon EFS User Guide. You must delete the replication configuration, and then restore the most recent backup of the failed file system (either the source or the destination) to a new file system.
Creates member accounts of the current Amazon Web Services account by specifying a list of Amazon Web Services account IDs. This step is a prerequisite for managing the associated member accounts either by invitation or through an organization.
When using Create Members as an organizations delegated administrator this action will enable GuardDuty in the added member accounts, with the exception of the organization delegated administrator account, which must enable GuardDuty prior to being added as a member.
If you are adding accounts by invitation, use this action after GuardDuty has been enabled in potential member accounts and before using InviteMembers.
" + "documentation":"Creates member accounts of the current Amazon Web Services account by specifying a list of Amazon Web Services account IDs. This step is a prerequisite for managing the associated member accounts either by invitation or through an organization.
As a delegated administrator, using CreateMembers will enable GuardDuty in the added member accounts, with the exception of the organization delegated administrator account. A delegated administrator must enable GuardDuty prior to being added as a member.
If you are adding accounts by invitation, before using InviteMembers, use CreateMembers after GuardDuty has been enabled in potential member accounts.
If you disassociate a member from a GuardDuty delegated administrator, the member account details obtained from this API, including the associated email addresses, will be retained. This is done so that the delegated administrator can invoke the InviteMembers API without the need to invoke the CreateMembers API again. To remove the details associated with a member account, the delegated administrator must invoke the DeleteMembers API.
" }, "CreatePublishingDestination":{ "name":"CreatePublishingDestination", @@ -357,7 +357,7 @@ {"shape":"BadRequestException"}, {"shape":"InternalServerErrorException"} ], - "documentation":"Disassociates the current GuardDuty member account from its administrator account.
With autoEnableOrganizationMembers configuration for your organization set to ALL, you'll receive an error if you attempt to disable GuardDuty in a member account.
Disassociates the current GuardDuty member account from its administrator account.
When you disassociate an invited member from a GuardDuty delegated administrator, the member account details obtained from the CreateMembers API, including the associated email addresses, are retained. This is done so that the delegated administrator can invoke the InviteMembers API without the need to invoke the CreateMembers API again. To remove the details associated with a member account, the delegated administrator must invoke the DeleteMembers API.
With autoEnableOrganizationMembers configuration for your organization set to ALL, you'll receive an error if you attempt to disable GuardDuty in a member account.
Disassociates the current GuardDuty member account from its administrator account.
", + "documentation":"Disassociates the current GuardDuty member account from its administrator account.
When you disassociate an invited member from a GuardDuty delegated administrator, the member account details obtained from the CreateMembers API, including the associated email addresses, are retained. This is done so that the delegated administrator can invoke the InviteMembers API without the need to invoke the CreateMembers API again. To remove the details associated with a member account, the delegated administrator must invoke the DeleteMembers API.
", "deprecated":true, "deprecatedMessage":"This operation is deprecated, use DisassociateFromAdministratorAccount instead" }, @@ -389,7 +389,7 @@ {"shape":"BadRequestException"}, {"shape":"InternalServerErrorException"} ], - "documentation":"Disassociates GuardDuty member accounts (to the current administrator account) specified by the account IDs.
With autoEnableOrganizationMembers configuration for your organization set to ALL, you'll receive an error if you attempt to disassociate a member account before removing them from your Amazon Web Services organization.
Disassociates GuardDuty member accounts (from the current administrator account) specified by the account IDs.
When you disassociate an invited member from a GuardDuty delegated administrator, the member account details obtained from the CreateMembers API, including the associated email addresses, are retained. This is done so that the delegated administrator can invoke the InviteMembers API without the need to invoke the CreateMembers API again. To remove the details associated with a member account, the delegated administrator must invoke the DeleteMembers API.
With autoEnableOrganizationMembers configuration for your organization set to ALL, you'll receive an error if you attempt to disassociate a member account before removing them from your Amazon Web Services organization.
Invites other Amazon Web Services accounts (created as members of the current Amazon Web Services account by CreateMembers) to enable GuardDuty, and allow the current Amazon Web Services account to view and manage these accounts' findings on their behalf as the GuardDuty administrator account.
" + "documentation":"Invites Amazon Web Services accounts to become members of an organization administered by the Amazon Web Services account that invokes this API. If you are using Amazon Web Services Organizations to manager your GuardDuty environment, this step is not needed. For more information, see Managing accounts with Amazon Web Services Organizations.
To invite Amazon Web Services accounts, the first step is to ensure that GuardDuty has been enabled in the potential member accounts. You can now invoke this API to add accounts by invitation. The invited accounts can either accept or decline the invitation from their GuardDuty accounts. Each invited Amazon Web Services account can choose to accept the invitation from only one Amazon Web Services account. For more information, see Managing GuardDuty accounts by invitation.
After the invite has been accepted and you choose to disassociate a member account (by using DisassociateMembers) from your account, the details of the member account obtained by invoking CreateMembers, including the associated email addresses, will be retained. This is done so that you can invoke InviteMembers without the need to invoke CreateMembers again. To remove the details associated with a member account, you must also invoke DeleteMembers.
" }, "ListCoverage":{ "name":"ListCoverage", @@ -3096,7 +3096,7 @@ "members":{ "Domain":{ "shape":"String", - "documentation":"The domain information for the API request.
", + "documentation":"The domain information for the DNS query.
", "locationName":"domain" }, "Protocol":{ From 1c1632a4354844bd43f04128c6380df2d39f82dc Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 15 Jun 2023 18:09:30 +0000 Subject: [PATCH 111/317] Amazon Location Service Update: Amazon Location Service adds categories to places, including filtering on those categories in searches. Also, you can now add metadata properties to your geofences. --- ...feature-AmazonLocationService-2fd046a.json | 6 ++ .../codegen-resources/service-2.json | 90 +++++++++++++++++-- 2 files changed, 89 insertions(+), 7 deletions(-) create mode 100644 .changes/next-release/feature-AmazonLocationService-2fd046a.json diff --git a/.changes/next-release/feature-AmazonLocationService-2fd046a.json b/.changes/next-release/feature-AmazonLocationService-2fd046a.json new file mode 100644 index 000000000000..d0173ff6b5bc --- /dev/null +++ b/.changes/next-release/feature-AmazonLocationService-2fd046a.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Location Service", + "contributor": "", + "description": "Amazon Location Service adds categories to places, including filtering on those categories in searches. Also, you can now add metadata properties to your geofences." +} diff --git a/services/location/src/main/resources/codegen-resources/service-2.json b/services/location/src/main/resources/codegen-resources/service-2.json index c5dd73f14f9a..e58db3bed5ba 100644 --- a/services/location/src/main/resources/codegen-resources/service-2.json +++ b/services/location/src/main/resources/codegen-resources/service-2.json @@ -144,7 +144,7 @@ {"shape":"ValidationException"}, {"shape":"ThrottlingException"} ], - "documentation":"Uploads position update data for one or more devices to a tracker resource. Amazon Location uses the data when it reports the last known device position and position history. Amazon Location retains location data for 30 days.
Position updates are handled based on the PositionFiltering property of the tracker. When PositionFiltering is set to TimeBased, updates are evaluated against linked geofence collections, and location data is stored at a maximum of one position per 30 second interval. If your update frequency is more often than every 30 seconds, only one update per 30 seconds is stored for each unique device ID.
When PositionFiltering is set to DistanceBased filtering, location data is stored and evaluated against linked geofence collections only if the device has moved more than 30 m (98.4 ft).
When PositionFiltering is set to AccuracyBased filtering, location data is stored and evaluated against linked geofence collections only if the device has moved more than the measured accuracy. For example, if two consecutive updates from a device have a horizontal accuracy of 5 m and 10 m, the second update is neither stored or evaluated if the device has moved less than 15 m. If PositionFiltering is set to AccuracyBased filtering, Amazon Location uses the default value { \"Horizontal\": 0} when accuracy is not provided on a DevicePositionUpdate.
Uploads position update data for one or more devices to a tracker resource (up to 10 devices per batch). Amazon Location uses the data when it reports the last known device position and position history. Amazon Location retains location data for 30 days.
Position updates are handled based on the PositionFiltering property of the tracker. When PositionFiltering is set to TimeBased, updates are evaluated against linked geofence collections, and location data is stored at a maximum of one position per 30 second interval. If your update frequency is more often than every 30 seconds, only one update per 30 seconds is stored for each unique device ID.
When PositionFiltering is set to DistanceBased filtering, location data is stored and evaluated against linked geofence collections only if the device has moved more than 30 m (98.4 ft).
When PositionFiltering is set to AccuracyBased filtering, location data is stored and evaluated against linked geofence collections only if the device has moved more than the measured accuracy. For example, if two consecutive updates from a device have a horizontal accuracy of 5 m and 10 m, the second update is neither stored or evaluated if the device has moved less than 15 m. If PositionFiltering is set to AccuracyBased filtering, Amazon Location uses the default value { \"Horizontal\": 0} when accuracy is not provided on a DevicePositionUpdate.
The identifier for the geofence to be stored in a given geofence collection.
" }, + "GeofenceProperties":{ + "shape":"PropertyMap", + "documentation":"Specifies additional user-defined properties to store with the Geofence. An array of key-value pairs.
" + }, "Geometry":{ "shape":"GeofenceGeometry", "documentation":"Contains the details of the position of the geofence. Can be either a polygon or a circle. Including both will return a validation error.
Each geofence polygon can have a maximum of 1,000 vertices.
Contains the position update details for each device.
" + "documentation":"Contains the position update details for each device, up to 10 devices.
" } } }, @@ -2996,6 +3000,12 @@ "type":"double", "box":true }, + "FilterPlaceCategoryList":{ + "type":"list", + "member":{"shape":"PlaceCategory"}, + "max":5, + "min":1 + }, "GeoArn":{ "type":"string", "max":1600, @@ -3167,6 +3177,10 @@ "shape":"Id", "documentation":"The geofence identifier.
" }, + "GeofenceProperties":{ + "shape":"PropertyMap", + "documentation":"Contains additional user-defined properties stored with the geofence. An array of key-value pairs.
" + }, "Geometry":{ "shape":"GeofenceGeometry", "documentation":"Contains the geofence geometry details describing a polygon or a circle.
" @@ -3734,6 +3748,10 @@ "shape":"Id", "documentation":"The geofence identifier.
" }, + "GeofenceProperties":{ + "shape":"PropertyMap", + "documentation":"Contains additional user-defined properties stored with the geofence. An array of key-value pairs.
" + }, "Geometry":{ "shape":"GeofenceGeometry", "documentation":"Contains the geofence geometry details describing a polygon or a circle.
" @@ -4273,6 +4291,10 @@ "shape":"String", "documentation":"The numerical portion of an address, such as a building number.
" }, + "Categories":{ + "shape":"PlaceCategoryList", + "documentation":"The Amazon Location categories that describe this Place.
For more information about using categories, including a list of Amazon Location categories, see Categories and filtering, in the Amazon Location Service Developer Guide.
" + }, "Country":{ "shape":"String", "documentation":"A country/region specified using ISO 3166 3-digit country/region code. For example, CAN.
A county, or an area that's part of a larger region. For example, Metro Vancouver.
Categories from the data provider that describe the Place that are not mapped to any Amazon Location categories.
" + }, "TimeZone":{ "shape":"TimeZone", - "documentation":"The time zone in which the Place is located. Returned only when using HERE as the selected partner.
The time zone in which the Place is located. Returned only when using HERE or Grab as the selected partner.
For addresses with multiple units, the unit identifier. Can include numbers and letters, for example 3B or Unit 123.
Returned only for a place index that uses Esri as a data provider. Is not returned for SearchPlaceIndexForPosition.
For addresses with multiple units, the unit identifier. Can include numbers and letters, for example 3B or Unit 123.
Returned only for a place index that uses Esri or Grab as a data provider. Is not returned for SearchPlaceIndexForPosition.
For addresses with a UnitNumber, the type of unit. For example, Apartment.
For addresses with a UnitNumber, the type of unit. For example, Apartment.
Returned only for a place index that uses Esri as a data provider.
Contains details about addresses or points of interest that match the search criteria.
Not all details are included with all responses. Some details may only be returned by specific data partners.
" }, + "PlaceCategory":{ + "type":"string", + "max":35, + "min":0 + }, + "PlaceCategoryList":{ + "type":"list", + "member":{"shape":"PlaceCategory"}, + "max":10, + "min":1 + }, "PlaceGeometry":{ "type":"structure", "members":{ @@ -4341,6 +4378,17 @@ "max":50, "min":1 }, + "PlaceSupplementalCategory":{ + "type":"string", + "max":35, + "min":0 + }, + "PlaceSupplementalCategoryList":{ + "type":"list", + "member":{"shape":"PlaceSupplementalCategory"}, + "max":10, + "min":1 + }, "Position":{ "type":"list", "member":{"shape":"Double"}, @@ -4419,6 +4467,10 @@ "location":"uri", "locationName":"GeofenceId" }, + "GeofenceProperties":{ + "shape":"PropertyMap", + "documentation":"Specifies additional user-defined properties to store with the Geofence. An array of key-value pairs.
" + }, "Geometry":{ "shape":"GeofenceGeometry", "documentation":"Contains the details to specify the position of the geofence. Can be either a polygon or a circle. Including both will return a validation error.
Each geofence polygon can have a maximum of 1,000 vertices.
The Amazon Location categories that describe the Place.
For more information about using categories, including a list of Amazon Location categories, see Categories and filtering, in the Amazon Location Service Developer Guide.
" + }, "PlaceId":{ "shape":"PlaceId", - "documentation":"The unique identifier of the place. You can use this with the GetPlace operation to find the place again later.
For SearchPlaceIndexForSuggestions operations, the PlaceId is returned by place indexes that use Esri, Grab, or HERE as data providers.
The unique identifier of the Place. You can use this with the GetPlace operation to find the place again later, or to get full information for the Place.
The GetPlace request must use the same PlaceIndex resource as the SearchPlaceIndexForSuggestions that generated the Place ID.
For SearchPlaceIndexForSuggestions operations, the PlaceId is returned by place indexes that use Esri, Grab, or HERE as data providers.
Categories from the data provider that describe the Place and that are not mapped to any Amazon Location categories.
" }, "Text":{ "shape":"String", @@ -4715,6 +4775,10 @@ "shape":"BoundingBox", "documentation":"An optional parameter that limits the search results by returning only suggestions within a specified bounding box.
If provided, this parameter must contain a total of four consecutive numbers in two pairs. The first pair of numbers represents the X and Y coordinates (longitude and latitude, respectively) of the southwest corner of the bounding box; the second pair of numbers represents the X and Y coordinates (longitude and latitude, respectively) of the northeast corner of the bounding box.
For example, [-12.7935, -37.4835, -12.0684, -36.9542] represents a bounding box where the southwest corner has longitude -12.7935 and latitude -37.4835, and the northeast corner has longitude -12.0684 and latitude -36.9542.
FilterBBox and BiasPosition are mutually exclusive. Specifying both options results in an error.
A list of one or more Amazon Location categories to filter the returned places. If you include more than one category, the results include places that match any of the listed categories.
For more information about using categories, including a list of Amazon Location categories, see Categories and filtering, in the Amazon Location Service Developer Guide.
" + }, "FilterCountries":{ "shape":"CountryCodeList", "documentation":"An optional parameter that limits the search results by returning only suggestions within the provided list of countries.
Use the ISO 3166 3-character (alpha-3) country code. For example, Australia uses three upper-case characters: AUS.
Contains the coordinates for the optional bounding box specified in the request.
" }, + "FilterCategories":{ + "shape":"FilterPlaceCategoryList", + "documentation":"The optional category filter specified in the request.
" + }, "FilterCountries":{ "shape":"CountryCodeList", "documentation":"Contains the optional country filter specified in the request.
" @@ -4821,6 +4889,10 @@ "shape":"BoundingBox", "documentation":"An optional parameter that limits the search results by returning only places that are within the provided bounding box.
If provided, this parameter must contain a total of four consecutive numbers in two pairs. The first pair of numbers represents the X and Y coordinates (longitude and latitude, respectively) of the southwest corner of the bounding box; the second pair of numbers represents the X and Y coordinates (longitude and latitude, respectively) of the northeast corner of the bounding box.
For example, [-12.7935, -37.4835, -12.0684, -36.9542] represents a bounding box where the southwest corner has longitude -12.7935 and latitude -37.4835, and the northeast corner has longitude -12.0684 and latitude -36.9542.
FilterBBox and BiasPosition are mutually exclusive. Specifying both options results in an error.
A list of one or more Amazon Location categories to filter the returned places. If you include more than one category, the results include places that match any of the listed categories.
For more information about using categories, including a list of Amazon Location categories, see Categories and filtering, in the Amazon Location Service Developer Guide.
" + }, "FilterCountries":{ "shape":"CountryCodeList", "documentation":"An optional parameter that limits the search results by returning only places that are in a specified list of countries.
Valid values include ISO 3166 3-character (alpha-3) country codes. For example, Australia uses three upper-case characters: AUS.
Contains the coordinates for the optional bounding box specified in the request.
" }, + "FilterCategories":{ + "shape":"FilterPlaceCategoryList", + "documentation":"The optional category filter specified in the request.
" + }, "FilterCountries":{ "shape":"CountryCodeList", "documentation":"Contains the optional country filter specified in the request.
" From ff36749425db185b2880a255fa4a05f7ce376724 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 15 Jun 2023 18:09:34 +0000 Subject: [PATCH 112/317] AWS Audit Manager Update: This release introduces 2 Audit Manager features: CSV exports and new manual evidence options. You can now export your evidence finder results in CSV format. In addition, you can now add manual evidence to a control by entering free-form text or uploading a file from your browser. --- .../feature-AWSAuditManager-6dfbb70.json | 6 + .../codegen-resources/endpoint-tests.json | 166 +++++++++++------ .../codegen-resources/service-2.json | 169 ++++++++++++++---- 3 files changed, 245 insertions(+), 96 deletions(-) create mode 100644 .changes/next-release/feature-AWSAuditManager-6dfbb70.json diff --git a/.changes/next-release/feature-AWSAuditManager-6dfbb70.json b/.changes/next-release/feature-AWSAuditManager-6dfbb70.json new file mode 100644 index 000000000000..7234d0eaf20f --- /dev/null +++ b/.changes/next-release/feature-AWSAuditManager-6dfbb70.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS Audit Manager", + "contributor": "", + "description": "This release introduces 2 Audit Manager features: CSV exports and new manual evidence options. You can now export your evidence finder results in CSV format. In addition, you can now add manual evidence to a control by entering free-form text or uploading a file from your browser." 
+} diff --git a/services/auditmanager/src/main/resources/codegen-resources/endpoint-tests.json b/services/auditmanager/src/main/resources/codegen-resources/endpoint-tests.json index 7b3557950f12..6b6545622735 100644 --- a/services/auditmanager/src/main/resources/codegen-resources/endpoint-tests.json +++ b/services/auditmanager/src/main/resources/codegen-resources/endpoint-tests.json @@ -9,8 +9,8 @@ }, "params": { "Region": "ap-northeast-1", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -22,8 +22,8 @@ }, "params": { "Region": "ap-south-1", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -35,8 +35,8 @@ }, "params": { "Region": "ap-southeast-1", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -48,8 +48,8 @@ }, "params": { "Region": "ap-southeast-2", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -61,8 +61,8 @@ }, "params": { "Region": "ca-central-1", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -74,8 +74,8 @@ }, "params": { "Region": "eu-central-1", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -87,8 +87,8 @@ }, "params": { "Region": "eu-west-1", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -100,8 +100,8 @@ }, "params": { "Region": "eu-west-2", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -113,8 +113,8 @@ }, "params": { "Region": "us-east-1", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -126,8 +126,8 @@ }, "params": { "Region": "us-east-2", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -139,8 +139,8 @@ }, "params": { "Region": "us-west-1", - "UseDualStack": 
false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -152,8 +152,8 @@ }, "params": { "Region": "us-west-2", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -165,8 +165,8 @@ }, "params": { "Region": "us-east-1", - "UseDualStack": true, - "UseFIPS": true + "UseFIPS": true, + "UseDualStack": true } }, { @@ -178,8 +178,8 @@ }, "params": { "Region": "us-east-1", - "UseDualStack": false, - "UseFIPS": true + "UseFIPS": true, + "UseDualStack": false } }, { @@ -191,8 +191,8 @@ }, "params": { "Region": "us-east-1", - "UseDualStack": true, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": true } }, { @@ -204,8 +204,8 @@ }, "params": { "Region": "cn-north-1", - "UseDualStack": true, - "UseFIPS": true + "UseFIPS": true, + "UseDualStack": true } }, { @@ -217,8 +217,8 @@ }, "params": { "Region": "cn-north-1", - "UseDualStack": false, - "UseFIPS": true + "UseFIPS": true, + "UseDualStack": false } }, { @@ -230,8 +230,8 @@ }, "params": { "Region": "cn-north-1", - "UseDualStack": true, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": true } }, { @@ -243,8 +243,8 @@ }, "params": { "Region": "cn-north-1", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -256,8 +256,8 @@ }, "params": { "Region": "us-gov-east-1", - "UseDualStack": true, - "UseFIPS": true + "UseFIPS": true, + "UseDualStack": true } }, { @@ -269,8 +269,8 @@ }, "params": { "Region": "us-gov-east-1", - "UseDualStack": false, - "UseFIPS": true + "UseFIPS": true, + "UseDualStack": false } }, { @@ -282,8 +282,8 @@ }, "params": { "Region": "us-gov-east-1", - "UseDualStack": true, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": true } }, { @@ -295,8 +295,19 @@ }, "params": { "Region": "us-gov-east-1", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack 
enabled", + "expect": { + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": true, + "UseDualStack": true } }, { @@ -308,8 +319,19 @@ }, "params": { "Region": "us-iso-east-1", - "UseDualStack": false, - "UseFIPS": true + "UseFIPS": true, + "UseDualStack": false + } + }, + { + "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "error": "DualStack is enabled but this partition does not support DualStack" + }, + "params": { + "Region": "us-iso-east-1", + "UseFIPS": false, + "UseDualStack": true } }, { @@ -321,8 +343,19 @@ }, "params": { "Region": "us-iso-east-1", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "error": "FIPS and DualStack are enabled, but this partition does not support one or both" + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": true, + "UseDualStack": true } }, { @@ -334,8 +367,19 @@ }, "params": { "Region": "us-isob-east-1", - "UseDualStack": false, - "UseFIPS": true + "UseFIPS": true, + "UseDualStack": false + } + }, + { + "documentation": "For region us-isob-east-1 with FIPS disabled and DualStack enabled", + "expect": { + "error": "DualStack is enabled but this partition does not support DualStack" + }, + "params": { + "Region": "us-isob-east-1", + "UseFIPS": false, + "UseDualStack": true } }, { @@ -347,8 +391,8 @@ }, "params": { "Region": "us-isob-east-1", - "UseDualStack": false, - "UseFIPS": false + "UseFIPS": false, + "UseDualStack": false } }, { @@ -360,8 +404,8 @@ }, "params": { "Region": "us-east-1", - "UseDualStack": false, "UseFIPS": false, + "UseDualStack": false, "Endpoint": "https://example.com" } }, @@ -373,8 +417,8 @@ } }, "params": { - "UseDualStack": false, "UseFIPS": false, + "UseDualStack": false, "Endpoint": 
"https://example.com" } }, @@ -385,8 +429,8 @@ }, "params": { "Region": "us-east-1", - "UseDualStack": false, "UseFIPS": true, + "UseDualStack": false, "Endpoint": "https://example.com" } }, @@ -397,10 +441,16 @@ }, "params": { "Region": "us-east-1", - "UseDualStack": true, "UseFIPS": false, + "UseDualStack": true, "Endpoint": "https://example.com" } + }, + { + "documentation": "Missing region", + "expect": { + "error": "Invalid Configuration: Missing Region" + } } ], "version": "1.0" diff --git a/services/auditmanager/src/main/resources/codegen-resources/service-2.json b/services/auditmanager/src/main/resources/codegen-resources/service-2.json index 7bef4e5f657c..8420122febbb 100644 --- a/services/auditmanager/src/main/resources/codegen-resources/service-2.json +++ b/services/auditmanager/src/main/resources/codegen-resources/service-2.json @@ -104,9 +104,10 @@ {"shape":"ResourceNotFoundException"}, {"shape":"AccessDeniedException"}, {"shape":"ValidationException"}, - {"shape":"InternalServerException"} + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"} ], - "documentation":"Uploads one or more pieces of evidence to a control in an Audit Manager assessment. You can upload manual evidence from any Amazon Simple Storage Service (Amazon S3) bucket by specifying the S3 URI of the evidence.
You must upload manual evidence to your S3 bucket before you can upload it to your assessment. For instructions, see CreateBucket and PutObject in the Amazon Simple Storage Service API Reference.
The following restrictions apply to this action:
Maximum size of an individual evidence file: 100 MB
Number of daily manual evidence uploads per control: 100
Supported file formats: See Supported file types for manual evidence in the Audit Manager User Guide
For more information about Audit Manager service restrictions, see Quotas and restrictions for Audit Manager.
" + "documentation":"Adds one or more pieces of evidence to a control in an Audit Manager assessment.
You can import manual evidence from any S3 bucket by specifying the S3 URI of the object. You can also upload a file from your browser, or enter plain text in response to a risk assessment question.
The following restrictions apply to this action:
manualEvidence can be only one of the following: evidenceFileName, s3ResourcePath, or textResponse
Maximum size of an individual evidence file: 100 MB
Number of daily manual evidence uploads per control: 100
Supported file formats: See Supported file types for manual evidence in the Audit Manager User Guide
For more information about Audit Manager service restrictions, see Quotas and restrictions for Audit Manager.
" }, "CreateAssessment":{ "name":"CreateAssessment", @@ -253,7 +254,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InternalServerException"} ], - "documentation":"Deletes a custom control in Audit Manager.
" + "documentation":"Deletes a custom control in Audit Manager.
When you invoke this operation, the custom control is deleted from any frameworks or assessments that it’s currently part of. As a result, Audit Manager will stop collecting evidence for that custom control in all of your assessments. This includes assessments that you previously created before you deleted the custom control.
Returns the registration status of an account in Audit Manager.
" + "documentation":"Gets the registration status of an account in Audit Manager.
" }, "GetAssessment":{ "name":"GetAssessment", @@ -330,7 +331,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns an assessment from Audit Manager.
" + "documentation":"Gets information about a specified assessment.
" }, "GetAssessmentFramework":{ "name":"GetAssessmentFramework", @@ -346,7 +347,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns a framework from Audit Manager.
" + "documentation":"Gets information about a specified framework.
" }, "GetAssessmentReportUrl":{ "name":"GetAssessmentReportUrl", @@ -362,7 +363,7 @@ {"shape":"InternalServerException"}, {"shape":"ResourceNotFoundException"} ], - "documentation":"Returns the URL of an assessment report in Audit Manager.
" + "documentation":"Gets the URL of an assessment report in Audit Manager.
" }, "GetChangeLogs":{ "name":"GetChangeLogs", @@ -378,7 +379,7 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns a list of changelogs from Audit Manager.
" + "documentation":"Gets a list of changelogs from Audit Manager.
" }, "GetControl":{ "name":"GetControl", @@ -394,7 +395,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns a control from Audit Manager.
" + "documentation":"Gets information about a specified control.
" }, "GetDelegations":{ "name":"GetDelegations", @@ -409,7 +410,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns a list of delegations from an audit owner to a delegate.
" + "documentation":"Gets a list of delegations from an audit owner to a delegate.
" }, "GetEvidence":{ "name":"GetEvidence", @@ -425,7 +426,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns evidence from Audit Manager.
" + "documentation":"Gets information about a specified evidence item.
" }, "GetEvidenceByEvidenceFolder":{ "name":"GetEvidenceByEvidenceFolder", @@ -441,7 +442,23 @@ {"shape":"AccessDeniedException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns all evidence from a specified evidence folder in Audit Manager.
" + "documentation":"Gets all evidence from a specified evidence folder in Audit Manager.
" + }, + "GetEvidenceFileUploadUrl":{ + "name":"GetEvidenceFileUploadUrl", + "http":{ + "method":"GET", + "requestUri":"/evidenceFileUploadUrl" + }, + "input":{"shape":"GetEvidenceFileUploadUrlRequest"}, + "output":{"shape":"GetEvidenceFileUploadUrlResponse"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"} + ], + "documentation":"Creates a presigned Amazon S3 URL that can be used to upload a file as manual evidence. For instructions on how to use this operation, see Upload a file from your browser in the Audit Manager User Guide.
The following restrictions apply to this operation:
Maximum size of an individual evidence file: 100 MB
Number of daily manual evidence uploads per control: 100
Supported file formats: See Supported file types for manual evidence in the Audit Manager User Guide
For more information about Audit Manager service restrictions, see Quotas and restrictions for Audit Manager.
" }, "GetEvidenceFolder":{ "name":"GetEvidenceFolder", @@ -457,7 +474,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns an evidence folder from the specified assessment in Audit Manager.
" + "documentation":"Gets an evidence folder from a specified assessment in Audit Manager.
" }, "GetEvidenceFoldersByAssessment":{ "name":"GetEvidenceFoldersByAssessment", @@ -473,7 +490,7 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns the evidence folders from a specified assessment in Audit Manager.
" + "documentation":"Gets the evidence folders from a specified assessment in Audit Manager.
" }, "GetEvidenceFoldersByAssessmentControl":{ "name":"GetEvidenceFoldersByAssessmentControl", @@ -489,7 +506,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns a list of evidence folders that are associated with a specified control in an Audit Manager assessment.
" + "documentation":"Gets a list of evidence folders that are associated with a specified control in an Audit Manager assessment.
" }, "GetInsights":{ "name":"GetInsights", @@ -535,7 +552,7 @@ {"shape":"InternalServerException"}, {"shape":"ResourceNotFoundException"} ], - "documentation":"Returns the name of the delegated Amazon Web Services administrator account for the organization.
" + "documentation":"Gets the name of the delegated Amazon Web Services administrator account for a specified organization.
" }, "GetServicesInScope":{ "name":"GetServicesInScope", @@ -550,7 +567,7 @@ {"shape":"ValidationException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns a list of all of the Amazon Web Services that you can choose to include in your assessment. When you create an assessment, specify which of these services you want to include to narrow the assessment's scope.
" + "documentation":"Gets a list of all of the Amazon Web Services that you can choose to include in your assessment. When you create an assessment, specify which of these services you want to include to narrow the assessment's scope.
" }, "GetSettings":{ "name":"GetSettings", @@ -564,7 +581,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InternalServerException"} ], - "documentation":"Returns the settings for the specified Amazon Web Services account.
" + "documentation":"Gets the settings for a specified Amazon Web Services account.
" }, "ListAssessmentControlInsightsByControlDomain":{ "name":"ListAssessmentControlInsightsByControlDomain", @@ -1651,7 +1668,7 @@ }, "destination":{ "shape":"S3Url", - "documentation":"The destination of the assessment report.
" + "documentation":"The destination bucket where Audit Manager stores assessment reports.
" } }, "documentation":"The location where Audit Manager saves assessment reports for the given assessment.
" @@ -1994,7 +2011,7 @@ }, "type":{ "shape":"ControlType", - "documentation":"The type of control, such as a custom control or a standard control.
" + "documentation":"Specifies whether the control is a standard control or a custom control.
" }, "name":{ "shape":"ControlName", @@ -2195,7 +2212,7 @@ "sourceKeyword":{"shape":"SourceKeyword"}, "sourceFrequency":{ "shape":"SourceFrequency", - "documentation":"The frequency of evidence collection for the control mapping source.
" + "documentation":"Specifies how often evidence is collected from the control mapping source.
" }, "troubleshootingText":{ "shape":"TroubleshootingText", @@ -2504,7 +2521,7 @@ "sourceKeyword":{"shape":"SourceKeyword"}, "sourceFrequency":{ "shape":"SourceFrequency", - "documentation":"The frequency of evidence collection for the control mapping source.
" + "documentation":"Specifies how often evidence is collected from the control mapping source.
" }, "troubleshootingText":{ "shape":"TroubleshootingText", @@ -2598,6 +2615,20 @@ "min":1, "pattern":"^[a-zA-Z0-9\\s-_()\\[\\]]+$" }, + "DefaultExportDestination":{ + "type":"structure", + "members":{ + "destinationType":{ + "shape":"ExportDestinationType", + "documentation":"The destination type, such as Amazon S3.
" + }, + "destination":{ + "shape":"S3Url", + "documentation":"The destination bucket where Audit Manager stores exported files.
" + } + }, + "documentation":"The default s3 bucket where Audit Manager saves the files that you export from evidence finder.
" + }, "Delegation":{ "type":"structure", "members":{ @@ -3055,6 +3086,10 @@ "type":"list", "member":{"shape":"NonEmptyString"} }, + "ExportDestinationType":{ + "type":"string", + "enum":["S3"] + }, "Filename":{ "type":"string", "max":255, @@ -3078,11 +3113,11 @@ }, "type":{ "shape":"FrameworkType", - "documentation":"The framework type, such as a custom framework or a standard framework.
" + "documentation":"Specifies whether the framework is a standard framework or a custom framework.
" }, "complianceType":{ "shape":"ComplianceType", - "documentation":"The compliance type that the new custom framework supports, such as CIS or HIPAA.
" + "documentation":"The compliance type that the framework supports, such as CIS or HIPAA.
" }, "description":{ "shape":"FrameworkDescription", @@ -3094,7 +3129,7 @@ }, "controlSources":{ "shape":"ControlSources", - "documentation":"The sources that Audit Manager collects evidence from for the control.
" + "documentation":"The control data sources where Audit Manager collects evidence from.
" }, "controlSets":{ "shape":"ControlSets", @@ -3321,7 +3356,7 @@ "members":{ "control":{ "shape":"Control", - "documentation":" The name of the control that the GetControl API returned.
The details of the control that the GetControl API returned.
The file that you want to upload. For a list of supported file formats, see Supported file types for manual evidence in the Audit Manager User Guide.
", + "location":"querystring", + "locationName":"fileName" + } + } + }, + "GetEvidenceFileUploadUrlResponse":{ + "type":"structure", + "members":{ + "evidenceFileName":{ + "shape":"NonEmptyString", + "documentation":"The name of the uploaded manual evidence file that the presigned URL was generated for.
" + }, + "uploadUrl":{ + "shape":"NonEmptyString", + "documentation":"The presigned URL that was generated.
" + } + } + }, "GetEvidenceFolderRequest":{ "type":"structure", "required":[ @@ -3757,7 +3817,11 @@ }, "KeywordInputType":{ "type":"string", - "enum":["SELECT_FROM_LIST"] + "enum":[ + "SELECT_FROM_LIST", + "UPLOAD_FILE", + "INPUT_TEXT" + ] }, "KeywordValue":{ "type":"string", @@ -3893,7 +3957,7 @@ "members":{ "frameworkMetadataList":{ "shape":"FrameworkMetadataList", - "documentation":"The list of metadata objects for the framework.
" + "documentation":" A list of metadata that the ListAssessmentFrameworks API returns for each framework.
The metadata that's associated with the assessment.
" + "documentation":"The metadata that the ListAssessments API returns for each assessment.
The list of control metadata objects that the ListControls API returned.
A list of metadata that the ListControls API returns for each control.
The Amazon S3 URL that points to a manual evidence object.
" + "documentation":"The S3 URL of the object that's imported as manual evidence.
" + }, + "textResponse":{ + "shape":"ManualEvidenceTextResponse", + "documentation":"The plain text response that's entered and saved as manual evidence.
" + }, + "evidenceFileName":{ + "shape":"ManualEvidenceLocalFileName", + "documentation":"The name of the file that's uploaded as manual evidence. This name is populated using the evidenceFileName value from the GetEvidenceFileUploadUrl API response.
Evidence that's uploaded to Audit Manager manually.
" + "documentation":" Evidence that's manually added to a control in Audit Manager. manualEvidence can be one of the following: evidenceFileName, s3ResourcePath, or textResponse.
The default storage destination for assessment reports.
" + "documentation":"The default S3 destination bucket for storing assessment reports.
" }, "defaultProcessOwners":{ "shape":"Roles", @@ -4513,6 +4598,10 @@ "deregistrationPolicy":{ "shape":"DeregistrationPolicy", "documentation":"The deregistration policy for your Audit Manager data. You can use this attribute to determine how your data is handled when you deregister Audit Manager.
" + }, + "defaultExportDestination":{ + "shape":"DefaultExportDestination", + "documentation":"The default S3 destination bucket for storing evidence finder exports.
" } }, "documentation":"The settings object that holds all supported Audit Manager settings.
" @@ -4574,14 +4663,14 @@ "members":{ "keywordInputType":{ "shape":"KeywordInputType", - "documentation":"The input method for the keyword.
" + "documentation":"The input method for the keyword.
SELECT_FROM_LIST is used when mapping a data source for automated evidence.
When keywordInputType is SELECT_FROM_LIST, a keyword must be selected to collect automated evidence. For example, this keyword can be a CloudTrail event name, a rule name for Config, a Security Hub control, or the name of an Amazon Web Services API call.
UPLOAD_FILE and INPUT_TEXT are only used when mapping a data source for manual evidence.
When keywordInputType is UPLOAD_FILE, a file must be uploaded as manual evidence.
When keywordInputType is INPUT_TEXT, text must be entered as manual evidence.
The value of the keyword that's used when mapping a control data source. For example, this can be a CloudTrail event name, a rule name for Config, a Security Hub control, or the name of an Amazon Web Services API call.
If you’re mapping a data source to a rule in Config, the keywordValue that you specify depends on the type of rule:
For managed rules, you can use the rule identifier as the keywordValue. You can find the rule identifier from the list of Config managed rules.
Managed rule name: s3-bucket-acl-prohibited
keywordValue: S3_BUCKET_ACL_PROHIBITED
For custom rules, you form the keywordValue by adding the Custom_ prefix to the rule name. This prefix distinguishes the rule from a managed rule.
Custom rule name: my-custom-config-rule
keywordValue: Custom_my-custom-config-rule
For service-linked rules, you form the keywordValue by adding the Custom_ prefix to the rule name. In addition, you remove the suffix ID that appears at the end of the rule name.
Service-linked rule name: CustomRuleForAccount-conformance-pack-szsm1uv0w
keywordValue: Custom_CustomRuleForAccount-conformance-pack
Service-linked rule name: OrgConfigRule-s3-bucket-versioning-enabled-dbgzf8ba
keywordValue: Custom_OrgConfigRule-s3-bucket-versioning-enabled
The value of the keyword that's used when mapping a control data source. For example, this can be a CloudTrail event name, a rule name for Config, a Security Hub control, or the name of an Amazon Web Services API call.
If you’re mapping a data source to a rule in Config, the keywordValue that you specify depends on the type of rule:
For managed rules, you can use the rule identifier as the keywordValue. You can find the rule identifier from the list of Config managed rules. For some rules, the rule identifier is different from the rule name. For example, the rule name restricted-ssh has the following rule identifier: INCOMING_SSH_DISABLED. Make sure to use the rule identifier, not the rule name.
Keyword example for managed rules:
Managed rule name: s3-bucket-acl-prohibited
keywordValue: S3_BUCKET_ACL_PROHIBITED
For custom rules, you form the keywordValue by adding the Custom_ prefix to the rule name. This prefix distinguishes the custom rule from a managed rule.
Keyword example for custom rules:
Custom rule name: my-custom-config-rule
keywordValue: Custom_my-custom-config-rule
For service-linked rules, you form the keywordValue by adding the Custom_ prefix to the rule name. In addition, you remove the suffix ID that appears at the end of the rule name.
Keyword examples for service-linked rules:
Service-linked rule name: CustomRuleForAccount-conformance-pack-szsm1uv0w
keywordValue: Custom_CustomRuleForAccount-conformance-pack
Service-linked rule name: OrgConfigRule-s3-bucket-versioning-enabled-dbgzf8ba
keywordValue: Custom_OrgConfigRule-s3-bucket-versioning-enabled
The keywordValue is case sensitive. If you enter a value incorrectly, Audit Manager might not recognize the data source mapping. As a result, you might not successfully collect evidence from that data source as intended.
Keep in mind the following requirements, depending on the data source type that you're using.
For Config:
For managed rules, make sure that the keywordValue is the rule identifier in ALL_CAPS_WITH_UNDERSCORES. For example, CLOUDWATCH_LOG_GROUP_ENCRYPTED. For accuracy, we recommend that you reference the list of supported Config managed rules.
For custom rules, make sure that the keywordValue has the Custom_ prefix followed by the custom rule name. The format of the custom rule name itself may vary. For accuracy, we recommend that you visit the Config console to verify your custom rule name.
For Security Hub: The format varies for Security Hub control names. For accuracy, we recommend that you reference the list of supported Security Hub controls.
For Amazon Web Services API calls: Make sure that the keywordValue is written as serviceprefix_ActionName. For example, iam_ListGroups. For accuracy, we recommend that you reference the list of supported API calls.
For CloudTrail: Make sure that the keywordValue is written as serviceprefix_ActionName. For example, cloudtrail_StartLogging. For accuracy, we recommend that you review the Amazon Web Services service prefixes and action names in the Service Authorization Reference.
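The per-source shapes above can be expressed as simple pattern checks (an illustrative sketch only, not an official validation routine; a matching shape alone does not guarantee the keyword is supported):

```java
import java.util.regex.Pattern;

// Sketch of the keyword shapes described above; these patterns are
// illustrative only and are not part of any Audit Manager API.
public class KeywordShapes {

    // Managed Config rules: rule identifier in ALL_CAPS_WITH_UNDERSCORES.
    static final Pattern MANAGED_RULE = Pattern.compile("[A-Z0-9]+(?:_[A-Z0-9]+)*");

    // API calls and CloudTrail events: serviceprefix_ActionName.
    static final Pattern SERVICE_ACTION = Pattern.compile("[a-z0-9-]+_[A-Za-z0-9]+");

    static boolean looksLikeManagedRule(String keywordValue) {
        return MANAGED_RULE.matcher(keywordValue).matches();
    }

    static boolean looksLikeServiceAction(String keywordValue) {
        return SERVICE_ACTION.matcher(keywordValue).matches();
    }

    public static void main(String[] args) {
        System.out.println(looksLikeManagedRule("CLOUDWATCH_LOG_GROUP_ENCRYPTED")); // true
        System.out.println(looksLikeServiceAction("iam_ListGroups"));               // true
        System.out.println(looksLikeServiceAction("cloudtrail_StartLogging"));      // true
    }
}
```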
The keyword to search for in CloudTrail logs, Config rules, Security Hub checks, and Amazon Web Services API names.
To learn more about the supported keywords that you can use when mapping a control data source, see the following pages in the Audit Manager User Guide:
A keyword that relates to the control data source.
For manual evidence, this keyword indicates if the manual evidence is a file or text.
For automated evidence, this keyword identifies a specific CloudTrail event, Config rule, Security Hub control, or Amazon Web Services API name.
To learn more about the supported keywords that you can use when mapping a control data source, see the following pages in the Audit Manager User Guide:
The default storage destination for assessment reports.
" + "documentation":"The default S3 destination bucket for storing assessment reports.
" }, "defaultProcessOwners":{ "shape":"Roles", @@ -5109,6 +5198,10 @@ "deregistrationPolicy":{ "shape":"DeregistrationPolicy", "documentation":"The deregistration policy for your Audit Manager data. You can use this attribute to determine how your data is handled when you deregister Audit Manager.
" + }, + "defaultExportDestination":{ + "shape":"DefaultExportDestination", + "documentation":"The default S3 destination bucket for storing evidence finder exports.
" } } }, From 6b21cabf2990f7536d6366fcd5d76e241b7dde4d Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 15 Jun 2023 18:10:34 +0000 Subject: [PATCH 113/317] Updated endpoints.json and partitions.json. --- .../feature-AWSSDKforJavav2-0443982.json | 6 +++ .../regions/internal/region/endpoints.json | 48 +++++++++++++++++++ 2 files changed, 54 insertions(+) create mode 100644 .changes/next-release/feature-AWSSDKforJavav2-0443982.json diff --git a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json new file mode 100644 index 000000000000..e5b5ee3ca5e3 --- /dev/null +++ b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated endpoint and partition metadata." +} diff --git a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json index 3cf59c9ee448..808dcb33d6a3 100644 --- a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json +++ b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json @@ -16335,6 +16335,37 @@ } } }, + "verifiedpermissions" : { + "endpoints" : { + "af-south-1" : { }, + "ap-east-1" : { }, + "ap-northeast-1" : { }, + "ap-northeast-2" : { }, + "ap-northeast-3" : { }, + "ap-south-1" : { }, + "ap-south-2" : { }, + "ap-southeast-1" : { }, + "ap-southeast-2" : { }, + "ap-southeast-3" : { }, + "ap-southeast-4" : { }, + "ca-central-1" : { }, + "eu-central-1" : { }, + "eu-central-2" : { }, + "eu-north-1" : { }, + "eu-south-1" : { }, + "eu-south-2" : { }, + "eu-west-1" : { }, + "eu-west-2" : { }, + "eu-west-3" : { }, + "me-central-1" : { }, + "me-south-1" : { }, + "sa-east-1" : { }, + "us-east-1" : { }, + "us-east-2" : { }, + "us-west-1" : { }, + "us-west-2" : { 
} + } + }, "voice-chime" : { "endpoints" : { "ap-northeast-1" : { }, @@ -24344,6 +24375,23 @@ "regionRegex" : "^eu\\-isoe\\-\\w+\\-\\d+$", "regions" : { }, "services" : { } + }, { + "defaults" : { + "hostname" : "{service}.{region}.{dnsSuffix}", + "protocols" : [ "https" ], + "signatureVersions" : [ "v4" ], + "variants" : [ { + "dnsSuffix" : "csp.hci.ic.gov", + "hostname" : "{service}-fips.{region}.{dnsSuffix}", + "tags" : [ "fips" ] + } ] + }, + "dnsSuffix" : "csp.hci.ic.gov", + "partition" : "aws-iso-f", + "partitionName" : "AWS ISOF", + "regionRegex" : "^us\\-isof\\-\\w+\\-\\d+$", + "regions" : { }, + "services" : { } } ], "version" : 3 } \ No newline at end of file From c0285b210c04356b2d8afd0506487311cce8ceb6 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Thu, 15 Jun 2023 18:11:37 +0000 Subject: [PATCH 114/317] Release 2.20.86. Updated CHANGELOG.md, README.md and all pom.xml. --- .changes/2.20.86.json | 48 +++++++++++++++++++ ...bugfix-AmazonDynamoDBEnhanced-66db474.json | 6 --- .../feature-AWSAuditManager-6dfbb70.json | 6 --- .../feature-AWSSDKforJavav2-0443982.json | 6 --- ...ature-AmazonElasticFileSystem-dffc8bb.json | 6 --- .../feature-AmazonGuardDuty-a721999.json | 6 --- ...feature-AmazonLocationService-2fd046a.json | 6 --- ...eature-DynamoDBEnhancedClient-270c65a.json | 6 --- CHANGELOG.md | 39 +++++++++++++++ README.md | 8 ++-- archetypes/archetype-app-quickstart/pom.xml | 2 +- archetypes/archetype-lambda/pom.xml | 2 +- archetypes/archetype-tools/pom.xml | 2 +- archetypes/pom.xml | 2 +- aws-sdk-java/pom.xml | 2 +- bom-internal/pom.xml | 2 +- bom/pom.xml | 2 +- bundle/pom.xml | 2 +- codegen-lite-maven-plugin/pom.xml | 2 +- codegen-lite/pom.xml | 2 +- codegen-maven-plugin/pom.xml | 2 +- codegen/pom.xml | 2 +- core/annotations/pom.xml | 2 +- core/arns/pom.xml | 2 +- core/auth-crt/pom.xml | 2 +- core/auth/pom.xml | 2 +- core/aws-core/pom.xml | 2 +- core/crt-core/pom.xml | 2 +- core/endpoints-spi/pom.xml | 2 +- core/imds/pom.xml | 2 +- core/json-utils/pom.xml | 
2 +- core/metrics-spi/pom.xml | 2 +- core/pom.xml | 2 +- core/profiles/pom.xml | 2 +- core/protocols/aws-cbor-protocol/pom.xml | 2 +- core/protocols/aws-json-protocol/pom.xml | 2 +- core/protocols/aws-query-protocol/pom.xml | 2 +- core/protocols/aws-xml-protocol/pom.xml | 2 +- core/protocols/pom.xml | 2 +- core/protocols/protocol-core/pom.xml | 2 +- core/regions/pom.xml | 2 +- core/sdk-core/pom.xml | 2 +- http-client-spi/pom.xml | 2 +- http-clients/apache-client/pom.xml | 2 +- http-clients/aws-crt-client/pom.xml | 2 +- http-clients/netty-nio-client/pom.xml | 2 +- http-clients/pom.xml | 2 +- http-clients/url-connection-client/pom.xml | 2 +- .../cloudwatch-metric-publisher/pom.xml | 2 +- metric-publishers/pom.xml | 2 +- pom.xml | 2 +- release-scripts/pom.xml | 2 +- services-custom/dynamodb-enhanced/pom.xml | 2 +- services-custom/pom.xml | 2 +- services-custom/s3-transfer-manager/pom.xml | 2 +- services/accessanalyzer/pom.xml | 2 +- services/account/pom.xml | 2 +- services/acm/pom.xml | 2 +- services/acmpca/pom.xml | 2 +- services/alexaforbusiness/pom.xml | 2 +- services/amp/pom.xml | 2 +- services/amplify/pom.xml | 2 +- services/amplifybackend/pom.xml | 2 +- services/amplifyuibuilder/pom.xml | 2 +- services/apigateway/pom.xml | 2 +- services/apigatewaymanagementapi/pom.xml | 2 +- services/apigatewayv2/pom.xml | 2 +- services/appconfig/pom.xml | 2 +- services/appconfigdata/pom.xml | 2 +- services/appflow/pom.xml | 2 +- services/appintegrations/pom.xml | 2 +- services/applicationautoscaling/pom.xml | 2 +- services/applicationcostprofiler/pom.xml | 2 +- services/applicationdiscovery/pom.xml | 2 +- services/applicationinsights/pom.xml | 2 +- services/appmesh/pom.xml | 2 +- services/apprunner/pom.xml | 2 +- services/appstream/pom.xml | 2 +- services/appsync/pom.xml | 2 +- services/arczonalshift/pom.xml | 2 +- services/athena/pom.xml | 2 +- services/auditmanager/pom.xml | 2 +- services/autoscaling/pom.xml | 2 +- services/autoscalingplans/pom.xml | 2 +- 
services/backup/pom.xml | 2 +- services/backupgateway/pom.xml | 2 +- services/backupstorage/pom.xml | 2 +- services/batch/pom.xml | 2 +- services/billingconductor/pom.xml | 2 +- services/braket/pom.xml | 2 +- services/budgets/pom.xml | 2 +- services/chime/pom.xml | 2 +- services/chimesdkidentity/pom.xml | 2 +- services/chimesdkmediapipelines/pom.xml | 2 +- services/chimesdkmeetings/pom.xml | 2 +- services/chimesdkmessaging/pom.xml | 2 +- services/chimesdkvoice/pom.xml | 2 +- services/cleanrooms/pom.xml | 2 +- services/cloud9/pom.xml | 2 +- services/cloudcontrol/pom.xml | 2 +- services/clouddirectory/pom.xml | 2 +- services/cloudformation/pom.xml | 2 +- services/cloudfront/pom.xml | 2 +- services/cloudhsm/pom.xml | 2 +- services/cloudhsmv2/pom.xml | 2 +- services/cloudsearch/pom.xml | 2 +- services/cloudsearchdomain/pom.xml | 2 +- services/cloudtrail/pom.xml | 2 +- services/cloudtraildata/pom.xml | 2 +- services/cloudwatch/pom.xml | 2 +- services/cloudwatchevents/pom.xml | 2 +- services/cloudwatchlogs/pom.xml | 2 +- services/codeartifact/pom.xml | 2 +- services/codebuild/pom.xml | 2 +- services/codecatalyst/pom.xml | 2 +- services/codecommit/pom.xml | 2 +- services/codedeploy/pom.xml | 2 +- services/codeguruprofiler/pom.xml | 2 +- services/codegurureviewer/pom.xml | 2 +- services/codegurusecurity/pom.xml | 2 +- services/codepipeline/pom.xml | 2 +- services/codestar/pom.xml | 2 +- services/codestarconnections/pom.xml | 2 +- services/codestarnotifications/pom.xml | 2 +- services/cognitoidentity/pom.xml | 2 +- services/cognitoidentityprovider/pom.xml | 2 +- services/cognitosync/pom.xml | 2 +- services/comprehend/pom.xml | 2 +- services/comprehendmedical/pom.xml | 2 +- services/computeoptimizer/pom.xml | 2 +- services/config/pom.xml | 2 +- services/connect/pom.xml | 2 +- services/connectcampaigns/pom.xml | 2 +- services/connectcases/pom.xml | 2 +- services/connectcontactlens/pom.xml | 2 +- services/connectparticipant/pom.xml | 2 +- services/controltower/pom.xml | 2 +- 
services/costandusagereport/pom.xml | 2 +- services/costexplorer/pom.xml | 2 +- services/customerprofiles/pom.xml | 2 +- services/databasemigration/pom.xml | 2 +- services/databrew/pom.xml | 2 +- services/dataexchange/pom.xml | 2 +- services/datapipeline/pom.xml | 2 +- services/datasync/pom.xml | 2 +- services/dax/pom.xml | 2 +- services/detective/pom.xml | 2 +- services/devicefarm/pom.xml | 2 +- services/devopsguru/pom.xml | 2 +- services/directconnect/pom.xml | 2 +- services/directory/pom.xml | 2 +- services/dlm/pom.xml | 2 +- services/docdb/pom.xml | 2 +- services/docdbelastic/pom.xml | 2 +- services/drs/pom.xml | 2 +- services/dynamodb/pom.xml | 2 +- services/ebs/pom.xml | 2 +- services/ec2/pom.xml | 2 +- services/ec2instanceconnect/pom.xml | 2 +- services/ecr/pom.xml | 2 +- services/ecrpublic/pom.xml | 2 +- services/ecs/pom.xml | 2 +- services/efs/pom.xml | 2 +- services/eks/pom.xml | 2 +- services/elasticache/pom.xml | 2 +- services/elasticbeanstalk/pom.xml | 2 +- services/elasticinference/pom.xml | 2 +- services/elasticloadbalancing/pom.xml | 2 +- services/elasticloadbalancingv2/pom.xml | 2 +- services/elasticsearch/pom.xml | 2 +- services/elastictranscoder/pom.xml | 2 +- services/emr/pom.xml | 2 +- services/emrcontainers/pom.xml | 2 +- services/emrserverless/pom.xml | 2 +- services/eventbridge/pom.xml | 2 +- services/evidently/pom.xml | 2 +- services/finspace/pom.xml | 2 +- services/finspacedata/pom.xml | 2 +- services/firehose/pom.xml | 2 +- services/fis/pom.xml | 2 +- services/fms/pom.xml | 2 +- services/forecast/pom.xml | 2 +- services/forecastquery/pom.xml | 2 +- services/frauddetector/pom.xml | 2 +- services/fsx/pom.xml | 2 +- services/gamelift/pom.xml | 2 +- services/gamesparks/pom.xml | 2 +- services/glacier/pom.xml | 2 +- services/globalaccelerator/pom.xml | 2 +- services/glue/pom.xml | 2 +- services/grafana/pom.xml | 2 +- services/greengrass/pom.xml | 2 +- services/greengrassv2/pom.xml | 2 +- services/groundstation/pom.xml | 2 +- 
services/guardduty/pom.xml | 2 +- services/health/pom.xml | 2 +- services/healthlake/pom.xml | 2 +- services/honeycode/pom.xml | 2 +- services/iam/pom.xml | 2 +- services/identitystore/pom.xml | 2 +- services/imagebuilder/pom.xml | 2 +- services/inspector/pom.xml | 2 +- services/inspector2/pom.xml | 2 +- services/internetmonitor/pom.xml | 2 +- services/iot/pom.xml | 2 +- services/iot1clickdevices/pom.xml | 2 +- services/iot1clickprojects/pom.xml | 2 +- services/iotanalytics/pom.xml | 2 +- services/iotdataplane/pom.xml | 2 +- services/iotdeviceadvisor/pom.xml | 2 +- services/iotevents/pom.xml | 2 +- services/ioteventsdata/pom.xml | 2 +- services/iotfleethub/pom.xml | 2 +- services/iotfleetwise/pom.xml | 2 +- services/iotjobsdataplane/pom.xml | 2 +- services/iotroborunner/pom.xml | 2 +- services/iotsecuretunneling/pom.xml | 2 +- services/iotsitewise/pom.xml | 2 +- services/iotthingsgraph/pom.xml | 2 +- services/iottwinmaker/pom.xml | 2 +- services/iotwireless/pom.xml | 2 +- services/ivs/pom.xml | 2 +- services/ivschat/pom.xml | 2 +- services/ivsrealtime/pom.xml | 2 +- services/kafka/pom.xml | 2 +- services/kafkaconnect/pom.xml | 2 +- services/kendra/pom.xml | 2 +- services/kendraranking/pom.xml | 2 +- services/keyspaces/pom.xml | 2 +- services/kinesis/pom.xml | 2 +- services/kinesisanalytics/pom.xml | 2 +- services/kinesisanalyticsv2/pom.xml | 2 +- services/kinesisvideo/pom.xml | 2 +- services/kinesisvideoarchivedmedia/pom.xml | 2 +- services/kinesisvideomedia/pom.xml | 2 +- services/kinesisvideosignaling/pom.xml | 2 +- services/kinesisvideowebrtcstorage/pom.xml | 2 +- services/kms/pom.xml | 2 +- services/lakeformation/pom.xml | 2 +- services/lambda/pom.xml | 2 +- services/lexmodelbuilding/pom.xml | 2 +- services/lexmodelsv2/pom.xml | 2 +- services/lexruntime/pom.xml | 2 +- services/lexruntimev2/pom.xml | 2 +- services/licensemanager/pom.xml | 2 +- .../licensemanagerlinuxsubscriptions/pom.xml | 2 +- .../licensemanagerusersubscriptions/pom.xml | 2 +- 
services/lightsail/pom.xml | 2 +- services/location/pom.xml | 2 +- services/lookoutequipment/pom.xml | 2 +- services/lookoutmetrics/pom.xml | 2 +- services/lookoutvision/pom.xml | 2 +- services/m2/pom.xml | 2 +- services/machinelearning/pom.xml | 2 +- services/macie/pom.xml | 2 +- services/macie2/pom.xml | 2 +- services/managedblockchain/pom.xml | 2 +- services/marketplacecatalog/pom.xml | 2 +- services/marketplacecommerceanalytics/pom.xml | 2 +- services/marketplaceentitlement/pom.xml | 2 +- services/marketplacemetering/pom.xml | 2 +- services/mediaconnect/pom.xml | 2 +- services/mediaconvert/pom.xml | 2 +- services/medialive/pom.xml | 2 +- services/mediapackage/pom.xml | 2 +- services/mediapackagev2/pom.xml | 2 +- services/mediapackagevod/pom.xml | 2 +- services/mediastore/pom.xml | 2 +- services/mediastoredata/pom.xml | 2 +- services/mediatailor/pom.xml | 2 +- services/memorydb/pom.xml | 2 +- services/mgn/pom.xml | 2 +- services/migrationhub/pom.xml | 2 +- services/migrationhubconfig/pom.xml | 2 +- services/migrationhuborchestrator/pom.xml | 2 +- services/migrationhubrefactorspaces/pom.xml | 2 +- services/migrationhubstrategy/pom.xml | 2 +- services/mobile/pom.xml | 2 +- services/mq/pom.xml | 2 +- services/mturk/pom.xml | 2 +- services/mwaa/pom.xml | 2 +- services/neptune/pom.xml | 2 +- services/networkfirewall/pom.xml | 2 +- services/networkmanager/pom.xml | 2 +- services/nimble/pom.xml | 2 +- services/oam/pom.xml | 2 +- services/omics/pom.xml | 2 +- services/opensearch/pom.xml | 2 +- services/opensearchserverless/pom.xml | 2 +- services/opsworks/pom.xml | 2 +- services/opsworkscm/pom.xml | 2 +- services/organizations/pom.xml | 2 +- services/osis/pom.xml | 2 +- services/outposts/pom.xml | 2 +- services/panorama/pom.xml | 2 +- services/paymentcryptography/pom.xml | 2 +- services/paymentcryptographydata/pom.xml | 2 +- services/personalize/pom.xml | 2 +- services/personalizeevents/pom.xml | 2 +- services/personalizeruntime/pom.xml | 2 +- services/pi/pom.xml | 2 +- 
services/pinpoint/pom.xml | 2 +- services/pinpointemail/pom.xml | 2 +- services/pinpointsmsvoice/pom.xml | 2 +- services/pinpointsmsvoicev2/pom.xml | 2 +- services/pipes/pom.xml | 2 +- services/polly/pom.xml | 2 +- services/pom.xml | 2 +- services/pricing/pom.xml | 2 +- services/privatenetworks/pom.xml | 2 +- services/proton/pom.xml | 2 +- services/qldb/pom.xml | 2 +- services/qldbsession/pom.xml | 2 +- services/quicksight/pom.xml | 2 +- services/ram/pom.xml | 2 +- services/rbin/pom.xml | 2 +- services/rds/pom.xml | 2 +- services/rdsdata/pom.xml | 2 +- services/redshift/pom.xml | 2 +- services/redshiftdata/pom.xml | 2 +- services/redshiftserverless/pom.xml | 2 +- services/rekognition/pom.xml | 2 +- services/resiliencehub/pom.xml | 2 +- services/resourceexplorer2/pom.xml | 2 +- services/resourcegroups/pom.xml | 2 +- services/resourcegroupstaggingapi/pom.xml | 2 +- services/robomaker/pom.xml | 2 +- services/rolesanywhere/pom.xml | 2 +- services/route53/pom.xml | 2 +- services/route53domains/pom.xml | 2 +- services/route53recoverycluster/pom.xml | 2 +- services/route53recoverycontrolconfig/pom.xml | 2 +- services/route53recoveryreadiness/pom.xml | 2 +- services/route53resolver/pom.xml | 2 +- services/rum/pom.xml | 2 +- services/s3/pom.xml | 2 +- services/s3control/pom.xml | 2 +- services/s3outposts/pom.xml | 2 +- services/sagemaker/pom.xml | 2 +- services/sagemakera2iruntime/pom.xml | 2 +- services/sagemakeredge/pom.xml | 2 +- services/sagemakerfeaturestoreruntime/pom.xml | 2 +- services/sagemakergeospatial/pom.xml | 2 +- services/sagemakermetrics/pom.xml | 2 +- services/sagemakerruntime/pom.xml | 2 +- services/savingsplans/pom.xml | 2 +- services/scheduler/pom.xml | 2 +- services/schemas/pom.xml | 2 +- services/secretsmanager/pom.xml | 2 +- services/securityhub/pom.xml | 2 +- services/securitylake/pom.xml | 2 +- .../serverlessapplicationrepository/pom.xml | 2 +- services/servicecatalog/pom.xml | 2 +- services/servicecatalogappregistry/pom.xml | 2 +- 
services/servicediscovery/pom.xml | 2 +- services/servicequotas/pom.xml | 2 +- services/ses/pom.xml | 2 +- services/sesv2/pom.xml | 2 +- services/sfn/pom.xml | 2 +- services/shield/pom.xml | 2 +- services/signer/pom.xml | 2 +- services/simspaceweaver/pom.xml | 2 +- services/sms/pom.xml | 2 +- services/snowball/pom.xml | 2 +- services/snowdevicemanagement/pom.xml | 2 +- services/sns/pom.xml | 2 +- services/sqs/pom.xml | 2 +- services/ssm/pom.xml | 2 +- services/ssmcontacts/pom.xml | 2 +- services/ssmincidents/pom.xml | 2 +- services/ssmsap/pom.xml | 2 +- services/sso/pom.xml | 2 +- services/ssoadmin/pom.xml | 2 +- services/ssooidc/pom.xml | 2 +- services/storagegateway/pom.xml | 2 +- services/sts/pom.xml | 2 +- services/support/pom.xml | 2 +- services/supportapp/pom.xml | 2 +- services/swf/pom.xml | 2 +- services/synthetics/pom.xml | 2 +- services/textract/pom.xml | 2 +- services/timestreamquery/pom.xml | 2 +- services/timestreamwrite/pom.xml | 2 +- services/tnb/pom.xml | 2 +- services/transcribe/pom.xml | 2 +- services/transcribestreaming/pom.xml | 2 +- services/transfer/pom.xml | 2 +- services/translate/pom.xml | 2 +- services/verifiedpermissions/pom.xml | 2 +- services/voiceid/pom.xml | 2 +- services/vpclattice/pom.xml | 2 +- services/waf/pom.xml | 2 +- services/wafv2/pom.xml | 2 +- services/wellarchitected/pom.xml | 2 +- services/wisdom/pom.xml | 2 +- services/workdocs/pom.xml | 2 +- services/worklink/pom.xml | 2 +- services/workmail/pom.xml | 2 +- services/workmailmessageflow/pom.xml | 2 +- services/workspaces/pom.xml | 2 +- services/workspacesweb/pom.xml | 2 +- services/xray/pom.xml | 2 +- test/auth-tests/pom.xml | 2 +- test/codegen-generated-classes-test/pom.xml | 2 +- test/http-client-tests/pom.xml | 2 +- test/module-path-tests/pom.xml | 2 +- test/protocol-tests-core/pom.xml | 2 +- test/protocol-tests/pom.xml | 2 +- test/region-testing/pom.xml | 2 +- test/ruleset-testing-core/pom.xml | 2 +- test/s3-benchmarks/pom.xml | 2 +- test/sdk-benchmarks/pom.xml | 2 +- 
test/sdk-native-image-test/pom.xml | 2 +- test/service-test-utils/pom.xml | 2 +- test/stability-tests/pom.xml | 2 +- test/test-utils/pom.xml | 2 +- test/tests-coverage-reporting/pom.xml | 2 +- third-party/pom.xml | 2 +- third-party/third-party-jackson-core/pom.xml | 2 +- .../pom.xml | 2 +- utils/pom.xml | 2 +- 421 files changed, 502 insertions(+), 457 deletions(-) create mode 100644 .changes/2.20.86.json delete mode 100644 .changes/next-release/bugfix-AmazonDynamoDBEnhanced-66db474.json delete mode 100644 .changes/next-release/feature-AWSAuditManager-6dfbb70.json delete mode 100644 .changes/next-release/feature-AWSSDKforJavav2-0443982.json delete mode 100644 .changes/next-release/feature-AmazonElasticFileSystem-dffc8bb.json delete mode 100644 .changes/next-release/feature-AmazonGuardDuty-a721999.json delete mode 100644 .changes/next-release/feature-AmazonLocationService-2fd046a.json delete mode 100644 .changes/next-release/feature-DynamoDBEnhancedClient-270c65a.json diff --git a/.changes/2.20.86.json b/.changes/2.20.86.json new file mode 100644 index 000000000000..33a7fb16a578 --- /dev/null +++ b/.changes/2.20.86.json @@ -0,0 +1,48 @@ +{ + "version": "2.20.86", + "date": "2023-06-15", + "entries": [ + { + "type": "bugfix", + "category": "Amazon DynamoDB Enhanced", + "contributor": "breader124", + "description": "Thanks to this bugfix it'll be possible to create a DynamoDB table containing\nsecondary indices when using the no-arguments `createTable` method from `DefaultDynamoDbTable`\nclass. Information about their presence might be expressed using annotations, but it was ignored\nand created tables didn't contain specified indices. Please note that it is still not possible\nto specify projections for indices using annotations. By default, all fields will be projected." + }, + { + "type": "feature", + "category": "AWS Audit Manager", + "contributor": "", + "description": "This release introduces 2 Audit Manager features: CSV exports and new manual evidence options. 
You can now export your evidence finder results in CSV format. In addition, you can now add manual evidence to a control by entering free-form text or uploading a file from your browser." + }, + { + "type": "feature", + "category": "Amazon Elastic File System", + "contributor": "", + "description": "Documentation updates for EFS." + }, + { + "type": "feature", + "category": "Amazon GuardDuty", + "contributor": "", + "description": "Updated descriptions for some APIs." + }, + { + "type": "feature", + "category": "Amazon Location Service", + "contributor": "", + "description": "Amazon Location Service adds categories to places, including filtering on those categories in searches. Also, you can now add metadata properties to your geofences." + }, + { + "type": "feature", + "category": "DynamoDB Enhanced Client", + "contributor": "bmaizels", + "description": "Add EnhancedType parameters to static builder methods of StaticTableSchema and StaticImmutableTableSchema" + }, + { + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated endpoint and partition metadata." + } + ] +} \ No newline at end of file diff --git a/.changes/next-release/bugfix-AmazonDynamoDBEnhanced-66db474.json b/.changes/next-release/bugfix-AmazonDynamoDBEnhanced-66db474.json deleted file mode 100644 index d69e49886b06..000000000000 --- a/.changes/next-release/bugfix-AmazonDynamoDBEnhanced-66db474.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "category": "Amazon DynamoDB Enhanced", - "contributor": "breader124", - "type": "bugfix", - "description": "Thanks to this bugfix it'll be possible to create DynamoDB table containing\nsecondary indices when using no arugments `createTable` method from `DefaultDynamoDbTable`\nclass. Information about their presence might be expressed using annotations, but it was ignored\nand created tables didn't contain specified indices. Plase note that it is still not possible\nto specify projections for indices using annotations. 
By default, all fields will be projected." -} diff --git a/.changes/next-release/feature-AWSAuditManager-6dfbb70.json b/.changes/next-release/feature-AWSAuditManager-6dfbb70.json deleted file mode 100644 index 7234d0eaf20f..000000000000 --- a/.changes/next-release/feature-AWSAuditManager-6dfbb70.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Audit Manager", - "contributor": "", - "description": "This release introduces 2 Audit Manager features: CSV exports and new manual evidence options. You can now export your evidence finder results in CSV format. In addition, you can now add manual evidence to a control by entering free-form text or uploading a file from your browser." -} diff --git a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json b/.changes/next-release/feature-AWSSDKforJavav2-0443982.json deleted file mode 100644 index e5b5ee3ca5e3..000000000000 --- a/.changes/next-release/feature-AWSSDKforJavav2-0443982.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS SDK for Java v2", - "contributor": "", - "description": "Updated endpoint and partition metadata." -} diff --git a/.changes/next-release/feature-AmazonElasticFileSystem-dffc8bb.json b/.changes/next-release/feature-AmazonElasticFileSystem-dffc8bb.json deleted file mode 100644 index 834420253aa5..000000000000 --- a/.changes/next-release/feature-AmazonElasticFileSystem-dffc8bb.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Elastic File System", - "contributor": "", - "description": "Documentation updates for EFS." 
-} diff --git a/.changes/next-release/feature-AmazonGuardDuty-a721999.json b/.changes/next-release/feature-AmazonGuardDuty-a721999.json deleted file mode 100644 index 8be824a7340c..000000000000 --- a/.changes/next-release/feature-AmazonGuardDuty-a721999.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon GuardDuty", - "contributor": "", - "description": "Updated descriptions for some APIs." -} diff --git a/.changes/next-release/feature-AmazonLocationService-2fd046a.json b/.changes/next-release/feature-AmazonLocationService-2fd046a.json deleted file mode 100644 index d0173ff6b5bc..000000000000 --- a/.changes/next-release/feature-AmazonLocationService-2fd046a.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Location Service", - "contributor": "", - "description": "Amazon Location Service adds categories to places, including filtering on those categories in searches. Also, you can now add metadata properties to your geofences." -} diff --git a/.changes/next-release/feature-DynamoDBEnhancedClient-270c65a.json b/.changes/next-release/feature-DynamoDBEnhancedClient-270c65a.json deleted file mode 100644 index df22b45c116c..000000000000 --- a/.changes/next-release/feature-DynamoDBEnhancedClient-270c65a.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "category": "DynamoDB Enhanced Client", - "contributor": "bmaizels", - "type": "feature", - "description": "Add EnhancedType parameters to static builder methods of StaticTableSchema and StaticImmitableTableSchema" -} diff --git a/CHANGELOG.md b/CHANGELOG.md index 88f428005fd1..e3eaf0959c03 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,42 @@ +# __2.20.86__ __2023-06-15__ +## __AWS Audit Manager__ + - ### Features + - This release introduces 2 Audit Manager features: CSV exports and new manual evidence options. You can now export your evidence finder results in CSV format. 
In addition, you can now add manual evidence to a control by entering free-form text or uploading a file from your browser. + +## __AWS SDK for Java v2__ + - ### Features + - Updated endpoint and partition metadata. + +## __Amazon DynamoDB Enhanced__ + - ### Bugfixes + - Thanks to this bugfix it'll be possible to create a DynamoDB table containing + secondary indices when using the no-arguments `createTable` method from `DefaultDynamoDbTable` + class. Information about their presence might be expressed using annotations, but it was ignored + and created tables didn't contain specified indices. Please note that it is still not possible + to specify projections for indices using annotations. By default, all fields will be projected. + - Contributed by: [@breader124](https://github.com/breader124) + +## __Amazon Elastic File System__ + - ### Features + - Documentation updates for EFS. + +## __Amazon GuardDuty__ + - ### Features + - Updated descriptions for some APIs. + +## __Amazon Location Service__ + - ### Features + - Amazon Location Service adds categories to places, including filtering on those categories in searches. Also, you can now add metadata properties to your geofences. + +## __DynamoDB Enhanced Client__ + - ### Features + - Add EnhancedType parameters to static builder methods of StaticTableSchema and StaticImmutableTableSchema + - Contributed by: [@bmaizels](https://github.com/bmaizels) + +## __Contributors__ +Special thanks to the following contributors to this release: + +[@bmaizels](https://github.com/bmaizels), [@breader124](https://github.com/breader124) # __2.20.85__ __2023-06-13__ ## __AWS CloudTrail__ - ### Features diff --git a/README.md b/README.md index 2dd1fe2fa3fd..fc19e960f97a 100644 --- a/README.md +++ b/README.md @@ -52,7 +52,7 @@ To automatically manage module versions (currently all modules have the same ver
+ * Use EnumAttributeConverter::create in order to use Enum::toString as the enum identifier + * + *
+ * Use EnumAttributeConverter::createWithNameAsKeys in order to use Enum::name as the enum identifier + * + *
+ * This can be created via {@link #create(Class)}.
+ */
+@SdkPublicApi
+public final class EnumAttributeConverter
+ * Uses Enum::toString as the enum identifier.
+ *
+ * @param enumClass The enum class to be used
+ * @return an EnumAttributeConverter
+ * @param
+ * Uses Enum::name as the enum identifier.
+ *
+ * @param enumClass The enum class to be used
+ * @return an EnumAttributeConverter
+ * @param
- * This stores values in DynamoDB as a string.
- *
- *
- * This can be created via {@link #create(Class)}.
- */
-@SdkInternalApi
-public class EnumAttributeConverter Lists agents or connectors as specified by ID or other filters. All agents/connectors associated with your user account can be listed if you call Lists agents or collectors as specified by ID or other filters. All agents/collectors associated with your user can be listed if you call Lists exports as specified by ID. All continuous exports associated with your user account can be listed if you call Lists exports as specified by ID. All continuous exports associated with your user can be listed if you call Retrieves a list of configuration items that have tags as specified by the key-value pairs, name and value, passed to the optional parameter There are three valid tag filter names: tagKey tagValue configurationId Also, all configuration items associated with your user account that have tags can be listed if you call Retrieves a list of configuration items that have tags as specified by the key-value pairs, name and value, passed to the optional parameter There are three valid tag filter names: tagKey tagValue configurationId Also, all configuration items associated with your user that have tags can be listed if you call Instructs the specified agents or connectors to start collecting data. Instructs the specified agents to start collecting data. Begins the export of discovered data to an S3 bucket. If you specify If you do not include an Begins the export of a discovered data report to an Amazon S3 bucket managed by Amazon Web Services. Exports might provide an estimate of fees and savings based on certain information that you provide. Fee estimates do not include any taxes that might apply. Your actual fees and savings depend on a variety of factors, including your actual usage of Amazon Web Services services, which might vary from the estimates provided in this report. 
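The distinction between the two factory methods in the Javadoc above — create keying enums by Enum::toString versus createWithNameAsKeys keying by Enum::name — can be seen with a plain-JDK sketch (the Status enum below is hypothetical, not an SDK type):

```java
// A hypothetical enum whose toString() differs from name(), to show why
// the converter offers both identifier strategies.
public class EnumKeyDemo {

    enum Status {
        IN_PROGRESS {
            @Override
            public String toString() {
                return "in-progress"; // display form, as stored by create(...)
            }
        };
    }

    public static void main(String[] args) {
        // createWithNameAsKeys(...) would store the stable constant name:
        System.out.println(Status.IN_PROGRESS.name());     // IN_PROGRESS
        // create(...) would store the overridden display form:
        System.out.println(Status.IN_PROGRESS.toString()); // in-progress
    }
}
```

Because name() is final and stable across refactors of toString(), keying by name is the safer choice when the stored string must round-trip reliably.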
If you do not specify If you specify If you enable Starts an import task, which allows you to import details of your on-premises environment directly into Amazon Web Services Migration Hub without having to use the Application Discovery Service (ADS) tools such as the Discovery Connector or Discovery Agent. This gives you the option to perform migration assessment and planning directly from your imported data, including the ability to group your devices as applications and track their migration status. To start an import request, do this: Download the specially formatted comma separated value (CSV) import template, which you can find here: https://s3.us-west-2.amazonaws.com/templates-7cffcf56-bd96-4b1c-b45b-a5b42f282e46/import_template.csv. Fill out the template with your server and application data. Upload your import file to an Amazon S3 bucket, and make a note of it's Object URL. Your import file must be in the CSV format. Use the console or the For more information, including step-by-step procedures, see Migration Hub Import in the Amazon Web Services Application Discovery Service User Guide. There are limits to the number of import tasks you can create (and delete) in an Amazon Web Services account. For more information, see Amazon Web Services Application Discovery Service Limits in the Amazon Web Services Application Discovery Service User Guide. Starts an import task, which allows you to import details of your on-premises environment directly into Amazon Web Services Migration Hub without having to use the Amazon Web Services Application Discovery Service (Application Discovery Service) tools such as the Amazon Web Services Application Discovery Service Agentless Collector or Application Discovery Agent. This gives you the option to perform migration assessment and planning directly from your imported data, including the ability to group your devices as applications and track their migration status. 
To start an import request, do this: Download the specially formatted comma separated value (CSV) import template, which you can find here: https://s3.us-west-2.amazonaws.com/templates-7cffcf56-bd96-4b1c-b45b-a5b42f282e46/import_template.csv. Fill out the template with your server and application data. Upload your import file to an Amazon S3 bucket, and make a note of it's Object URL. Your import file must be in the CSV format. Use the console or the For more information, including step-by-step procedures, see Migration Hub Import in the Amazon Web Services Application Discovery Service User Guide. There are limits to the number of import tasks you can create (and delete) in an Amazon Web Services account. For more information, see Amazon Web Services Application Discovery Service Limits in the Amazon Web Services Application Discovery Service User Guide. Instructs the specified agents or connectors to stop collecting data. Instructs the specified agents to stop collecting data. The agent/connector ID. The agent ID. Information about the status of the Information about the status of the A description of the operation performed. Information about agents or connectors that were instructed to start collecting data. Information includes the agent/connector ID, a description of the operation, and whether the agent/connector configuration was updated. Information about agents that were instructed to start collecting data. Information includes the agent ID, a description of the operation, and whether the agent configuration was updated. The agent or connector ID. The agent or collector ID. The name of the host where the agent or connector resides. The host can be a server or virtual machine. The name of the host where the agent or collector resides. The host can be a server or virtual machine. Network details about the host where the agent or connector resides. Network details about the host where the agent or collector resides. The agent or connector version. 
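The import flow above boils down to filling out the CSV template and uploading it to an S3 bucket before starting the import task. A small sketch of producing such a file with the standard csv module; the column names used here are illustrative assumptions, not the authoritative header row from the downloaded template:

```python
import csv
import io

# Illustrative subset of columns; the real template downloaded from the
# documented URL defines the authoritative header row.
FIELDS = ["ExternalId", "ServerHostName", "ServerOSName", "ServerVMwareMoRefId"]

def write_import_rows(rows):
    """Serialize server records into CSV text ready to upload to S3."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()

text = write_import_rows([{"ExternalId": "srv-1", "ServerHostName": "db01",
                           "ServerOSName": "Linux", "ServerVMwareMoRefId": ""}])
print(text.splitlines()[0])  # header row
```

Using DictWriter with a fixed fieldnames list keeps every row aligned to the template's column order even when input dicts are sparse.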
The agent or collector version. The health of the agent or connector. The health of the agent. Time since agent or connector health was reported. Time since agent health was reported. Status of the collection process for an agent or connector. Status of the collection process for an agent. Agent's first registration timestamp in UTC. Information about agents or connectors associated with the user’s Amazon Web Services account. Information includes agent or connector IDs, IP addresses, media access control (MAC) addresses, agent or connector health, hostname where the agent or connector resides, and agent version for each agent. Information about agents associated with the user’s Amazon Web Services account. Information includes agent IDs, IP addresses, media access control (MAC) addresses, agent or collector status, hostname where the agent resides, and agent version for each agent. The IP address for the host where the agent/connector resides. The IP address for the host where the agent/collector resides. The MAC address for the host where the agent/connector resides. The MAC address for the host where the agent/collector resides. Network details about the host where the agent/connector resides. Network details about the host where the agent/collector resides. The Amazon Web Services user account does not have permission to perform the action. Check the IAM policy associated with this account. The user does not have permission to perform the action. Check the IAM policy associated with this user. Contains information about any errors that have occurred. This data type can have the following values: ACCESS_DENIED - You don’t have permission to start Data Exploration in Amazon Athena. Contact your Amazon Web Services administrator for help. For more information, see Setting Up Amazon Web Services Application Discovery Service in the Application Discovery Service User Guide. 
DELIVERY_STREAM_LIMIT_FAILURE - You reached the limit for Amazon Kinesis Data Firehose delivery streams. Reduce the number of streams or request a limit increase and try again. For more information, see Kinesis Data Streams Limits in the Amazon Kinesis Data Streams Developer Guide. FIREHOSE_ROLE_MISSING - The Data Exploration feature is in an error state because your IAM User is missing the AWSApplicationDiscoveryServiceFirehose role. Turn on Data Exploration in Amazon Athena and try again. For more information, see Step 3: Provide Application Discovery Service Access to Non-Administrator Users by Attaching Policies in the Application Discovery Service User Guide. FIREHOSE_STREAM_DOES_NOT_EXIST - The Data Exploration feature is in an error state because your IAM User is missing one or more of the Kinesis data delivery streams. INTERNAL_FAILURE - The Data Exploration feature is in an error state because of an internal failure. Try again later. If this problem persists, contact Amazon Web Services Support. LAKE_FORMATION_ACCESS_DENIED - You don't have sufficient lake formation permissions to start continuous export. For more information, see Upgrading Amazon Web Services Glue Data Permissions to the Amazon Web Services Lake Formation Model in the Amazon Web Services Lake Formation Developer Guide. You can use one of the following two ways to resolve this issue. If you don’t want to use the Lake Formation permission model, you can change the default Data Catalog settings to use only Amazon Web Services Identity and Access Management (IAM) access control for new databases. For more information, see Change Data Catalog Settings in the Lake Formation Developer Guide. You can give the service-linked IAM roles AWSServiceRoleForApplicationDiscoveryServiceContinuousExport and AWSApplicationDiscoveryServiceFirehose the required Lake Formation permissions. For more information, see Granting Database Permissions in the Lake Formation Developer Guide. 
AWSServiceRoleForApplicationDiscoveryServiceContinuousExport - Grant database creator permissions, which gives the role database creation ability and implicit permissions for any created tables. For more information, see Implicit Lake Formation Permissions in the Lake Formation Developer Guide. AWSApplicationDiscoveryServiceFirehose - Grant describe permissions for all tables in the database. S3_BUCKET_LIMIT_FAILURE - You reached the limit for Amazon S3 buckets. Reduce the number of S3 buckets or request a limit increase and try again. For more information, see Bucket Restrictions and Limitations in the Amazon Simple Storage Service Developer Guide. S3_NOT_SIGNED_UP - Your account is not signed up for the Amazon S3 service. You must sign up before you can use Amazon S3. You can sign up at the following URL: https://aws.amazon.com/s3. Contains information about any errors that have occurred. This data type can have the following values: ACCESS_DENIED - You don’t have permission to start Data Exploration in Amazon Athena. Contact your Amazon Web Services administrator for help. For more information, see Setting Up Amazon Web Services Application Discovery Service in the Application Discovery Service User Guide. DELIVERY_STREAM_LIMIT_FAILURE - You reached the limit for Amazon Kinesis Data Firehose delivery streams. Reduce the number of streams or request a limit increase and try again. For more information, see Kinesis Data Streams Limits in the Amazon Kinesis Data Streams Developer Guide. FIREHOSE_ROLE_MISSING - The Data Exploration feature is in an error state because your user is missing the Amazon Web ServicesApplicationDiscoveryServiceFirehose role. Turn on Data Exploration in Amazon Athena and try again. For more information, see Creating the Amazon Web ServicesApplicationDiscoveryServiceFirehose Role in the Application Discovery Service User Guide. 
FIREHOSE_STREAM_DOES_NOT_EXIST - The Data Exploration feature is in an error state because your user is missing one or more of the Kinesis data delivery streams. INTERNAL_FAILURE - The Data Exploration feature is in an error state because of an internal failure. Try again later. If this problem persists, contact Amazon Web Services Support. LAKE_FORMATION_ACCESS_DENIED - You don't have sufficient lake formation permissions to start continuous export. For more information, see Upgrading Amazon Web Services Glue Data Permissions to the Amazon Web Services Lake Formation Model in the Amazon Web Services Lake Formation Developer Guide. You can use one of the following two ways to resolve this issue. If you don’t want to use the Lake Formation permission model, you can change the default Data Catalog settings to use only Amazon Web Services Identity and Access Management (IAM) access control for new databases. For more information, see Change Data Catalog Settings in the Lake Formation Developer Guide. You can give the service-linked IAM roles AWSServiceRoleForApplicationDiscoveryServiceContinuousExport and AWSApplicationDiscoveryServiceFirehose the required Lake Formation permissions. For more information, see Granting Database Permissions in the Lake Formation Developer Guide. AWSServiceRoleForApplicationDiscoveryServiceContinuousExport - Grant database creator permissions, which gives the role database creation ability and implicit permissions for any created tables. For more information, see Implicit Lake Formation Permissions in the Lake Formation Developer Guide. AWSApplicationDiscoveryServiceFirehose - Grant describe permissions for all tables in the database. S3_BUCKET_LIMIT_FAILURE - You reached the limit for Amazon S3 buckets. Reduce the number of S3 buckets or request a limit increase and try again. For more information, see Bucket Restrictions and Limitations in the Amazon Simple Storage Service Developer Guide. 
S3_NOT_SIGNED_UP - Your account is not signed up for the Amazon S3 service. You must sign up before you can use Amazon S3. You can sign up at the following URL: https://aws.amazon.com/s3. The number of active Agentless Collector collectors. The number of healthy Agentless Collector collectors. The number of deny-listed Agentless Collector collectors. The number of Agentless Collector collectors with The number of unhealthy Agentless Collector collectors. The total number of Agentless Collector collectors. The number of unknown Agentless Collector collectors. The inventory data for installed Agentless Collector collectors. The agent or the Connector IDs for which you want information. If you specify no IDs, the system returns information about all agents/Connectors associated with your Amazon Web Services user account. The agent or the collector IDs for which you want information. If you specify no IDs, the system returns information about all agents/collectors associated with your user. The total number of agents/Connectors to return in a single page of output. The maximum value is 100. The total number of agents/collectors to return in a single page of output. The maximum value is 100. Lists agents or the Connector by ID or lists all agents/Connectors associated with your user account if you did not specify an agent/Connector ID. The output includes agent/Connector IDs, IP addresses, media access control (MAC) addresses, agent/Connector health, host name where the agent/Connector resides, and the version number of each agent/Connector. Lists agents or the collector by ID or lists all agents/collectors associated with your user, if you did not specify an agent/collector ID. The output includes agent/collector IDs, IP addresses, media access control (MAC) addresses, agent/collector health, host name where the agent/collector resides, and the version number of each agent/collector. 
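Because a single DescribeAgents page tops out at 100 results, callers typically loop on the pagination token until it is absent. A generic sketch of that loop; `call_describe_agents` stands in for the real API call, and the `agentsInfo`/`nextToken` field names are assumptions for illustration:

```python
def list_all_agents(call_describe_agents, page_size=100):
    """Drain a paginated DescribeAgents-style API into one list.

    `call_describe_agents(max_results, next_token)` must return a dict with
    an `agentsInfo` list and an optional `nextToken` for the next page.
    """
    agents, token = [], None
    while True:
        page = call_describe_agents(max_results=page_size, next_token=token)
        agents.extend(page.get("agentsInfo", []))
        token = page.get("nextToken")
        if not token:
            return agents

# Fake two-page backend for illustration.
def fake_api(max_results, next_token):
    if next_token is None:
        return {"agentsInfo": [{"agentId": "a1"}], "nextToken": "p2"}
    return {"agentsInfo": [{"agentId": "a2"}]}

print(len(list_all_agents(fake_api)))  # prints 2
```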
If set to true, the export preferences is set to The recommended EC2 instance type that matches the CPU usage metric of server performance data. The recommended EC2 instance type that matches the Memory usage metric of server performance data. The target tenancy to use for your recommended EC2 instances. An array of instance types to exclude from recommendations. The target Amazon Web Services Region for the recommendations. You can use any of the Region codes available for the chosen service, as listed in Amazon Web Services service endpoints in the Amazon Web Services General Reference. The contract type for a reserved instance. If blank, we assume an On-Demand instance is preferred. Indicates that the exported data must include EC2 instance type matches for on-premises servers that are discovered through Amazon Web Services Application Discovery Service. Information regarding the export status of discovered data. The value is an array of objects. If enabled, exported data includes EC2 instance type matches for on-premises servers discovered through Amazon Web Services Application Discovery Service. Indicates the type of data that is being exported. Only one Details about Migration Evaluator collectors, including collector status and health. Details about Agentless Collector collectors, including status. The home region is not set. Set the home region to continue. The home Region is not set. Set the home Region to continue. The payment plan to use for your Reserved Instance. The flexibility to change the instance types needed for your Reserved Instance. The preferred duration of the Reserved Instance term. Used to provide Reserved Instance preferences for the recommendation. The IDs of the agents or connectors from which to start collecting data. If you send a request to an agent/connector ID that you do not have permission to contact, according to your Amazon Web Services account, the service does not throw an exception. 
Instead, it returns the error in the Description field. If you send a request to multiple agents/connectors and you do not have permission to contact some of those agents/connectors, the system does not throw an exception. Instead, the system shows The IDs of the agents from which to start collecting data. If you send a request to an agent ID that you do not have permission to contact, according to your Amazon Web Services account, the service does not throw an exception. Instead, it returns the error in the Description field. If you send a request to multiple agents and you do not have permission to contact some of those agents, the system does not throw an exception. Instead, the system shows Information about agents or the connector that were instructed to start collecting data. Information includes the agent/connector ID, a description of the operation performed, and whether the agent/connector configuration was updated. Information about agents that were instructed to start collecting data. Information includes the agent ID, a description of the operation performed, and whether the agent configuration was updated. If a filter is present, it selects the single If a filter is present, it selects the single The end timestamp for exported data from the single Application Discovery Agent selected in the filters. If no value is specified, exported data includes the most recent data collected by the agent. Indicates the type of data that needs to be exported. Only one ExportPreferences can be enabled at any time. The IDs of the agents or connectors from which to stop collecting data. The IDs of the agents from which to stop collecting data. Information about the agents or connector that were instructed to stop collecting data. Information includes the agent/connector ID, a description of the operation performed, and whether the agent/connector configuration was updated. Information about the agents that were instructed to stop collecting data. 
Information includes the agent ID, a description of the operation performed, and whether the agent configuration was updated. A utilization metric that is used by the recommendations. Specifies the percentage of the specified utilization metric that is used by the recommendations. Specifies the performance metrics to use for the server that is used for recommendations. Amazon Web Services Application Discovery Service helps you plan application migration projects. It automatically identifies servers, virtual machines (VMs), and network dependencies in your on-premises data centers. For more information, see the Amazon Web Services Application Discovery Service FAQ. Application Discovery Service offers three ways of performing discovery and collecting data about your on-premises servers: Agentless discovery is recommended for environments that use VMware vCenter Server. This mode doesn't require you to install an agent on each host. It does not work in non-VMware environments. Agentless discovery gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless discovery doesn't collect information about network dependencies, only agent-based discovery collects that information. Agent-based discovery collects a richer set of data than agentless discovery by using the Amazon Web Services Application Discovery Agent, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of running processes, system performance information, resource utilization, and network dependencies. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the cloud. 
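The EC2 recommendation and Reserved Instance preferences described above combine into one nested export-preferences structure. A sketch under the assumption of field names resembling the service model's Ec2RecommendationsExportPreferences shape; every name here should be checked against the model before use:

```python
def ec2_recommendation_preferences(enabled=True, cpu=None, ram=None,
                                   tenancy=None, excluded=(), region=None,
                                   reserved=None):
    """Build an EC2-recommendations preference block, omitting unset fields.

    Field names are assumptions modeled on the service's export-preferences
    shape (cpu/ram sizing basis, tenancy, excluded instance types, target
    Region, and Reserved Instance options).
    """
    prefs = {"enabled": enabled}
    if cpu is not None:
        prefs["cpuPerformanceMetricBasis"] = cpu
    if ram is not None:
        prefs["ramPerformanceMetricBasis"] = ram
    if tenancy:
        prefs["tenancy"] = tenancy
    if excluded:
        prefs["excludedInstanceTypes"] = list(excluded)
    if region:
        prefs["preferredRegion"] = region
    if reserved:
        prefs["reservedInstanceOptions"] = reserved
    return prefs

p = ec2_recommendation_preferences(
    tenancy="SHARED",
    excluded=["t2.micro"],
    reserved={"purchasingOption": "ALL_UPFRONT",
              "offeringClass": "STANDARD",
              "termLength": "ONE_YEAR"})
print(sorted(p))
```

Omitting unset fields rather than sending nulls mirrors how optional members are usually expressed in these request payloads.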
Amazon Web Services Partner Network (APN) solutions integrate with Application Discovery Service, enabling you to import details of your on-premises environment directly into Migration Hub without using the discovery connector or discovery agent. Third-party application discovery tools can query Amazon Web Services Application Discovery Service, and they can write to the Application Discovery Service database using the public API. In this way, you can import data into Migration Hub and view it, so that you can associate applications with servers and track migrations. Recommendations We recommend that you use agent-based discovery for non-VMware environments, and whenever you want to collect information about network dependencies. You can run agent-based and agentless discovery simultaneously. Use agentless discovery to complete the initial infrastructure assessment quickly, and then install agents on select hosts to collect additional information. Working With This Guide This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. Alternatively, you can use one of the Amazon Web Services SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see Amazon Web Services SDKs. Remember that you must set your Migration Hub home region before you call any of these APIs. You must make API calls for write actions (create, notify, associate, disassociate, import, or put) while in your home region, or a API calls for read actions (list, describe, stop, and delete) are permitted outside of your home region. Although it is unlikely, the Migration Hub home region could change. If you call APIs outside the home region, an You must call This guide is intended for use with the Amazon Web Services Application Discovery Service User Guide. 
All data is handled according to the Amazon Web Services Privacy Policy. You can operate Application Discovery Service offline to inspect collected data before it is shared with the service. Amazon Web Services Application Discovery Service (Application Discovery Service) helps you plan application migration projects. It automatically identifies servers, virtual machines (VMs), and network dependencies in your on-premises data centers. For more information, see the Amazon Web Services Application Discovery Service FAQ. Application Discovery Service offers three ways of performing discovery and collecting data about your on-premises servers: Agentless discovery using Amazon Web Services Application Discovery Service Agentless Collector (Agentless Collector), which doesn't require you to install an agent on each host. Agentless Collector gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless Collector doesn't collect information about network dependencies, only agent-based discovery collects that information. Agent-based discovery using the Amazon Web Services Application Discovery Agent (Application Discovery Agent) collects a richer set of data than agentless discovery, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of running processes, system performance information, resource utilization, and network dependencies. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the Amazon Web Services cloud. For more information, see Amazon Web Services Application Discovery Agent. 
Amazon Web Services Partner Network (APN) solutions integrate with Application Discovery Service, enabling you to import details of your on-premises environment directly into Amazon Web Services Migration Hub (Migration Hub) without using Agentless Collector or Application Discovery Agent. Third-party application discovery tools can query Amazon Web Services Application Discovery Service, and they can write to the Application Discovery Service database using the public API. In this way, you can import data into Migration Hub and view it, so that you can associate applications with servers and track migrations. Working With This Guide This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. Alternatively, you can use one of the Amazon Web Services SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see Amazon Web Services SDKs. Remember that you must set your Migration Hub home Region before you call any of these APIs. You must make API calls for write actions (create, notify, associate, disassociate, import, or put) while in your home Region, or a API calls for read actions (list, describe, stop, and delete) are permitted outside of your home Region. Although it is unlikely, the Migration Hub home Region could change. If you call APIs outside the home Region, an You must call This guide is intended for use with the Amazon Web Services Application Discovery Service User Guide. All data is handled according to the Amazon Web Services Privacy Policy. You can operate Application Discovery Service offline to inspect collected data before it is shared with the service. Creates a copy of an object that is already stored in Amazon S3. You can store individual objects of up to 5 TB in Amazon S3. 
You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API. For more information, see Copy Object Using the REST Multipart Upload API. All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account. A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy action starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the If the copy is successful, you receive a response with information about the copied object. If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body. The copy request charge is based on the storage class and Region that you specify for the destination object. For pricing information, see Amazon S3 pricing. Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 When copying an object, you can preserve all metadata (default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs. 
To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the To only copy an object under certain conditions, such as whether the If both the If both the All headers with the Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket. When copying an object, if you don't specify encryption information in your copy request, the encryption setting of the target object is set to the default encryption configuration of the destination bucket. By default, all buckets have a base level of encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). If the destination bucket has a default encryption configuration that uses server-side encryption with an Key Management Service (KMS) key (SSE-KMS), or a customer-provided encryption key (SSE-C), Amazon S3 uses the corresponding KMS key, or a customer-provided key to encrypt the target object copy. When you perform a CopyObject operation, if you want to use a different type of encryption setting for the target object, you can use other appropriate encryption-related headers to encrypt the target object with a KMS key, an Amazon S3 managed key, or a customer-provided key. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. If the encryption setting in your request is different from the default encryption configuration of the destination bucket, the encryption setting in your request takes precedence. If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying. For more information about server-side encryption, see Using Server-Side Encryption. If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. 
For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide. When copying an object, you can optionally use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API. If the bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide. If your bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner. When copying an object, if it has a checksum, that checksum will be copied to the new object by default. When you copy the object over, you may optionally specify a different checksum algorithm to use with the You can use the If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see RestoreObject. For more information, see Copying Objects. By default, If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for the object being copied. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null. 
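The paired copy preconditions above can be modeled as a small decision function. This is a simplified sketch of the x-amz-copy-source-If-Match / x-amz-copy-source-If-Unmodified-Since interaction only, with status codes abbreviated to 200/412; it is not a full implementation of S3's evaluation rules:

```python
def copy_precondition(etag, last_modified, if_match=None, if_unmodified_since=None):
    """Simplified model of the copy-source precondition pair: when both
    headers are present, the copy proceeds (200) as long as the ETag
    matches, even if the object was modified after the given timestamp."""
    match_ok = if_match is None or if_match == etag
    unmodified_ok = (if_unmodified_since is None
                     or last_modified <= if_unmodified_since)
    if if_match is not None and if_unmodified_since is not None:
        return 200 if match_ok else 412  # the ETag check wins over the time check
    return 200 if (match_ok and unmodified_ok) else 412

print(copy_precondition("abc", 100, if_match="abc", if_unmodified_since=50))  # 200
print(copy_precondition("abc", 100, if_match="xyz"))  # 412
```

Timestamps are plain integers here for clarity; a real client would compare HTTP dates.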
The following operations are related to Creates a copy of an object that is already stored in Amazon S3. You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API. For more information, see Copy Object Using the REST Multipart Upload API. All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account. A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy action starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the If the copy is successful, you receive a response with information about the copied object. If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body. The copy request charge is based on the storage class and Region that you specify for the destination object. For pricing information, see Amazon S3 pricing. Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 When copying an object, you can preserve all metadata (the default) or specify new metadata. However, the access control list (ACL) is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs. 
To specify whether you want the object metadata copied from the source object or replaced with metadata provided in the request, you can optionally add the To only copy an object under certain conditions, such as whether the If both the If both the All headers with the Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket. When copying an object, if you don't specify encryption information in your copy request, the encryption setting of the target object is set to the default encryption configuration of the destination bucket. By default, all buckets have a base level of encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). If the destination bucket has a default encryption configuration that uses server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with customer-provided encryption keys (SSE-C), Amazon S3 uses the corresponding KMS key, or a customer-provided key to encrypt the target object copy. When you perform a If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object. For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide. When copying an object, you can optionally use headers to grant ACL-based permissions. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual Amazon Web Services accounts or to predefined groups that are defined by Amazon S3. These permissions are then added to the ACL on the object. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API. If the bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. 
Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide. If your bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner. When copying an object, if it has a checksum, that checksum will be copied to the new object by default. When you copy the object over, you can optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header. You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 by using the StorageClass parameter. For more information, see Storage Classes in the Amazon S3 User Guide. If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see RestoreObject. For more information, see Copying Objects. By default, x-amz-copy-source identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted; to copy a different version, use the versionId subresource. If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for the object being copied. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header in the response. If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null. The following operations are related to CopyObject. Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner. Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules. If you want to create an Amazon S3 on Outposts bucket, see Create Bucket. By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. 
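The x-amz-version-id behavior just described can be sketched as a tiny model: a versioned target bucket yields a fresh ID distinct from the source's, while an unversioned (or suspended) target always reports null. The ID format here is made up for illustration; real S3 version IDs are opaque strings:

```python
import uuid


def version_id_for_copy(target_versioning_enabled: bool) -> str:
    """Model the documented x-amz-version-id value for a copied object."""
    if target_versioning_enabled:
        # Versioned target: S3 generates a unique ID, different from the source's.
        return uuid.uuid4().hex
    # Unversioned or suspended target: the generated version ID is always null.
    return "null"
```
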
For example, if you reside in Europe, you will probably find it advantageous to create buckets in the Europe (Ireland) Region. For more information, see Accessing a bucket. If you send your create bucket request to the s3.amazonaws.com endpoint, the request goes to the us-east-1 Region. When creating a bucket using this operation, you can optionally configure the bucket ACL to specify the accounts or groups that should be granted specific permissions on the bucket. If your CreateBucket request sets bucket owner enforced for S3 Object Ownership and specifies a bucket ACL that provides access to an external Amazon Web Services account, your request fails with a 400 error and returns the InvalidBucketAclWithObjectOwnership error code. There are two ways to grant the appropriate permissions using the request headers. Specify a canned ACL using the x-amz-acl request header. Specify access permissions explicitly using the x-amz-grant-read, x-amz-grant-write, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. You specify each grantee as a type=value pair, where the type is one of the following: id – if the value specified is the canonical user ID of an Amazon Web Services account; uri – if you are granting permissions to a predefined group; emailAddress – if the value specified is the email address of an Amazon Web Services account. Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions: … DescribeAgents as is without passing any parameters. DescribeContinuousExports as is without passing any parameters. 
DescribeTags as is without passing any parameters. If you do not include preferences or agentIds in the filter, a summary of all servers, applications, tags, and performance is generated. This data is an aggregation of all server data collected through on-premises tooling, file import, application grouping and applying tags. If you include agentIds in a filter, the task exports up to 72 hours of detailed data collected by the identified Application Discovery Agent, including network, process, and performance details. A time range for exported agent data may be set by using startTime and endTime. Export of detailed agent data is limited to five concurrently running exports. Export of summary data is limited to two exports per day. If you enable ec2RecommendationsPreferences in preferences, an Amazon EC2 instance matching the characteristics of each server in Application Discovery Service is generated. Changing the attributes of the ec2RecommendationsPreferences changes the criteria of the recommendation.
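The filter-driven choice described above — an agentIds filter yields a detailed per-agent export, no such filter yields a summary export — can be sketched as a minimal helper. The function name and return labels are hypothetical, for illustration only:

```python
def export_kind(filters: dict) -> str:
    """Model which export the documentation says a request produces,
    based on whether an agentIds filter is present."""
    if filters.get("agentIds"):
        return "detailed"  # up to 72 hours of per-agent data
    return "summary"       # aggregate data from all collection sources
```
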
StartImportTask command with the Amazon Web Services CLI or one of the Amazon Web Services SDKs to import the records from your file. StartDataCollection and StopDataCollection operations. The system has recorded the data collection operation. The agent receives this command the next time it polls for a new command.
"
+ "documentation":"
"
},
"s3Bucket":{
"shape":"S3Bucket",
@@ -910,14 +910,36 @@
"unknownAgentlessCollectors"
],
"members":{
- "activeAgentlessCollectors":{"shape":"Integer"},
- "healthyAgentlessCollectors":{"shape":"Integer"},
- "denyListedAgentlessCollectors":{"shape":"Integer"},
- "shutdownAgentlessCollectors":{"shape":"Integer"},
- "unhealthyAgentlessCollectors":{"shape":"Integer"},
- "totalAgentlessCollectors":{"shape":"Integer"},
- "unknownAgentlessCollectors":{"shape":"Integer"}
- }
+ "activeAgentlessCollectors":{
+ "shape":"Integer",
+ "documentation":"
SHUTDOWN status. Ec2RecommendationsExportPreferences. ExportPreferences can be enabled for a StartExportTask action. Failed in the Description field. agentId of the Application Discovery Agent for which data is exported. The agentId can be found in the results of the DescribeAgents API or CLI. If no filter is present, startTime and endTime are ignored and exported data includes both Amazon Web Services Application Discovery Service Agentless Collector collectors data and summary data from Application Discovery Agent agents.
HomeRegionNotSetException error is returned. InvalidInputException is returned. Call GetHomeRegion to obtain the latest Migration Hub home Region. 200 OK response. This means that a 200 OK response can contain either a success or an error. If you call the S3 API directly, make sure to design your application to parse the contents of the response and handle it appropriately. If you use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the embedded error and apply error handling per your configuration settings (including automatically retrying the request as appropriate). If the condition persists, the SDKs throw an exception (or, for the SDKs that don't use exceptions, they return the error). Amazon S3 transfer acceleration does not support cross-Region copies; a cross-Region copy through a transfer acceleration endpoint returns a 400 Bad Request error. For more information, see Transfer Acceleration.
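Because a copy error can arrive embedded in a 200 OK body, a caller of the raw REST API has to inspect the payload as well as the status code. A minimal sketch, assuming the standard S3 error payload whose XML root element is Error; the helper itself is hypothetical:

```python
def copy_succeeded(status_code: int, body: str) -> bool:
    """Return True only when a copy response is a genuine success.

    Per the documentation, a 200 OK response can still carry an error
    embedded in the body, so the status code alone is not sufficient.
    """
    if status_code != 200:
        return False
    # S3 error payloads are XML documents whose root element is <Error>.
    return "<Error>" not in body
```
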
x-amz-metadata-directive header. When you grant permissions, you can use the s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are uploaded. For more information, see Specifying Conditions in a Policy in the Amazon S3 User Guide. For a complete list of Amazon S3-specific condition keys, see Actions, Resources, and Condition Keys for Amazon S3.x-amz-website-redirect-location is unique to each object and must be specified in the request headers to copy the value.Etag matches or whether the object was modified before or after a specified date, use the following request parameters:
x-amz-copy-source-if-match x-amz-copy-source-if-none-match x-amz-copy-source-if-unmodified-since x-amz-copy-source-if-modified-since x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:
x-amz-copy-source-if-match condition evaluates to truex-amz-copy-source-if-unmodified-since condition evaluates to falsex-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:
x-amz-copy-source-if-none-match condition evaluates to falsex-amz-copy-source-if-modified-since condition evaluates to truex-amz- prefix, including x-amz-copy-source, must be signed.CopyObject operation, if you want to use a different type of encryption setting for the target object, you can use other appropriate encryption-related headers to encrypt the target object with a KMS key, an Amazon S3 managed key, or a customer-provided key. With server-side encryption, Amazon S3 encrypts your data as it writes your data to disks in its data centers and decrypts the data when you access it. If the encryption setting in your request is different from the default encryption configuration of the destination bucket, the encryption setting in your request takes precedence. If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying. For more information about server-side encryption, see Using Server-Side Encryption.PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format.x-amz-checksum-algorithm header.CopyObject action to change the storage class of an object that is already stored in Amazon S3 by using the StorageClass parameter. For more information, see Storage Classes in the Amazon S3 User Guide.x-amz-copy-source header identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted. To copy a different version, use the versionId subresource.x-amz-version-id response header in the response.CopyObject:s3.amazonaws.com endpoint, the request goes to the us-east-1 Region. 
Accordingly, the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle a 307 redirect. For more information, see Virtual hosting of buckets.
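The Region rules above map onto the request body CreateBucket expects: us-east-1 is the default and needs no configuration body, while any other Region is named in a LocationConstraint element. A minimal sketch (the helper name is illustrative; the XML shape is the documented CreateBucketConfiguration):

```python
def create_bucket_location_xml(region: str) -> str:
    """Build the CreateBucketConfiguration body for a target Region."""
    if region == "us-east-1":
        return ""  # default Region: omit the configuration body entirely
    return (
        '<CreateBucketConfiguration '
        'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
        f"<LocationConstraint>{region}</LocationConstraint>"
        "</CreateBucketConfiguration>"
    )
```
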
400 error and returns the InvalidBucketAclWithObjectOwnership error code. For more information, see Controlling object ownership in the Amazon S3 User Guide.
x-amz-acl request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.x-amz-grant-read, x-amz-grant-write, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These headers map to the set of permissions Amazon S3 supports in an ACL. For more information, see Access control list (ACL) overview.
id – if the value specified is the canonical user ID of an Amazon Web Services account; uri – if you are granting permissions to a predefined group; emailAddress – if the value specified is the email address of an Amazon Web Services account
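The type=value grantee syntax above can be sketched as a formatter for the x-amz-grant-* header values, which take quoted, comma-separated pairs. The helper is illustrative, not an SDK function:

```python
def grant_header_value(grantees) -> str:
    """Format (type, value) grantee pairs as an x-amz-grant-* header value.

    Valid types per the documentation: id (canonical user ID), uri
    (predefined group), and emailAddress (account email, supported only
    in certain Regions).
    """
    valid = {"id", "uri", "emailAddress"}
    pairs = []
    for gtype, value in grantees:
        if gtype not in valid:
            raise ValueError(f"unknown grantee type: {gtype}")
        pairs.append(f'{gtype}="{value}"')
    return ", ".join(pairs)
```
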