From dab3b0a97d8213105549f35914670dd7bd91d619 Mon Sep 17 00:00:00 2001
From: John DiSanti

AWS IAM Access Analyzer helps identify potential resource-access risks by enabling you to identify any policies that grant access to an external principal. It does this by using logic-based reasoning to analyze resource-based policies in your AWS environment. An external principal can be another AWS account, a root user, an IAM user or role, a federated user, an AWS service, or an anonymous user. You can also use Access Analyzer to preview and validate public and cross-account access to your resources before deploying permissions changes. This guide describes the AWS IAM Access Analyzer operations that you can call programmatically. For general information about Access Analyzer, see AWS IAM Access Analyzer in the IAM User Guide.

To start using Access Analyzer, you first need to create an analyzer.

You can use Amazon Web Services Certificate Manager (ACM) to manage SSL/TLS certificates for your Amazon Web Services-based websites and applications. For more information about using ACM, see the Amazon Web Services Certificate Manager User Guide.

This is the ACM Private CA API Reference. It provides descriptions, syntax, and usage examples for each of the actions and data types involved in creating and managing private certificate authorities (CA) for your organization.

The documentation for each action shows the Query API request parameters and the XML response. Alternatively, you can use one of the AWS SDKs to access an API that's tailored to the programming language or platform that you're using. For more information, see AWS SDKs.

Each ACM Private CA API action has a quota that determines the number of times the action can be called per second. For more information, see API Rate Quotas in ACM Private CA in the ACM Private CA User Guide.

Alexa for Business helps you use Alexa in your organization. Alexa for Business provides you with the tools to manage Alexa devices, enroll your users, and assign skills, at scale. You can build your own context-aware voice skills using the Alexa Skills Kit and the Alexa for Business API operations. You can also make these available as private skills for your organization. Alexa for Business makes it efficient to voice-enable your products and services, thus providing context-aware voice experiences for your customers. Device makers building with the Alexa Voice Service (AVS) can create fully integrated solutions, register their products with Alexa for Business, and manage them as shared devices in their organization.

Amplify enables developers to develop and deploy cloud-powered mobile and web apps. The Amplify Console provides a continuous delivery and hosting service for web applications. For more information, see the Amplify Console User Guide. The Amplify Framework is a comprehensive set of SDKs, libraries, tools, and documentation for client app development. For more information, see the Amplify Framework.

AWS Amplify Admin API

Amazon API Gateway helps developers deliver robust, secure, and scalable mobile and web application back ends. API Gateway allows developers to securely connect mobile and web applications to APIs that run on AWS Lambda, Amazon EC2, or other publicly addressable web services that are hosted outside of AWS.
[Required] The name of the DomainName resource.

The ARN of the public certificate issued by ACM to validate ownership of your custom domain. Only required when configuring mutual TLS and using an ACM imported or private CA certificate ARN as the regionalCertificateArn.

The mutual TLS authentication configuration for a custom domain name. If specified, API Gateway performs two-way authentication between the client and the server. Clients must present a trusted certificate to access your custom domain name.

The endpoint configuration of this DomainName showing the endpoint types of the domain name.

The custom domain name as an API host name, for example, my-api.example.com.

The status of the DomainName migration. The valid values are AVAILABLE and UPDATING. If the status is UPDATING, the domain cannot be modified further until the existing operation is complete. If it is AVAILABLE, the domain can be updated.

The status of the DomainName migration. The valid values are AVAILABLE, UPDATING, PENDING_CERTIFICATE_REIMPORT, and PENDING_OWNERSHIP_VERIFICATION. If the status is UPDATING, the domain cannot be modified further until the existing operation is complete. If it is AVAILABLE, the domain can be updated.

An optional text message containing detailed information about the status of the DomainName migration.

The Amazon API Gateway Management API allows you to directly manage runtime aspects of your deployed APIs. To use it, you must explicitly set the SDK's endpoint to point to the endpoint of your deployed API. The endpoint will be of the form https://{api-id}.execute-api.{region}.amazonaws.com/{stage}, or will be the endpoint corresponding to your API's custom domain and base path, if applicable.

Amazon API Gateway V2

The timestamp when the certificate that was used by the edge-optimized endpoint for this domain name was uploaded.

The status of the domain name migration. The valid values are AVAILABLE and UPDATING. If the status is UPDATING, the domain cannot be modified further until the existing operation is complete. If it is AVAILABLE, the domain can be updated.

The status of the domain name migration. The valid values are AVAILABLE, UPDATING, PENDING_CERTIFICATE_REIMPORT, and PENDING_OWNERSHIP_VERIFICATION. If the status is UPDATING, the domain cannot be modified further until the existing operation is complete. If it is AVAILABLE, the domain can be updated.

An optional text message containing detailed information about the status of the domain name migration.

The Transport Layer Security (TLS) version of the security policy for this domain name. The valid values are TLS_1_0 and TLS_1_2.

A domain name for the API.
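The domain-name fields above come together in a call like the following minimal sketch, written against the generated Rust fluent client (aws-sdk-apigateway). The builder and field names are assumptions inferred from the doc strings in this patch, and the ARNs and truststore bucket are hypothetical:

```rust
use aws_sdk_apigateway::model::{EndpointConfiguration, EndpointType, MutualTlsAuthenticationInput};

// Sketch: create a custom domain name with mutual TLS enabled. The
// ownership-verification certificate is only needed when the regional
// certificate is an ACM imported or private CA certificate.
async fn create_mtls_domain(
    client: &aws_sdk_apigateway::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    client
        .create_domain_name()
        .domain_name("my-api.example.com")
        // Hypothetical ARNs, for illustration only.
        .regional_certificate_arn("arn:aws:acm:us-east-1:111122223333:certificate/private-ca-cert")
        .ownership_verification_certificate_arn(
            "arn:aws:acm:us-east-1:111122223333:certificate/public-cert",
        )
        .endpoint_configuration(
            EndpointConfiguration::builder()
                .types(EndpointType::Regional)
                .build(),
        )
        // Clients must present a certificate chained to this truststore.
        .mutual_tls_authentication(
            MutualTlsAuthenticationInput::builder()
                .truststore_uri("s3://my-truststore-bucket/truststore.pem")
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```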
Use AWS AppConfig, a capability of AWS Systems Manager, to create, manage, and quickly deploy application configurations. AppConfig supports controlled deployments to applications of any size and includes built-in validation checks and monitoring. You can use AppConfig with applications hosted on Amazon EC2 instances, AWS Lambda, containers, mobile applications, or IoT devices.

To prevent errors when deploying application configurations, especially for production systems where a simple typo could cause an unexpected outage, AppConfig includes validators. A validator provides a syntactic or semantic check to ensure that the configuration you want to deploy works as intended. To validate your application configuration data, you provide a schema or a Lambda function that runs against the configuration (a sketch follows below). The configuration deployment or update can only proceed when the configuration data is valid.

During a configuration deployment, AppConfig monitors the application to ensure that the deployment is successful. If the system encounters an error, AppConfig rolls back the change to minimize impact for your application users. You can configure a deployment strategy for each application or environment that includes deployment criteria, including velocity, bake time, and alarms to monitor. Similar to error monitoring, if a deployment triggers an alarm, AppConfig automatically rolls back to the previous version.

AppConfig supports multiple use cases. Here are some examples.

Application tuning: Use AppConfig to carefully introduce changes to your application that can only be tested with production traffic.

Feature toggle: Use AppConfig to turn on new features that require a timely deployment, such as a product launch or announcement.

Allow list: Use AppConfig to allow premium subscribers to access paid content.

Operational issues: Use AppConfig to reduce stress on your application when a dependency or other external factor impacts the system.

This reference is intended to be used with the AWS AppConfig User Guide.
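As a concrete illustration of the validator mechanism described above, here is a minimal sketch using the generated aws-sdk-appconfig client. It assumes the JSON Schema validator type and the hosted configuration store; the profile name and schema content are hypothetical:

```rust
use aws_sdk_appconfig::model::{Validator, ValidatorType};

// Sketch: attach a JSON Schema validator to a configuration profile so that
// deployments of configuration data failing the schema check are rejected.
async fn create_validated_profile(
    client: &aws_sdk_appconfig::Client,
    application_id: &str,
) -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical schema: every configuration document must carry timeout_ms.
    let schema = r#"{"type":"object","required":["timeout_ms"]}"#;
    client
        .create_configuration_profile()
        .application_id(application_id)
        .name("my-app-profile")
        .location_uri("hosted")
        .validators(
            Validator::builder()
                .r#type(ValidatorType::JsonSchema)
                .content(schema)
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```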
Welcome to the Amazon AppFlow API reference. This guide is for developers who need detailed information about the Amazon AppFlow API operations, data types, and errors.

Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between software as a service (SaaS) applications like Salesforce, Marketo, Slack, and ServiceNow, and AWS services like Amazon S3 and Amazon Redshift.

Use the following links to get started on the Amazon AppFlow API:

Actions: An alphabetical list of all Amazon AppFlow API operations.
Data types: An alphabetical list of all Amazon AppFlow data types.
Common parameters: Parameters that all Query operations can use.
Common errors: Client and server errors that all operations can return.

If you're new to Amazon AppFlow, we recommend that you review the Amazon AppFlow User Guide.

Amazon AppFlow API users can use vendor-specific mechanisms for OAuth, and include applicable OAuth attributes (such as auth-code and redirecturi) with the connector-specific ConnectorProfileProperties when creating a new connector profile using Amazon AppFlow API operations. For example, Salesforce users can refer to the Authorize Apps with OAuth documentation.

The Amazon AppIntegrations service enables you to configure and reuse connections to external applications. For information about how you can use external applications with Amazon Connect, see Set up pre-built integrations in the Amazon Connect Administrator Guide.

With Application Auto Scaling, you can configure automatic scaling for the following resources:

Amazon ECS services
Amazon EC2 Spot Fleet requests
Amazon EMR clusters
Amazon AppStream 2.0 fleets
Amazon DynamoDB tables and global secondary indexes throughput capacity
Amazon Aurora Replicas
Amazon SageMaker endpoint variants
Custom resources provided by your own applications or services
Amazon Comprehend document classification and entity recognizer endpoints
AWS Lambda function provisioned concurrency
Amazon Keyspaces (for Apache Cassandra) tables
Amazon Managed Streaming for Apache Kafka broker storage

API Summary

The Application Auto Scaling service API includes three key sets of actions:

Register and manage scalable targets - Register AWS or custom resources as scalable targets (a resource that Application Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve information on existing scalable targets.

Configure and manage automatic scaling - Define scaling policies to dynamically scale your resources in response to CloudWatch alarms, schedule one-time or recurring scaling actions, and retrieve your recent scaling activity history.

Suspend and resume scaling - Temporarily suspend and later resume automatic scaling by calling the RegisterScalableTarget API action for any Application Auto Scaling scalable target. You can suspend and resume (individually or in combination) scale-out activities that are triggered by a scaling policy, scale-in activities that are triggered by a scaling policy, and scheduled scaling. A sketch of this call appears below.

To learn more about Application Auto Scaling, including information about granting IAM users required permissions for Application Auto Scaling actions, see the Application Auto Scaling User Guide.

This reference provides descriptions of the AWS Application Cost Profiler API. The AWS Application Cost Profiler API provides programmatic access to view, create, update, and delete application cost report definitions, as well as to import your usage data into the Application Cost Profiler service. For more information about using this service, see the AWS Application Cost Profiler User Guide.

AWS Application Discovery Service helps you plan application migration projects. It automatically identifies servers, virtual machines (VMs), and network dependencies in your on-premises data centers. For more information, see the AWS Application Discovery Service FAQ.
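The suspend-and-resume behavior called out in the Application Auto Scaling summary above goes through RegisterScalableTarget. A minimal sketch with the generated aws-sdk-applicationautoscaling client, using a hypothetical ECS service as the scalable target:

```rust
use aws_sdk_applicationautoscaling::model::{
    ScalableDimension, ServiceNamespace, SuspendedState,
};

// Sketch: register an ECS service's desired count as a scalable target and
// suspend scale-in activities; calling RegisterScalableTarget again with the
// flag set to false would resume them.
async fn register_and_suspend(
    client: &aws_sdk_applicationautoscaling::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    client
        .register_scalable_target()
        .service_namespace(ServiceNamespace::Ecs)
        .resource_id("service/my-cluster/my-service") // hypothetical resource
        .scalable_dimension(ScalableDimension::EcsServiceDesiredCount)
        .min_capacity(1)
        .max_capacity(10)
        .suspended_state(
            SuspendedState::builder()
                .dynamic_scaling_in_suspended(true)
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```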
Application Discovery Service offers three ways of performing discovery and collecting data about your on-premises servers:

Agentless discovery is recommended for environments that use VMware vCenter Server. This mode doesn't require you to install an agent on each host. It does not work in non-VMware environments. Agentless discovery gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless discovery doesn't collect information about network dependencies; only agent-based discovery collects that information.

Agent-based discovery collects a richer set of data than agentless discovery by using the AWS Application Discovery Agent, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of running processes, system performance information, resource utilization, and network dependencies. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the cloud.

AWS Partner Network (APN) solutions integrate with Application Discovery Service, enabling you to import details of your on-premises environment directly into Migration Hub without using the discovery connector or discovery agent. Third-party application discovery tools can query AWS Application Discovery Service, and they can write to the Application Discovery Service database using the public API. In this way, you can import data into Migration Hub and view it, so that you can associate applications with servers and track migrations.

Recommendations

We recommend that you use agent-based discovery for non-VMware environments, and whenever you want to collect information about network dependencies. You can run agent-based and agentless discovery simultaneously. Use agentless discovery to complete the initial infrastructure assessment quickly, and then install agents on select hosts to collect additional information.

Working With This Guide

This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. Alternatively, you can use one of the AWS SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see AWS SDKs.

Remember that you must set your Migration Hub home region before you call any of these APIs. You must make API calls for write actions (create, notify, associate, disassociate, import, or put) while in your home region, or a HomeRegionNotSetException error is returned. API calls for read actions (list, describe, stop, and delete) are permitted outside of your home region. Although it is unlikely, the Migration Hub home region could change. If you call APIs outside the home region, an InvalidInputException is returned. You must call GetHomeRegion to obtain the latest Migration Hub home region; a sketch of this check follows below.

This guide is intended for use with the AWS Application Discovery Service User Guide.

All data is handled according to the AWS Privacy Policy. You can operate Application Discovery Service offline to inspect collected data before it is shared with the service.
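The home-region rule above can be checked programmatically with the GetHomeRegion call. A minimal sketch, assuming the generated migrationhub-config client and that the response exposes the home region as a public field:

```rust
// Sketch: look up the Migration Hub home region before making write calls
// (create, notify, associate, disassociate, import, or put).
async fn check_home_region(
    client: &aws_sdk_migrationhubconfig::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    let resp = client.get_home_region().send().await?;
    match resp.home_region {
        Some(region) => {
            println!("make write calls in the home region: {}", region);
        }
        None => {
            // Without a home region, write calls fail with HomeRegionNotSetException.
            println!("no home region set; configure one before calling write actions");
        }
    }
    Ok(())
}
```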
Amazon CloudWatch Application Insights is a service that helps you detect common problems with your applications. It enables you to pinpoint the source of issues in your applications (built with technologies such as Microsoft IIS, .NET, and Microsoft SQL Server), by providing key insights into detected problems.

After you onboard your application, CloudWatch Application Insights identifies, recommends, and sets up metrics and logs. It continuously analyzes and correlates your metrics and logs for unusual behavior to surface actionable problems with your application. For example, if your application is slow and unresponsive and leading to HTTP 500 errors in your Application Load Balancer (ALB), Application Insights informs you that a memory pressure problem with your SQL Server database is occurring. It bases this analysis on impactful metrics and log errors.

App Mesh is a service mesh based on the Envoy proxy that makes it easy to monitor and control microservices. App Mesh standardizes how your microservices communicate, giving you end-to-end visibility and helping to ensure high availability for your applications.

App Mesh gives you consistent visibility and network traffic controls for every microservice in an application. You can use App Mesh with Amazon Web Services Fargate, Amazon ECS, Amazon EKS, Kubernetes on Amazon Web Services, and Amazon EC2.

App Mesh supports microservice applications that use service discovery naming for their components. For more information about service discovery on Amazon ECS, see Service Discovery in the Amazon Elastic Container Service Developer Guide. Kubernetes kube-dns and coredns are supported. For more information, see DNS for Services and Pods in the Kubernetes documentation.

AWS App Runner is an application service that provides a fast, simple, and cost-effective way to go directly from an existing container image or source code to a running service in the AWS cloud in seconds. You don't need to learn new technologies, decide which compute service to use, or understand how to provision and configure AWS resources.

App Runner connects directly to your container registry or source code repository. It provides an automatic delivery pipeline with fully managed operations, high performance, scalability, and security.

For more information about App Runner, see the AWS App Runner Developer Guide. For release information, see the AWS App Runner Release Notes.

To install the Software Development Kits (SDKs), Integrated Development Environment (IDE) Toolkits, and command line tools that you can use to access the API, see Tools for Amazon Web Services.

Endpoints

For a list of Region-specific endpoints that App Runner supports, see AWS App Runner endpoints and quotas in the AWS General Reference.

This is the Amazon AppStream 2.0 API Reference. This documentation provides descriptions and syntax for each of the actions and data types in AppStream 2.0. AppStream 2.0 is a fully managed, secure application streaming service that lets you stream desktop applications to users without rewriting applications. AppStream 2.0 manages the AWS resources that are required to host and run your applications, scales automatically, and provides access to your users on demand.

You can call the AppStream 2.0 API operations by using an interface VPC endpoint (interface endpoint). For more information, see Access AppStream 2.0 API Operations and CLI Commands Through an Interface VPC Endpoint in the Amazon AppStream 2.0 Administration Guide.

To learn more about AppStream 2.0, see the following resources:

AppSync provides API actions for creating and interacting with data sources using GraphQL from your application.
Amazon Athena is an interactive query service that lets you use standard SQL to analyze data directly in Amazon S3. You can point Athena at your data in Amazon S3 and run ad-hoc queries and get results in seconds. Athena is serverless, so there is no infrastructure to set up or manage. You pay only for the queries you run. Athena scales automatically—executing queries in parallel—so results are fast, even with large datasets and complex queries. For more information, see What is Amazon Athena in the Amazon Athena User Guide. If you connect to Athena using the JDBC driver, use version 1.1.0 of the driver or later with the Amazon Athena API. Earlier version drivers do not support the API. For more information and to download the driver, see Accessing Amazon Athena with JDBC. For code samples using the AWS SDK for Java, see Examples and Code Samples in the Amazon Athena User Guide.

Amazon Athena is an interactive query service that lets you use standard SQL to analyze data directly in Amazon S3. You can point Athena at your data in Amazon S3 and run ad-hoc queries and get results in seconds. Athena is serverless, so there is no infrastructure to set up or manage. You pay only for the queries you run. Athena scales automatically—executing queries in parallel—so results are fast, even with large datasets and complex queries. For more information, see What is Amazon Athena in the Amazon Athena User Guide. If you connect to Athena using the JDBC driver, use version 1.1.0 of the driver or later with the Amazon Athena API. Earlier version drivers do not support the API. For more information and to download the driver, see Accessing Amazon Athena with JDBC. For code samples using the Amazon Web Services SDK for Java, see Examples and Code Samples in the Amazon Athena User Guide.

- /// The name of the data catalog to create. The catalog name must be unique for the AWS
- /// account and can use a maximum of 128 alphanumeric, underscore, at sign, or hyphen
- /// characters.
+ /// The name of the data catalog to create. The catalog name must be unique for the
+ /// Amazon Web Services account and can use a maximum of 128 alphanumeric, underscore, at
+ /// sign, or hyphen characters.
- /// The type of data catalog to create: LAMBDA for a federated catalog or
- /// HIVE for an external hive metastore.
- /// Do not use the GLUE type. This refers to the
- /// AwsDataCatalog that already exists in your account, of which you
- /// can have only one. Specifying the GLUE type will result in an
- /// INVALID_INPUT error.
+ /// The type of data catalog to create: LAMBDA for a federated catalog,
+ /// HIVE for an external hive metastore, or GLUE for a
+ /// Glue Data Catalog.
- /// Specifies the Lambda function or functions to use for creating the data catalog. This
- /// is a mapping whose values depend on the catalog type.
+ /// Specifies the Lambda function or functions to use for creating the data
+ /// catalog. This is a mapping whose values depend on the catalog type.
- /// If you have one Lambda function that processes metadata and another
- /// for reading the actual data, use the following syntax. Both parameters
- /// are required.
+ /// If you have one Lambda function that processes metadata
+ /// and another for reading the actual data, use the following syntax. Both
+ /// parameters are required.
- /// If you have a composite Lambda function that processes both metadata
- /// and data, use the following syntax to specify your Lambda
- /// function.
+ /// If you have a composite Lambda function that processes
+ /// both metadata and data, use the following syntax to specify your Lambda function.
+ /// The GLUE data catalog type also applies to the default
+ /// AwsDataCatalog that already exists in your account, of
+ /// which you can have only one and cannot modify.
+ /// Queries that specify a Glue Data Catalog other than the default
+ /// AwsDataCatalog must be run on Athena engine version 2.
+ /// In Regions where Athena engine version 2 is not available,
+ /// creating new Glue data catalogs results in an INVALID_INPUT error.
///
pub fn parameters(
mut self,
@@ -443,9 +466,10 @@ pub mod fluent_builders {
/// received, the same response is returned and another query is not created. If a parameter
/// has changed, for example, the QueryString, an error is returned.
@@ -320,9 +315,9 @@ pub mod fluent_builders {
/// For the HIVE data catalog type, use the following syntax. The
/// metadata-function parameter is required.
/// For the LAMBDA data catalog type, use one of the following sets
/// of required parameters, but not both.
///
/// metadata-function=lambda_arn,
/// record-function=lambda_arn
@@ -330,9 +325,8 @@ pub mod fluent_builders {
///
/// function=lambda_arn
///
@@ -340,6 +334,35 @@ pub mod fluent_builders {
+ /// The GLUE type takes a catalog ID parameter and is required. The
+ /// catalog_id is the account ID of the
+ /// Amazon Web Services account to which the Glue Data Catalog
+ /// belongs.
+ ///
+ /// catalog-id=catalog_id
+ ///
+ /// The GLUE data catalog type also applies to the default
+ /// AwsDataCatalog that already exists in your account, of
+ /// which you can have only one and cannot modify.
+ /// Queries that specify a Glue Data Catalog other than the default
+ /// AwsDataCatalog must be run on Athena engine version 2.
+ /// In Regions where Athena engine version 2 is not available,
+ /// creating new Glue data catalogs results in an INVALID_INPUT error.
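Taken together, the new GLUE catalog type and its catalog-id parameter can be exercised as in this minimal sketch against the aws-sdk-athena fluent client; the catalog name and account ID are hypothetical:

```rust
use aws_sdk_athena::model::DataCatalogType;

// Sketch: register a Glue Data Catalog from another account using the GLUE
// type and the catalog-id parameter documented above.
async fn create_glue_catalog(
    client: &aws_sdk_athena::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    client
        .create_data_catalog()
        .name("cross_account_glue")
        .r#type(DataCatalogType::Glue)
        .parameters("catalog-id", "111122223333")
        .send()
        .await?;
    Ok(())
}
```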
/// received, the same response is returned and another query is not created. If a parameter
/// has changed, for example, the QueryString, an error is returned.
- /// This token is listed as not required because AWS SDKs (for example the AWS SDK for
- /// Java) auto-generate the token for users. If you are not using the AWS SDK or the AWS
- /// CLI, you must provide this token or the action will fail.
+ /// This token is listed as not required because Amazon Web Services SDKs (for example
+ /// the Amazon Web Services SDK for Java) auto-generate the token for users. If you are
+ /// not using the Amazon Web Services SDK or the Amazon Web Services CLI, you must provide
+ /// this token or the action will fail.
///
- /// The configuration for the workgroup, which includes the location in Amazon S3 where
- /// query results are stored, the encryption configuration, if any, used for encrypting
- /// query results, whether the Amazon CloudWatch Metrics are enabled for the workgroup, the
- /// limit for the amount of bytes scanned (cutoff) per query, if it is specified, and
- /// whether workgroup's settings (specified with EnforceWorkGroupConfiguration) in the
- /// WorkGroupConfiguration override client-side settings. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.
+ /// The configuration for the workgroup, which includes the location in Amazon S3
+ /// where query results are stored, the encryption configuration, if any, used for
+ /// encrypting query results, whether the Amazon CloudWatch Metrics are enabled for the
+ /// workgroup, the limit for the amount of bytes scanned (cutoff) per query, if it is
+ /// specified, and whether workgroup's settings (specified with
+ /// EnforceWorkGroupConfiguration) in the
+ /// WorkGroupConfiguration override client-side settings. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.
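For reference, the idempotency behavior documented above looks like this in use. A minimal sketch, assuming output fields are public in this version of the SDK; the token value is hypothetical and must be unique per logical request (the SDKs normally auto-generate it):

```rust
// Sketch: pass an explicit client request token so a retried call returns the
// original query execution instead of starting a second one.
async fn start_idempotent_query(
    client: &aws_sdk_athena::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    let resp = client
        .start_query_execution()
        .query_string("SELECT 1")
        .work_group("primary")
        .client_request_token("a1b2c3d4-e5f6-7890-abcd-ef0123456789")
        .send()
        .await?;
    println!("query execution id: {:?}", resp.query_execution_id);
    Ok(())
}
```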
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub fn next_token(mut self, input: impl Into<std::string::String>) -> Self {
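A minimal sketch of the NextToken protocol described above: keep passing the token from each response into the next request until it comes back empty. It assumes output fields are public, as in this version of the SDK:

```rust
// Sketch: drain all pages of ListWorkGroups using the NextToken handshake.
async fn list_all_work_groups(
    client: &aws_sdk_athena::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    let mut next_token: Option<String> = None;
    loop {
        let resp = client
            .list_work_groups()
            .set_next_token(next_token.take())
            .send()
            .await?;
        for wg in resp.work_groups.unwrap_or_default() {
            println!("{:?}", wg.name);
        }
        // Continue until the service stops returning a token.
        next_token = resp.next_token;
        if next_token.is_none() {
            break;
        }
    }
    Ok(())
}
```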
- /// Specifies the ARN of the Athena resource (workgroup or data catalog) to which tags are
- /// to be added.
+ /// Specifies the ARN of the Athena resource (workgroup or data catalog) to
+ /// which tags are to be added.
pub fn resource_arn(mut self, input: impl Into<std::string::String>) -> Self {
- /// A collection of one or more tags, separated by commas, to be added to an Athena
- /// workgroup or data catalog resource.
+ /// A collection of one or more tags, separated by commas, to be added to an Athena workgroup or data catalog resource.
pub fn tags(mut self, inp: impl Into<crate::model::Tag>) -> Self {
- /// The name of the data catalog to update. The catalog name must be unique for the AWS
- /// account and can use a maximum of 128 alphanumeric, underscore, at sign, or hyphen
- /// characters.
+ /// The name of the data catalog to update. The catalog name must be unique for the
+ /// Amazon Web Services account and can use a maximum of 128 alphanumeric, underscore, at
+ /// sign, or hyphen characters.
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
- /// Specifies the type of data catalog to update. Specify LAMBDA for a
- /// federated catalog or HIVE for an external hive metastore.
- /// Do not use the GLUE type. This refers to the
- /// AwsDataCatalog that already exists in your account, of which you
- /// can have only one. Specifying the GLUE type will result in an
- /// INVALID_INPUT error.
+ /// Specifies the type of data catalog to update. Specify LAMBDA for a
+ /// federated catalog, HIVE for an external hive metastore, or
+ /// GLUE for a Glue Data Catalog.
pub fn r#type(mut self, input: crate::model::DataCatalogType) -> Self {
self.inner = self.inner.r#type(input);
self
@@ -2086,8 +2106,8 @@ pub mod fluent_builders {
self.inner = self.inner.set_description(input);
self
}
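Since several of the changed doc comments above concern TagResource, here is a minimal sketch of tagging an Athena workgroup with the fluent client; the ARN is hypothetical:

```rust
use aws_sdk_athena::model::Tag;

// Sketch: add a single tag to a workgroup via TagResource.
async fn tag_workgroup(
    client: &aws_sdk_athena::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    client
        .tag_resource()
        .resource_arn("arn:aws:athena:us-east-1:111122223333:workgroup/primary")
        .tags(Tag::builder().key("team").value("analytics").build())
        .send()
        .await?;
    Ok(())
}
```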
- /// Specifies the Lambda function or functions to use for updating the data catalog. This
- /// is a mapping whose values depend on the catalog type.
+ /// Specifies the Lambda function or functions to use for updating the data
+ /// catalog. This is a mapping whose values depend on the catalog type.
///
/// For the HIVE data catalog type, use the following syntax. The
/// metadata-function parameter is required.
@@ -2105,9 +2125,9 @@ pub mod fluent_builders {
/// of required parameters, but not both.
- /// If you have one Lambda function that processes metadata and another
- /// for reading the actual data, use the following syntax. Both parameters
- /// are required.
+ /// If you have one Lambda function that processes metadata
+ /// and another for reading the actual data, use the following syntax. Both
+ /// parameters are required.
///
/// metadata-function=lambda_arn,
/// record-function=lambda_arn
@@ -2115,9 +2135,8 @@ pub mod fluent_builders {
///
- /// If you have a composite Lambda function that processes both metadata
- /// and data, use the following syntax to specify your Lambda
- /// function.
+ /// If you have a composite Lambda function that processes
+ /// both metadata and data, use the following syntax to specify your Lambda function.
///
/// function=lambda_arn
///
diff --git a/sdk/athena/src/error.rs b/sdk/athena/src/error.rs
index 8bfb64e95faf..941cf0e0e8ce 100644
--- a/sdk/athena/src/error.rs
+++ b/sdk/athena/src/error.rs
@@ -3429,11 +3429,12 @@ impl TooManyRequestsException {
}
}
-/// An exception that Athena received when it called a custom metastore. Occurs if the
-/// error is not caused by user input (InvalidRequestException) or from the
-/// Athena platform (InternalServerException). For example, if a user-created
-/// Lambda function is missing permissions, the Lambda 4XX exception is
-/// returned in a MetadataException.
+/// An exception that Athena received when it called a custom metastore.
+/// Occurs if the error is not caused by user input (InvalidRequestException)
+/// or from the Athena platform (InternalServerException). For
+/// example, if a user-created Lambda function is missing permissions, the
+/// Lambda 4XX exception is returned in a MetadataException.
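A MetadataException can be told apart from other failures when calling a catalog operation. The following is a rough sketch only; the error-matching shape (a re-exported SdkError with a ServiceError variant and a kind enum) is an assumption about this SDK's generated error types:

```rust
use aws_sdk_athena::error::ListTableMetadataErrorKind;
use aws_sdk_athena::SdkError;

// Sketch: surface custom-metastore failures separately from other errors.
async fn list_tables(
    client: &aws_sdk_athena::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    match client
        .list_table_metadata()
        .catalog_name("AwsDataCatalog")
        .database_name("default")
        .send()
        .await
    {
        Ok(resp) => {
            for table in resp.table_metadata_list.unwrap_or_default() {
                println!("{:?}", table.name);
            }
        }
        Err(SdkError::ServiceError { err, .. }) => match err.kind {
            // Raised when a user-created metastore (for example, a Lambda
            // function missing permissions) fails, per the doc change above.
            ListTableMetadataErrorKind::MetadataException(e) => {
                eprintln!("custom metastore error: {:?}", e.message)
            }
            other => eprintln!("other service error: {:?}", other),
        },
        Err(e) => eprintln!("request failed: {:?}", e),
    }
    Ok(())
}
```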
- /// The name of the data catalog to create. The catalog name must be unique for the AWS
- /// account and can use a maximum of 128 alphanumeric, underscore, at sign, or hyphen
- /// characters.
+ /// The name of the data catalog to create. The catalog name must be unique for the
+ /// Amazon Web Services account and can use a maximum of 128 alphanumeric, underscore, at
+ /// sign, or hyphen characters.
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
- /// The type of data catalog to create: LAMBDA for a federated catalog or
- /// HIVE for an external hive metastore.
- /// Do not use the GLUE type. This refers to the
- /// AwsDataCatalog that already exists in your account, of which you
- /// can have only one. Specifying the GLUE type will result in an
- /// INVALID_INPUT error.
+ /// The type of data catalog to create: LAMBDA for a federated catalog,
+ /// HIVE for an external hive metastore, or GLUE for a
+ /// Glue Data Catalog.
/// has changed, for example, the QueryString, an error is returned.
- /// This token is listed as not required because AWS SDKs (for example the AWS SDK for
- /// Java) auto-generate the token for users. If you are not using the AWS SDK or the AWS
- /// CLI, you must provide this token or the action will fail.
+ /// This token is listed as not required because Amazon Web Services SDKs (for example
+ /// the Amazon Web Services SDK for Java) auto-generate the token for users. If you are
+ /// not using the Amazon Web Services SDK or the Amazon Web Services CLI, you must provide
+ /// this token or the action will fail.
///
- /// The configuration for the workgroup, which includes the location in Amazon S3 where
- /// query results are stored, the encryption configuration, if any, used for encrypting
- /// query results, whether the Amazon CloudWatch Metrics are enabled for the workgroup, the
- /// limit for the amount of bytes scanned (cutoff) per query, if it is specified, and
- /// whether workgroup's settings (specified with EnforceWorkGroupConfiguration) in the
- /// WorkGroupConfiguration override client-side settings. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.
+ /// The configuration for the workgroup, which includes the location in Amazon S3
+ /// where query results are stored, the encryption configuration, if any, used for
+ /// encrypting query results, whether the Amazon CloudWatch Metrics are enabled for the
+ /// workgroup, the limit for the amount of bytes scanned (cutoff) per query, if it is
+ /// specified, and whether workgroup's settings (specified with
+ /// EnforceWorkGroupConfiguration) in the
+ /// WorkGroupConfiguration override client-side settings. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub fn next_token(mut self, input: impl Into<std::string::String>) -> Self {
- /// Specifies the ARN of the Athena resource (workgroup or data catalog) to which tags are
- /// to be added.
+ /// Specifies the ARN of the Athena resource (workgroup or data catalog) to
+ /// which tags are to be added.
pub fn resource_arn(mut self, input: impl Into<std::string::String>) -> Self {
- /// The name of the data catalog to update. The catalog name must be unique for the AWS
- /// account and can use a maximum of 128 alphanumeric, underscore, at sign, or hyphen
- /// characters.
+ /// The name of the data catalog to update. The catalog name must be unique for the
+ /// Amazon Web Services account and can use a maximum of 128 alphanumeric, underscore, at
+ /// sign, or hyphen characters.
pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
- /// Specifies the type of data catalog to update. Specify LAMBDA for a
- /// federated catalog or HIVE for an external hive metastore.
- /// Do not use the GLUE type. This refers to the
- /// AwsDataCatalog that already exists in your account, of which you
- /// can have only one. Specifying the GLUE type will result in an
- /// INVALID_INPUT error.
+ /// Specifies the type of data catalog to update. Specify LAMBDA for a
+ /// federated catalog, HIVE for an external hive metastore, or
+ /// GLUE for a Glue Data Catalog.
pub fn r#type(mut self, input: crate::model::DataCatalogType) -> Self {
self.r#type = Some(input);
self
@@ -5569,23 +5562,18 @@ impl std::fmt::Debug for UpdatePreparedStatementInput {
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct UpdateDataCatalogInput {
- /// The name of the data catalog to update. The catalog name must be unique for the AWS
- /// account and can use a maximum of 128 alphanumeric, underscore, at sign, or hyphen
- /// characters.
+ /// The name of the data catalog to update. The catalog name must be unique for the
+ /// Amazon Web Services account and can use a maximum of 128 alphanumeric, underscore, at
+ /// sign, or hyphen characters.
pub name: std::option::Option<std::string::String>,
- /// Specifies the type of data catalog to update. Specify LAMBDA for a
- /// federated catalog or HIVE for an external hive metastore.
- /// Do not use the GLUE type. This refers to the
- /// AwsDataCatalog that already exists in your account, of which you
- /// can have only one. Specifying the GLUE type will result in an
- /// INVALID_INPUT error.
+ /// Specifies the type of data catalog to update. Specify LAMBDA for a
+ /// federated catalog, HIVE for an external hive metastore, or
+ /// GLUE for a Glue Data Catalog.
pub r#type: std::option::Option<crate::model::DataCatalogType>,
/// New or modified text that describes the data catalog.
pub description: std::option::Option<std::string::String>,
- /// Specifies the Lambda function or functions to use for updating the data catalog. This
- /// is a mapping whose values depend on the catalog type.
+ /// Specifies the Lambda function or functions to use for updating the data
+ /// catalog. This is a mapping whose values depend on the catalog type.
///
/// For the HIVE data catalog type, use the following syntax. The
/// metadata-function parameter is required.
@@ -5603,9 +5591,9 @@ pub struct UpdateDataCatalogInput {
/// of required parameters, but not both.
- /// If you have one Lambda function that processes metadata and another
- /// for reading the actual data, use the following syntax. Both parameters
- /// are required.
+ /// If you have one Lambda function that processes metadata
+ /// and another for reading the actual data, use the following syntax. Both
+ /// parameters are required.
///
/// metadata-function=lambda_arn,
/// record-function=lambda_arn
@@ -5613,9 +5601,8 @@ pub struct UpdateDataCatalogInput {
///
- /// If you have a composite Lambda function that processes both metadata
- /// and data, use the following syntax to specify your Lambda
- /// function.
+ /// If you have a composite Lambda function that processes
+ /// both metadata and data, use the following syntax to specify your Lambda function.
///
/// function=lambda_arn
///
@@ -5659,11 +5646,10 @@ impl std::fmt::Debug for UntagResourceInput {
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct TagResourceInput {
- /// Specifies the ARN of the Athena resource (workgroup or data catalog) to which tags are
- /// to be added.
+ /// Specifies the ARN of the Athena resource (workgroup or data catalog) to
+ /// which tags are to be added.
pub resource_arn: std::option::Option<std::string::String>,
- /// A collection of one or more tags, separated by commas, to be added to an Athena
- /// workgroup or data catalog resource.
+ /// A collection of one or more tags, separated by commas, to be added to an Athena workgroup or data catalog resource.
pub tags: std::option::Option<std::vec::Vec<crate::model::Tag>>,
/// has changed, for example, the QueryString, an error is returned.
- /// This token is listed as not required because AWS SDKs (for example the AWS SDK for
- /// Java) auto-generate the token for users. If you are not using the AWS SDK or the AWS
- /// CLI, you must provide this token or the action will fail.
+ /// This token is listed as not required because Amazon Web Services SDKs (for example
+ /// the Amazon Web Services SDK for Java) auto-generate the token for users. If you are
+ /// not using the Amazon Web Services SDK or the Amazon Web Services CLI, you must provide
+ /// this token or the action will fail.
/// The database within which the query executes.
@@ -5729,9 +5716,9 @@ impl std::fmt::Debug for StartQueryExecutionInput {
 #[non_exhaustive]
 #[derive(std::clone::Clone, std::cmp::PartialEq)]
 pub struct ListWorkGroupsInput {
-    /// A token generated by the Athena service that specifies where to continue pagination if
-    /// a previous request was truncated. To obtain the next set of pages, pass in the
-    /// NextToken from the response object of the previous page call.
+    /// A token generated by the Athena service that specifies where to continue
+    /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+    /// the NextToken from the response object of the previous page call.
     /// The maximum number of workgroups to return in this request.
     pub max_results: std::option::Option<i32>,
     /// A regex filter that pattern-matches table names. If no expression is supplied,
     /// metadata for all tables are listed.
     pub expression: std::option::Option<std::string::String>,
-    /// A token generated by the Athena service that specifies where to continue pagination if
-    /// a previous request was truncated. To obtain the next set of pages, pass in the NextToken
-    /// from the response object of the previous page call.
+    /// A token generated by the Athena service that specifies where to continue
+    /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+    /// the NextToken from the response object of the previous page call.
     pub next_token: std::option::Option<std::string::String>,
     /// Specifies the maximum number of results to return.
     pub max_results: std::option::Option<i32>,
-    /// A token generated by the Athena service that specifies where to continue pagination if
-    /// a previous request was truncated. To obtain the next set of pages, pass in the
-    /// NextToken from the response object of the previous page call.
+    /// A token generated by the Athena service that specifies where to continue
+    /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+    /// the NextToken from the response object of the previous page call.
     /// The maximum number of query executions to return in this request.
     pub max_results: std::option::Option<i32>,
     /// The workgroup to list the prepared statements for.
     pub work_group: std::option::Option<std::string::String>,
-    /// A token generated by the Athena service that specifies where to continue pagination if
-    /// a previous request was truncated. To obtain the next set of pages, pass in the
-    /// NextToken from the response object of the previous page call.
+    /// A token generated by the Athena service that specifies where to continue
+    /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+    /// the NextToken from the response object of the previous page call.
     /// The maximum number of results to return in this request.
     pub max_results: std::option::Option<i32>,
-    /// A token generated by the Athena service that specifies where to continue pagination if
-    /// a previous request was truncated. To obtain the next set of pages, pass in the
-    /// NextToken from the response object of the previous page call.
+    /// A token generated by the Athena service that specifies where to continue
+    /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+    /// the NextToken from the response object of the previous page call.
     /// The maximum number of queries to return in this request.
     pub max_results: std::option::Option<i32>,
-    /// A token generated by the Athena service that specifies where to continue pagination if
-    /// a previous request was truncated. To obtain the next set of pages, pass in the
-    /// NextToken from the response object of the previous page call.
+    /// A token generated by the Athena service that specifies where to continue
+    /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+    /// the NextToken from the response object of the previous page call.
     /// The maximum number of engine versions to return in this request.
     pub max_results: std::option::Option<i32>,
-    /// A token generated by the Athena service that specifies where to continue pagination if
-    /// a previous request was truncated. To obtain the next set of pages, pass in the NextToken
-    /// from the response object of the previous page call.
+    /// A token generated by the Athena service that specifies where to continue
+    /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+    /// the NextToken from the response object of the previous page call.
     pub next_token: std::option::Option<std::string::String>,
     /// Specifies the maximum number of data catalogs to return.
     pub max_results: std::option::Option<i32>,
     /// The name of the data catalog that contains the databases to return.
     pub catalog_name: std::option::Option<std::string::String>,
-    /// A token generated by the Athena service that specifies where to continue pagination if
-    /// a previous request was truncated. To obtain the next set of pages, pass in the
-    /// NextToken from the response object of the previous page call.
+    /// A token generated by the Athena service that specifies where to continue
+    /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+    /// the NextToken from the response object of the previous page call.
     /// Specifies the maximum number of results to return.
     pub max_results: std::option::Option<i32>,
     /// The unique ID of the query execution.
     pub query_execution_id: std::option::Option<std::string::String>,
-    /// A token generated by the Athena service that specifies where to continue pagination if
-    /// a previous request was truncated. To obtain the next set of pages, pass in the
-    /// NextToken from the response object of the previous page call.
+    /// A token generated by the Athena service that specifies where to continue
+    /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+    /// the NextToken from the response object of the previous page call.
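Since this NextToken contract repeats across these list and get-results inputs, one paging sketch covers them all (fluent `Client` assumed; the execution ID is a placeholder):

```rust
// Sketch only: drain a paginated response by threading NextToken back in.
async fn fetch_all_rows(
    client: &aws_sdk_athena::Client,
    query_execution_id: &str,
) -> Result<Vec<aws_sdk_athena::model::Row>, Box<dyn std::error::Error>> {
    let mut rows = Vec::new();
    let mut next_token: Option<String> = None;
    loop {
        let resp = client
            .get_query_results()
            .query_execution_id(query_execution_id)
            .set_next_token(next_token.take())
            .send()
            .await?;
        if let Some(result_set) = resp.result_set {
            rows.extend(result_set.rows.unwrap_or_default());
        }
        next_token = resp.next_token;
        if next_token.is_none() {
            break;
        }
    }
    Ok(rows)
}
```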
     /// The maximum number of results (rows) to return in this request.
     pub max_results: std::option::Option<i32>,
     /// The workgroup name.
     pub name: std::option::Option<std::string::String>,
-    /// The configuration for the workgroup, which includes the location in Amazon S3 where
-    /// query results are stored, the encryption configuration, if any, used for encrypting
-    /// query results, whether the Amazon CloudWatch Metrics are enabled for the workgroup, the
-    /// limit for the amount of bytes scanned (cutoff) per query, if it is specified, and
-    /// whether the workgroup's settings (specified with EnforceWorkGroupConfiguration) in the
-    /// WorkGroupConfiguration override client-side settings. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.
+    /// The configuration for the workgroup, which includes the location in Amazon S3
+    /// where query results are stored, the encryption configuration, if any, used for
+    /// encrypting query results, whether the Amazon CloudWatch Metrics are enabled for the
+    /// workgroup, the limit for the amount of bytes scanned (cutoff) per query, if it is
+    /// specified, and whether the workgroup's settings (specified with
+    /// EnforceWorkGroupConfiguration) in the
+    /// WorkGroupConfiguration override client-side settings. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.
     /// The workgroup description.
     pub description: std::option::Option<std::string::String>,
     /// If a parameter has changed, such as the QueryString, an error is returned.
-    /// This token is listed as not required because AWS SDKs (for example the AWS SDK for
-    /// Java) auto-generate the token for users. If you are not using the AWS SDK or the AWS
-    /// CLI, you must provide this token or the action will fail.
+    /// This token is listed as not required because Amazon Web Services SDKs (for example
+    /// the Amazon Web Services SDK for Java) auto-generate the token for users. If you are
+    /// not using the Amazon Web Services SDK or the Amazon Web Services CLI, you must provide
+    /// this token or the action will fail.
     /// The name of the workgroup in which the named query is being created.
@@ -6215,23 +6204,18 @@ impl std::fmt::Debug for CreateNamedQueryInput {
 #[non_exhaustive]
 #[derive(std::clone::Clone, std::cmp::PartialEq)]
 pub struct CreateDataCatalogInput {
-    /// The name of the data catalog to create. The catalog name must be unique for the AWS
-    /// account and can use a maximum of 128 alphanumeric, underscore, at sign, or hyphen
-    /// characters.
+    /// The name of the data catalog to create. The catalog name must be unique for the
+    /// Amazon Web Services account and can use a maximum of 128 alphanumeric, underscore, at
+    /// sign, or hyphen characters.
     pub name: std::option::Option<std::string::String>,
-    /// The type of data catalog to create: LAMBDA for a federated catalog or
-    /// HIVE for an external hive metastore.
-    /// Do not use the GLUE type. This refers to the
-    /// AwsDataCatalog that already exists in your account, of which you
-    /// can have only one. Specifying the GLUE type will result in an
-    /// INVALID_INPUT error.
+    /// The type of data catalog to create: LAMBDA for a federated catalog,
+    /// HIVE for an external hive metastore, or GLUE for a
+    /// Glue Data Catalog.
     /// A description of the data catalog to be created.
     pub description: std::option::Option<std::string::String>,
-    /// Specifies the Lambda function or functions to use for creating the data catalog. This
-    /// is a mapping whose values depend on the catalog type.
+    /// Specifies the Lambda function or functions to use for creating the data
+    /// catalog. This is a mapping whose values depend on the catalog type.
     /// For the HIVE data catalog type, use the following syntax. The
@@ -6249,9 +6233,9 @@ pub struct CreateDataCatalogInput {
/// of required parameters, but not both.
-    /// If you have one Lambda function that processes metadata and another
-    /// for reading the actual data, use the following syntax. Both parameters
-    /// are required.
+    /// If you have one Lambda function that processes metadata
+    /// and another for reading the actual data, use the following syntax. Both
+    /// parameters are required.
///
/// metadata-function=lambda_arn,
/// record-function=lambda_arn
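A sketch of the two-parameter registration described above (fluent `Client`, placeholder names and ARNs assumed):

```rust
use aws_sdk_athena::model::DataCatalogType;

// Sketch only: a federated (LAMBDA) catalog with separate metadata and
// record functions, per the metadata-function/record-function syntax above.
async fn create_lambda_catalog(client: &aws_sdk_athena::Client) -> Result<(), Box<dyn std::error::Error>> {
    client
        .create_data_catalog()
        .name("my_federated_catalog")
        .r#type(DataCatalogType::Lambda)
        .parameters("metadata-function", "arn:aws:lambda:us-east-1:111122223333:function:meta")
        .parameters("record-function", "arn:aws:lambda:us-east-1:111122223333:function:records")
        .send()
        .await?;
    Ok(())
}
```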
@@ -6259,9 +6243,8 @@ pub struct CreateDataCatalogInput {
///
-    /// If you have a composite Lambda function that processes both metadata
-    /// and data, use the following syntax to specify your Lambda
-    /// function.
+    /// If you have a composite Lambda function that processes
+    /// both metadata and data, use the following syntax to specify your Lambda function.
///
/// function=lambda_arn
///
@@ -6269,6 +6252,35 @@ pub struct CreateDataCatalogInput {
     ///
+    /// The GLUE type takes a catalog ID parameter and is required. The
+    /// catalog_id is the account ID of the
+    /// Amazon Web Services account to which the Glue Data Catalog
+    /// belongs.
+    /// catalog-id=catalog_id
+    ///
+    /// The GLUE data catalog type also applies to the default
+    /// AwsDataCatalog that already exists in your account, of
+    /// which you can have only one and cannot modify.
+    /// Queries that specify a Glue Data Catalog other than the default
+    /// AwsDataCatalog must be run on Athena engine
+    /// version 2.
+    /// In Regions where Athena engine version 2 is not available,
+    /// creating new Glue data catalogs results in an
+    /// INVALID_INPUT error.
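And a sketch of the GLUE registration using the catalog-id parameter just documented (the account ID is a placeholder; per the note above, queries against it must run on Athena engine version 2):

```rust
use aws_sdk_athena::model::DataCatalogType;

// Sketch only: register a Glue Data Catalog owned by another account.
async fn create_glue_catalog(client: &aws_sdk_athena::Client) -> Result<(), Box<dyn std::error::Error>> {
    client
        .create_data_catalog()
        .name("cross_account_glue")
        .r#type(DataCatalogType::Glue)
        .parameters("catalog-id", "111122223333")
        .send()
        .await?;
    Ok(())
}
```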
-//! Amazon Athena is an interactive query service that lets you use standard SQL to
-//! analyze data directly in Amazon S3. You can point Athena at your data in Amazon S3 and
-//! run ad-hoc queries and get results in seconds. Athena is serverless, so there is no
-//! infrastructure to set up or manage. You pay only for the queries you run. Athena scales
-//! automatically—executing queries in parallel—so results are fast, even with large
-//! datasets and complex queries. For more information, see What is Amazon
-//! Athena in the Amazon Athena User Guide.
-//! If you connect to Athena using the JDBC driver, use version 1.1.0 of the driver or
-//! later with the Amazon Athena API. Earlier version drivers do not support the API. For
-//! more information and to download the driver, see Accessing
+//! Amazon Athena is an interactive query service that lets you use standard SQL
+//! to analyze data directly in Amazon S3. You can point Athena at your
+//! data in Amazon S3 and run ad-hoc queries and get results in seconds. Athena is
+//! serverless, so there is no infrastructure to set up or manage. You pay
+//! only for the queries you run. Athena scales automatically—executing queries
+//! in parallel—so results are fast, even with large datasets and complex queries. For more
+//! information, see What is Amazon Athena in the Amazon Athena User
+//! Guide.
+//! If you connect to Athena using the JDBC driver, use version 1.1.0 of the
+//! driver or later with the Amazon Athena API. Earlier version drivers do not
+//! support the API. For more information and to download the driver, see Accessing
 //! Amazon Athena with JDBC.
-//! For code samples using the AWS SDK for Java, see Examples and
-//! Code Samples in the Amazon Athena User Guide.
+//! For code samples using the Amazon Web Services SDK for Java, see Examples and
+//! Code Samples in the Amazon Athena User
+//! Guide.
-/// The configuration information that will be updated for this workgroup, which includes
-/// the location in Amazon S3 where query results are stored, the encryption option, if any,
-/// used for query results, whether the Amazon CloudWatch Metrics are enabled for the
-/// workgroup, whether the workgroup settings override the client-side settings, and the
-/// data usage limit for the amount of bytes scanned per query, if it is specified.
-    /// Indicates whether this workgroup enables publishing metrics to Amazon
-    /// CloudWatch.
+    /// Indicates whether this workgroup enables publishing metrics to Amazon CloudWatch.
     pub publish_cloud_watch_metrics_enabled: std::option::Option<bool>,
     /// The upper limit (cutoff) for the amount of bytes a single query in a workgroup is
     /// allowed to scan.
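For orientation, a sketch of driving the builder shown in this hunk (values are illustrative only):

```rust
use aws_sdk_athena::model::WorkGroupConfigurationUpdates;

// Sketch only: enable CloudWatch metrics and cap per-query data scanned.
fn example_updates() -> WorkGroupConfigurationUpdates {
    WorkGroupConfigurationUpdates::builder()
        .publish_cloud_watch_metrics_enabled(true)
        .bytes_scanned_cutoff_per_query(10_000_000_000) // 10 GB cutoff per query
        .requester_pays_enabled(false)
        .build()
}
```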
@@ -73,14 +73,16 @@ pub struct WorkGroupConfigurationUpdates {
     /// Indicates that the data usage control limit per query is removed. WorkGroupConfiguration$BytesScannedCutoffPerQuery
     ///
     pub remove_bytes_scanned_cutoff_per_query: std::option::Option<bool>,
-    /// If set to true, allows members assigned to a workgroup to specify Amazon
-    /// S3 Requester Pays buckets in queries. If set to false, workgroup members
-    /// cannot query data from Requester Pays buckets, and queries that retrieve data from
-    /// Requester Pays buckets cause an error. The default is false. For more
+    /// If set to true, allows members assigned to a workgroup to specify Amazon S3 Requester Pays buckets in queries. If set to false, workgroup
+    /// members cannot query data from Requester Pays buckets, and queries that retrieve data
+    /// from Requester Pays buckets cause an error. The default is false. For more
/// information about Requester Pays buckets, see Requester Pays Buckets
/// in the Amazon Simple Storage Service Developer Guide.
-    /// The engine version requested when a workgroup is updated. After the update, all queries on the workgroup run on the requested engine version. If no value was previously set, the default is Auto. Queries on the AmazonAthenaPreviewFunctionality
-    /// workgroup run on the preview engine regardless of this setting.
+    /// The engine version requested when a workgroup is updated. After the update, all
+    /// queries on the workgroup run on the requested engine version. If no value was previously
+    /// set, the default is Auto. Queries on the AmazonAthenaPreviewFunctionality
+    /// workgroup run on the preview engine regardless of this setting.
-    /// Indicates whether this workgroup enables publishing metrics to Amazon
-    /// CloudWatch.
+    /// Indicates whether this workgroup enables publishing metrics to Amazon CloudWatch.
     pub fn publish_cloud_watch_metrics_enabled(mut self, input: bool) -> Self {
         self.publish_cloud_watch_metrics_enabled = Some(input);
         self
@@ -196,10 +197,9 @@ pub mod work_group_configuration_updates {
         self.remove_bytes_scanned_cutoff_per_query = input;
         self
     }
-    /// If set to true, allows members assigned to a workgroup to specify Amazon
-    /// S3 Requester Pays buckets in queries. If set to false, workgroup members
-    /// cannot query data from Requester Pays buckets, and queries that retrieve data from
-    /// Requester Pays buckets cause an error. The default is false. For more
+    /// If set to true, allows members assigned to a workgroup to specify Amazon S3 Requester Pays buckets in queries. If set to false, workgroup
+    /// members cannot query data from Requester Pays buckets, and queries that retrieve data
+    /// from Requester Pays buckets cause an error. The default is false. For more
/// information about Requester Pays buckets, see Requester Pays Buckets
/// in the Amazon Simple Storage Service Developer Guide.
-    /// The engine version requested when a workgroup is updated. After the update, all queries on the workgroup run on the requested engine version. If no value was previously set, the default is Auto. Queries on the AmazonAthenaPreviewFunctionality
-    /// workgroup run on the preview engine regardless of this setting.
+    /// The engine version requested when a workgroup is updated. After the update, all
+    /// queries on the workgroup run on the requested engine version. If no value was previously
+    /// set, the default is Auto. Queries on the AmazonAthenaPreviewFunctionality
+    /// workgroup run on the preview engine regardless of this setting.
-    /// The engine version requested by the user. Possible values are determined by the output of ListEngineVersions, including Auto. The default is Auto.
+    /// The engine version requested by the user. Possible values are determined by the output
+    /// of ListEngineVersions, including Auto. The default is Auto.
-    /// Read only. The engine version on which the query runs. If the user requests
-    /// a valid engine version other than Auto, the effective engine version is the same as the
-    /// engine version that the user requested. If the user requests Auto, the effective engine version is chosen by Athena. When a request to update the engine version is made by a CreateWorkGroup or UpdateWorkGroup operation, the
-    /// EffectiveEngineVersion field is ignored.
+    /// Read only. The engine version on which the query runs. If the user requests a valid
+    /// engine version other than Auto, the effective engine version is the same as the engine
+    /// version that the user requested. If the user requests Auto, the effective engine version
+    /// is chosen by Athena. When a request to update the engine version is made by
+    /// a CreateWorkGroup or UpdateWorkGroup operation, the
+    /// EffectiveEngineVersion field is ignored.
-    /// The engine version requested by the user. Possible values are determined by the output of ListEngineVersions, including Auto. The default is Auto.
+    /// The engine version requested by the user. Possible values are determined by the output
+    /// of ListEngineVersions, including Auto. The default is Auto.
-    /// Read only. The engine version on which the query runs. If the user requests
-    /// a valid engine version other than Auto, the effective engine version is the same as the
-    /// engine version that the user requested. If the user requests Auto, the effective engine version is chosen by Athena. When a request to update the engine version is made by a CreateWorkGroup or UpdateWorkGroup operation, the
-    /// EffectiveEngineVersion field is ignored.
+    /// Read only. The engine version on which the query runs. If the user requests a valid
+    /// engine version other than Auto, the effective engine version is the same as the engine
+    /// version that the user requested. If the user requests Auto, the effective engine version
+    /// is chosen by Athena. When a request to update the engine version is made by
+    /// a CreateWorkGroup or UpdateWorkGroup operation, the
+    /// EffectiveEngineVersion field is ignored.
     /// s3://path/to/query/bucket/. For more information, see Query Results. If
     /// workgroup settings override client-side settings, then the query uses the location for
     /// the query results and the encryption configuration that are specified for the workgroup.
-    /// The "workgroup settings override" is specified in EnforceWorkGroupConfiguration
-    /// (true/false) in the WorkGroupConfiguration. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.
+    /// The "workgroup settings override" is specified in
+    /// EnforceWorkGroupConfiguration (true/false) in the
+    /// WorkGroupConfiguration. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.
     pub output_location: std::option::Option<std::string::String>,
     /// If set to "true", indicates that the previously-specified query results location (also
     /// known as a client-side setting) for queries in this workgroup should be ignored and set
-    /// to null. If set to "false" or not set, and a value is present in the OutputLocation in
-    /// ResultConfigurationUpdates (the client-side setting), the OutputLocation in the
-    /// workgroup's ResultConfiguration will be updated with the new value. For more
+    /// to null. If set to "false" or not set, and a value is present in the
+    /// OutputLocation in ResultConfigurationUpdates (the
+    /// client-side setting), the OutputLocation in the workgroup's
+    /// ResultConfiguration will be updated with the new value. For more
/// information, see Workgroup Settings Override
/// Client-Side Settings.
     /// If set to "true", indicates that the previously-specified encryption configuration
     /// (also known as the client-side setting) for queries in this workgroup should be ignored
     /// and set to null. If set to "false" or not set, and a value is present in the
-    /// EncryptionConfiguration in ResultConfigurationUpdates (the client-side setting), the
-    /// EncryptionConfiguration in the workgroup's ResultConfiguration will be updated with the
-    /// new value. For more information, see Workgroup Settings Override
+    /// EncryptionConfiguration in ResultConfigurationUpdates (the
+    /// client-side setting), the EncryptionConfiguration in the workgroup's
+    /// ResultConfiguration will be updated with the new value. For more
+    /// information, see Workgroup Settings Override
/// Client-Side Settings.
     /// s3://path/to/query/bucket/. For more information, see Query Results. If
     /// workgroup settings override client-side settings, then the query uses the location for
     /// the query results and the encryption configuration that are specified for the workgroup.
-    /// The "workgroup settings override" is specified in EnforceWorkGroupConfiguration
-    /// (true/false) in the WorkGroupConfiguration. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.
+    /// The "workgroup settings override" is specified in
+    /// EnforceWorkGroupConfiguration (true/false) in the
+    /// WorkGroupConfiguration. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.
     pub fn output_location(mut self, input: impl Into<std::string::String>) -> Self {
     /// If set to "true", indicates that the previously-specified query results location (also
     /// known as a client-side setting) for queries in this workgroup should be ignored and set
-    /// to null. If set to "false" or not set, and a value is present in the OutputLocation in
-    /// ResultConfigurationUpdates (the client-side setting), the OutputLocation in the
-    /// workgroup's ResultConfiguration will be updated with the new value. For more
+    /// to null. If set to "false" or not set, and a value is present in the
+    /// OutputLocation in ResultConfigurationUpdates (the
+    /// client-side setting), the OutputLocation in the workgroup's
+    /// ResultConfiguration will be updated with the new value. For more
/// information, see Workgroup Settings Override
/// Client-Side Settings.
     /// If set to "true", indicates that the previously-specified encryption configuration
     /// (also known as the client-side setting) for queries in this workgroup should be ignored
     /// and set to null. If set to "false" or not set, and a value is present in the
-    /// EncryptionConfiguration in ResultConfigurationUpdates (the client-side setting), the
-    /// EncryptionConfiguration in the workgroup's ResultConfiguration will be updated with the
-    /// new value. For more information, see Workgroup Settings Override
+    /// EncryptionConfiguration in ResultConfigurationUpdates (the
+    /// client-side setting), the EncryptionConfiguration in the workgroup's
+    /// ResultConfiguration will be updated with the new value. For more
+    /// information, see Workgroup Settings Override
/// Client-Side Settings.
-/// If query results are encrypted in Amazon S3, indicates the encryption option used (for
-/// example, SSE-KMS or CSE-KMS) and key information.
+/// If query results are encrypted in Amazon S3, indicates the encryption option
+/// used (for example, SSE-KMS or CSE-KMS) and key
+/// information.
-    /// Indicates whether Amazon S3 server-side encryption with Amazon S3-managed keys
-    /// (SSE-S3), server-side encryption with KMS-managed keys
+    /// Indicates whether Amazon S3 server-side encryption with Amazon S3-managed keys (SSE-S3), server-side encryption with KMS-managed keys
     /// (SSE-KMS), or client-side encryption with KMS-managed keys (CSE-KMS) is
     /// used.
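A sketch of building that configuration for SSE-KMS (the key ARN is a placeholder):

```rust
use aws_sdk_athena::model::{EncryptionConfiguration, EncryptionOption};

// Sketch only: SSE-KMS with an explicit key for query-result encryption.
fn example_encryption() -> EncryptionConfiguration {
    EncryptionConfiguration::builder()
        .encryption_option(EncryptionOption::SseKms)
        .kms_key("arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab")
        .build()
}
```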
If a query runs in a workgroup and the workgroup overrides client-side settings, then
@@ -491,8 +506,7 @@ pub mod encryption_configuration {
     pub(crate) kms_key: std::option::Option<std::string::String>,
-    /// Indicates whether Amazon S3 server-side encryption with Amazon S3-managed keys
-    /// (SSE-S3), server-side encryption with KMS-managed keys
+    /// Indicates whether Amazon S3 server-side encryption with Amazon S3-managed keys (SSE-S3), server-side encryption with KMS-managed keys
     /// If a query runs in a workgroup and the workgroup overrides client-side settings, then
@@ -641,12 +655,12 @@ impl AsRef<str>
-/// A label that you assign to a resource. In Athena, a resource can be a workgroup or
-/// data catalog. Each tag consists of a key and an optional value, both of which you
-/// define. For example, you can use tags to categorize Athena workgroups or data catalogs
-/// by purpose, owner, or environment. Use a consistent set of tag keys to make it easier to
-/// search and filter workgroups or data catalogs in your account. For best practices, see
-/// Tagging Best Practices. Tag keys can be from 1 to 128 UTF-8 Unicode
+/// A label that you assign to a resource. In Athena, a resource can be a
+/// workgroup or data catalog. Each tag consists of a key and an optional value, both of
+/// which you define. For example, you can use tags to categorize Athena
+/// workgroups or data catalogs by purpose, owner, or environment. Use a consistent set of
+/// tag keys to make it easier to search and filter workgroups or data catalogs in your
+/// account. For best practices, see Tagging Best Practices. Tag keys can be from 1 to 128 UTF-8 Unicode
 /// characters, and tag values can be from 0 to 256 UTF-8 Unicode characters. Tags can use
 /// letters and numbers representable in UTF-8, and the following characters: + - = . _ : /
 /// @. Tag keys and values are case-sensitive. Tag keys must be unique per resource. If you
@@ -768,9 +782,9 @@ impl AsRef<str>
-/// The location in Amazon S3 where query results are stored and the encryption option, if
-/// any, used for query results. These are known as "client-side settings". If workgroup
-/// settings override client-side settings, then the query uses the workgroup
+/// The location in Amazon S3 where query results are stored and the encryption
+/// option, if any, used for query results. These are known as "client-side settings". If
+/// workgroup settings override client-side settings, then the query uses the workgroup
/// settings. The location in Amazon S3 where your query results are stored, such as
/// If query results are encrypted in Amazon S3, indicates the encryption option used (for
- /// example, If query results are encrypted in Amazon S3, indicates the encryption option
+ /// used (for example, The location in Amazon S3 where your query results are stored, such as
/// If query results are encrypted in Amazon S3, indicates the encryption option used (for
- /// example, If query results are encrypted in Amazon S3, indicates the encryption option
+ /// used (for example, The name of the database used in the query execution. The name of the database used in the query execution. The database must exist in the catalog. The name of the data catalog used in the query execution. The name of the database used in the query execution. The name of the database used in the query execution. The database must exist in the catalog. The workgroup creation date and time. The engine version setting for all queries on the workgroup. Queries on the The engine version setting for all queries on the workgroup. Queries on the
+ /// The engine version setting for all queries on the workgroup. Queries on the The engine version setting for all queries on the workgroup. Queries on the
+ /// The last time the table was accessed. The type of table. In Athena, only The type of table. In Athena, only A list of the columns in the table. The type of table. In Athena, only The type of table. In Athena, only The state of the workgroup: ENABLED or DISABLED. The configuration of the workgroup, which includes the location in Amazon S3 where
- /// query results are stored, the encryption configuration, if any, used for query results;
- /// whether the Amazon CloudWatch Metrics are enabled for the workgroup; whether workgroup
- /// settings override client-side settings; and the data usage limits for the amount of data
- /// scanned per query or per workgroup. The workgroup settings override is specified in
- /// EnforceWorkGroupConfiguration (true/false) in the WorkGroupConfiguration. See WorkGroupConfiguration$EnforceWorkGroupConfiguration. The configuration of the workgroup, which includes the location in Amazon S3
+ /// where query results are stored, the encryption configuration, if any, used for query
+ /// results; whether the Amazon CloudWatch Metrics are enabled for the workgroup;
+ /// whether workgroup settings override client-side settings; and the data usage limits for
+ /// the amount of data scanned per query or per workgroup. The workgroup settings override
+ /// is specified in The workgroup description. The configuration of the workgroup, which includes the location in Amazon S3 where
- /// query results are stored, the encryption configuration, if any, used for query results;
- /// whether the Amazon CloudWatch Metrics are enabled for the workgroup; whether workgroup
- /// settings override client-side settings; and the data usage limits for the amount of data
- /// scanned per query or per workgroup. The workgroup settings override is specified in
- /// EnforceWorkGroupConfiguration (true/false) in the WorkGroupConfiguration. See WorkGroupConfiguration$EnforceWorkGroupConfiguration. The configuration of the workgroup, which includes the location in Amazon S3
+ /// where query results are stored, the encryption configuration, if any, used for query
+ /// results; whether the Amazon CloudWatch Metrics are enabled for the workgroup;
+ /// whether workgroup settings override client-side settings; and the data usage limits for
+ /// the amount of data scanned per query or per workgroup. The workgroup settings override
+ /// is specified in The configuration of the workgroup, which includes the location in Amazon S3 where
-/// query results are stored, the encryption option, if any, used for query results, whether
-/// the Amazon CloudWatch Metrics are enabled for the workgroup and whether workgroup
-/// settings override query settings, and the data usage limits for the amount of data
-/// scanned per query or per workgroup. The workgroup settings override is specified in
-/// EnforceWorkGroupConfiguration (true/false) in the WorkGroupConfiguration. See WorkGroupConfiguration$EnforceWorkGroupConfiguration. The configuration of the workgroup, which includes the location in Amazon S3
+/// where query results are stored, the encryption option, if any, used for query results,
+/// whether the Amazon CloudWatch Metrics are enabled for the workgroup and whether
+/// workgroup settings override query settings, and the data usage limits for the amount of
+/// data scanned per query or per workgroup. The workgroup settings override is specified in
+/// The configuration for the workgroup, which includes the location in Amazon S3 where
- /// query results are stored and the encryption option, if any, used for query results. To
- /// run the query, you must specify the query results location using one of the ways: either
- /// in the workgroup using this setting, or for individual queries (client-side), using
- /// ResultConfiguration$OutputLocation. If none of them is set, Athena
- /// issues an error that no output location is provided. For more information, see Query
- /// Results. The configuration for the workgroup, which includes the location in Amazon S3
+ /// where query results are stored and the encryption option, if any, used for query
+ /// results. To run the query, you must specify the query results location using one of the
+ /// ways: either in the workgroup using this setting, or for individual queries
+ /// (client-side), using ResultConfiguration$OutputLocation. If none of
+ /// them is set, Athena issues an error that no output location is provided. For
+ /// more information, see Query Results. If set to "true", the settings for the workgroup override client-side settings. If set
/// to "false", client-side settings are used. For more information, see Workgroup Settings Override Client-Side Settings.SSE-S3
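A sketch tying that enforce flag to a workgroup result location (the bucket is a placeholder):

```rust
use aws_sdk_athena::model::{ResultConfiguration, WorkGroupConfiguration};

// Sketch only: a workgroup whose result location overrides client-side
// settings via EnforceWorkGroupConfiguration.
fn example_config() -> WorkGroupConfiguration {
    WorkGroupConfiguration::builder()
        .result_configuration(
            ResultConfiguration::builder()
                .output_location("s3://path/to/query/bucket/")
                .build(),
        )
        .enforce_work_group_configuration(true)
        .build()
}
```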
), server-side encryption with KMS-managed keys
+ /// SSE-S3
), server-side encryption with KMS-managed keys
/// (SSE-KMS
), or client-side encryption with KMS-managed keys (CSE-KMS) is
/// used.s3://path/to/query/bucket/
. To run the query, you must specify the
/// query results location using one of the ways: either for individual queries using either
- /// this setting (client-side), or in the workgroup, using WorkGroupConfiguration. If none of them is set, Athena issues an error
- /// that no output location is provided. For more information, see Query Results. If
+ /// this setting (client-side), or in the workgroup, using WorkGroupConfiguration. If none of them is set, Athena
+ /// issues an error that no output location is provided. For more information, see Query Results. If
/// workgroup settings override client-side settings, then the query uses the settings
/// specified for the workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.SSE-KMS
or CSE-KMS
) and key information. This is a
- /// client-side setting. If workgroup settings override client-side settings, then the query
- /// uses the encryption configuration that is specified for the workgroup, and also uses the
- /// location for storing query results specified in the workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration and Workgroup Settings Override Client-Side Settings.SSE-KMS
or CSE-KMS
) and key information.
+ /// This is a client-side setting. If workgroup settings override client-side settings, then
+ /// the query uses the encryption configuration that is specified for the workgroup, and
+ /// also uses the location for storing query results specified in the workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration and Workgroup Settings Override Client-Side Settings.s3://path/to/query/bucket/
. To run the query, you must specify the
/// query results location using one of the ways: either for individual queries using either
- /// this setting (client-side), or in the workgroup, using WorkGroupConfiguration. If none of them is set, Athena issues an error
- /// that no output location is provided. For more information, see Query Results. If
+ /// this setting (client-side), or in the workgroup, using WorkGroupConfiguration. If none of them is set, Athena
+ /// issues an error that no output location is provided. For more information, see Query Results. If
/// workgroup settings override client-side settings, then the query uses the settings
/// specified for the workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.SSE-KMS
or CSE-KMS
) and key information. This is a
- /// client-side setting. If workgroup settings override client-side settings, then the query
- /// uses the encryption configuration that is specified for the workgroup, and also uses the
- /// location for storing query results specified in the workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration and Workgroup Settings Override Client-Side Settings.SSE-KMS
or CSE-KMS
) and key information.
+ /// This is a client-side setting. If workgroup settings override client-side settings, then
+ /// the query uses the encryption configuration that is specified for the workgroup, and
+ /// also uses the location for storing query results specified in the workgroup. See WorkGroupConfiguration$EnforceWorkGroupConfiguration and Workgroup Settings Override Client-Side Settings.AmazonAthenaPreviewFunctionality
workgroup run on the preview engine regardless of this setting.AmazonAthenaPreviewFunctionality
workgroup run on the preview engine
+ /// regardless of this setting.AmazonAthenaPreviewFunctionality
workgroup run on the preview engine regardless of this setting.AmazonAthenaPreviewFunctionality
workgroup run on the preview engine
+ /// regardless of this setting.EXTERNAL_TABLE
is supported.EXTERNAL_TABLE
is
+ /// supported.EXTERNAL_TABLE
is supported.EXTERNAL_TABLE
is
+ /// supported.EnforceWorkGroupConfiguration
+/// (true/false) in the WorkGroupConfiguration
. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.EnforceWorkGroupConfiguration
(true/false) in the
+ /// WorkGroupConfiguration
. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.EnforceWorkGroupConfiguration
(true/false) in the
+ /// WorkGroupConfiguration
. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.EnforceWorkGroupConfiguration
(true/false) in the
+/// WorkGroupConfiguration
. See WorkGroupConfiguration$EnforceWorkGroupConfiguration.
If set to true
, allows members assigned to a workgroup to reference
- /// Amazon S3 Requester Pays buckets in queries. If set to false
, workgroup
- /// members cannot query data from Requester Pays buckets, and queries that retrieve data
- /// from Requester Pays buckets cause an error. The default is false
. For more
- /// information about Requester Pays buckets, see Requester Pays Buckets
- /// in the Amazon Simple Storage Service Developer Guide.
false
,
+ /// workgroup members cannot query data from Requester Pays buckets, and queries that
+ /// retrieve data from Requester Pays buckets cause an error. The default is
+ /// false
. For more information about Requester Pays buckets, see Requester
+ /// Pays Buckets in the Amazon Simple Storage Service Developer
+ /// Guide.
     pub requester_pays_enabled: std::option::Option<bool>,
-    /// The engine version that all queries running on
-    /// the workgroup use. Queries on the AmazonAthenaPreviewFunctionality
-    /// workgroup run on the preview engine regardless of this setting.
+    /// The engine version that all queries running on the workgroup use. Queries on the
+    /// AmazonAthenaPreviewFunctionality workgroup run on the preview engine
+    /// regardless of this setting.
-    /// The configuration for the workgroup, which includes the location in Amazon S3 where
-    /// query results are stored and the encryption option, if any, used for query results. To
-    /// run the query, you must specify the query results location using one of the ways: either
-    /// in the workgroup using this setting, or for individual queries (client-side), using
-    /// ResultConfiguration$OutputLocation. If none of them is set, Athena
-    /// issues an error that no output location is provided. For more information, see Query
-    /// Results.
+    /// The configuration for the workgroup, which includes the location in Amazon S3
+    /// where query results are stored and the encryption option, if any, used for query
+    /// results. To run the query, you must specify the query results location using one of the
+    /// ways: either in the workgroup using this setting, or for individual queries
+    /// (client-side), using ResultConfiguration$OutputLocation. If none of
+    /// them is set, Athena issues an error that no output location is provided. For
+    /// more information, see Query Results.
     pub fn result_configuration(mut self, input: crate::model::ResultConfiguration) -> Self {
         self.result_configuration = Some(input);
         self
@@ -1745,11 +1770,12 @@ pub mod work_group_configuration {
         self
     }
-    /// If set to true, allows members assigned to a workgroup to reference
-    /// Amazon S3 Requester Pays buckets in queries. If set to false, workgroup
-    /// members cannot query data from Requester Pays buckets, and queries that retrieve data
-    /// from Requester Pays buckets cause an error. The default is false. For more
-    /// information about Requester Pays buckets, see Requester Pays Buckets
-    /// in the Amazon Simple Storage Service Developer Guide.
+    /// If set to true, allows members assigned to a workgroup to reference Amazon S3 Requester Pays buckets in queries. If set to false,
+    /// workgroup members cannot query data from Requester Pays buckets, and queries that
+    /// retrieve data from Requester Pays buckets cause an error. The default is
+    /// false. For more information about Requester Pays buckets, see Requester
+    /// Pays Buckets in the Amazon Simple Storage Service Developer
+    /// Guide.
pub fn requester_pays_enabled(mut self, input: bool) -> Self {
self.requester_pays_enabled = Some(input);
self
@@ -1758,8 +1784,9 @@ pub mod work_group_configuration {
self.requester_pays_enabled = input;
self
}
-    /// The engine version that all queries running on
-    /// the workgroup use. Queries on the AmazonAthenaPreviewFunctionality
-    /// workgroup run on the preview engine regardless of this setting.
+    /// The engine version that all queries running on the workgroup use. Queries on the
+    /// AmazonAthenaPreviewFunctionality workgroup run on the preview engine
+    /// regardless of this setting.
     /// DML indicates DML (Data Manipulation Language) query
     /// statements, such as CREATE TABLE AS SELECT. UTILITY indicates
     /// query statements other than DDL and DML, such as SHOW CREATE TABLE, or
-    /// DESCRIBE .
+    /// DESCRIBE TABLE.
     pub statement_type: std::option::Option<crate::model::StatementType>,
- /// The location in Amazon S3 where query results were stored and the encryption option,
- /// if any, used for query results. These are known as "client-side settings". If workgroup
- /// settings override client-side settings, then the query uses the location for the query
- /// results and the encryption configuration that are specified for the workgroup.
+ /// The location in Amazon S3 where query results were stored and the encryption
+ /// option, if any, used for query results. These are known as "client-side settings". If
+ /// workgroup settings override client-side settings, then the query uses the location for
+ /// the query results and the encryption configuration that are specified for the
+ /// workgroup.
     pub result_configuration: std::option::Option<crate::model::ResultConfiguration>,
     /// The database in which the query execution occurred.
     pub query_execution_context: std::option::Option<crate::model::QueryExecutionContext>,
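A sketch of building that context for a request (database and catalog names are placeholders):

```rust
use aws_sdk_athena::model::QueryExecutionContext;

// Sketch only: pin a query to a database in a specific data catalog.
fn example_context() -> QueryExecutionContext {
    QueryExecutionContext::builder()
        .database("default")
        .catalog("AwsDataCatalog")
        .build()
}
```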
@@ -2338,7 +2366,7 @@ pub mod query_execution {
     /// statements. DML indicates DML (Data Manipulation Language) query
     /// statements, such as CREATE TABLE AS SELECT. UTILITY indicates
     /// query statements other than DDL and DML, such as SHOW CREATE TABLE, or
-    /// DESCRIBE .
+    /// DESCRIBE TABLE.
pub fn statement_type(mut self, input: crate::model::StatementType) -> Self {
self.statement_type = Some(input);
self
@@ -2350,10 +2378,11 @@ pub mod query_execution {
self.statement_type = input;
self
}
- /// The location in Amazon S3 where query results were stored and the encryption option,
- /// if any, used for query results. These are known as "client-side settings". If workgroup
- /// settings override client-side settings, then the query uses the location for the query
- /// results and the encryption configuration that are specified for the workgroup.
+ /// The location in Amazon S3 where query results were stored and the encryption
+ /// option, if any, used for query results. These are known as "client-side settings". If
+ /// workgroup settings override client-side settings, then the query uses the location for
+ /// the query results and the encryption configuration that are specified for the
+ /// workgroup.
pub fn result_configuration(mut self, input: crate::model::ResultConfiguration) -> Self {
self.result_configuration = Some(input);
self
@@ -2460,25 +2489,26 @@ pub struct QueryExecutionStatistics {
/// The number of bytes in the data that was queried.
pub data_scanned_in_bytes: std::option::Option,
/// The location and file name of a data manifest file. The manifest file is saved to the
- /// Athena query results location in Amazon S3. The manifest file tracks files that the
- /// query wrote to Amazon S3. If the query fails, the manifest file also tracks files that
- /// the query intended to write. The manifest is useful for identifying orphaned files
- /// resulting from a failed query. For more information, see Working with Query Results, Output Files, and
- /// Query History in the Amazon Athena User Guide.
+ /// Athena query results location in Amazon S3. The manifest file
+ /// tracks files that the query wrote to Amazon S3. If the query fails, the manifest
+ /// file also tracks files that the query intended to write. The manifest is useful for
+ /// identifying orphaned files resulting from a failed query. For more information, see
+ /// Working with Query
+ /// Results, Output Files, and Query History in the Amazon Athena User Guide.
pub data_manifest_location: std::option::Option,
/// The number of milliseconds that Athena took to run the query.
pub total_execution_time_in_millis: std::option::Option,
/// The number of milliseconds that the query was in your query queue waiting for
- /// resources. Note that if transient errors occur, Athena might automatically add the query
- /// back to the queue.
+ /// resources. Note that if transient errors occur, Athena might automatically
+ /// add the query back to the queue.
pub query_queue_time_in_millis: std::option::Option,
- /// The number of milliseconds that Athena took to plan the query processing flow. This
- /// includes the time spent retrieving table partitions from the data source. Note that
- /// because the query engine performs the query planning, query planning time is a subset of
- /// engine processing time.
+ /// The number of milliseconds that Athena took to plan the query processing
+ /// flow. This includes the time spent retrieving table partitions from the data source.
+ /// Note that because the query engine performs the query planning, query planning time is a
+ /// subset of engine processing time.
pub query_planning_time_in_millis: std::option::Option,
- /// The number of milliseconds that Athena took to finalize and publish the query results
- /// after the query engine finished running the query.
+ /// The number of milliseconds that Athena took to finalize and publish the
+ /// query results after the query engine finished running the query.
pub service_processing_time_in_millis: std::option::Option,
}
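Read back, those timing fields look like this (a sketch only; the client and execution ID are placeholders):

```rust
// Sketch only: print the timing breakdown documented above.
async fn print_stats(client: &aws_sdk_athena::Client, id: &str) -> Result<(), Box<dyn std::error::Error>> {
    let resp = client.get_query_execution().query_execution_id(id).send().await?;
    if let Some(stats) = resp.query_execution.and_then(|q| q.statistics) {
        println!(
            "queue {:?} ms, planning {:?} ms, total {:?} ms, finalize {:?} ms",
            stats.query_queue_time_in_millis,
            stats.query_planning_time_in_millis,
            stats.total_execution_time_in_millis,
            stats.service_processing_time_in_millis,
        );
    }
    Ok(())
}
```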
impl std::fmt::Debug for QueryExecutionStatistics {
@@ -2546,11 +2576,12 @@ pub mod query_execution_statistics {
self
}
/// The location and file name of a data manifest file. The manifest file is saved to the
- /// Athena query results location in Amazon S3. The manifest file tracks files that the
- /// query wrote to Amazon S3. If the query fails, the manifest file also tracks files that
- /// the query intended to write. The manifest is useful for identifying orphaned files
- /// resulting from a failed query. For more information, see Working with Query Results, Output Files, and
- /// Query History in the Amazon Athena User Guide.
+ /// Athena query results location in Amazon S3. The manifest file
+ /// tracks files that the query wrote to Amazon S3. If the query fails, the manifest
+ /// file also tracks files that the query intended to write. The manifest is useful for
+ /// identifying orphaned files resulting from a failed query. For more information, see
+ /// Working with Query
+ /// Results, Output Files, and Query History in the Amazon Athena User Guide.
    pub fn data_manifest_location(mut self, input: impl Into<std::string::String>) -> Self {
self.data_manifest_location = Some(input.into());
self
@@ -2575,8 +2606,8 @@ pub mod query_execution_statistics {
self
}
/// The number of milliseconds that the query was in your query queue waiting for
- /// resources. Note that if transient errors occur, Athena might automatically add the query
- /// back to the queue.
+ /// resources. Note that if transient errors occur, Athena might automatically
+ /// add the query back to the queue.
pub fn query_queue_time_in_millis(mut self, input: i64) -> Self {
self.query_queue_time_in_millis = Some(input);
self
@@ -2585,10 +2616,10 @@ pub mod query_execution_statistics {
self.query_queue_time_in_millis = input;
self
}
- /// The number of milliseconds that Athena took to plan the query processing flow. This
- /// includes the time spent retrieving table partitions from the data source. Note that
- /// because the query engine performs the query planning, query planning time is a subset of
- /// engine processing time.
+ /// The number of milliseconds that Athena took to plan the query processing
+ /// flow. This includes the time spent retrieving table partitions from the data source.
+ /// Note that because the query engine performs the query planning, query planning time is a
+ /// subset of engine processing time.
pub fn query_planning_time_in_millis(mut self, input: i64) -> Self {
self.query_planning_time_in_millis = Some(input);
self
@@ -2600,8 +2631,8 @@ pub mod query_execution_statistics {
self.query_planning_time_in_millis = input;
self
}
- /// The number of milliseconds that Athena took to finalize and publish the query results
- /// after the query engine finished running the query.
+ /// The number of milliseconds that Athena took to finalize and publish the
+ /// query results after the query engine finished running the query.
pub fn service_processing_time_in_millis(mut self, input: i64) -> Self {
self.service_processing_time_in_millis = Some(input);
self
@@ -2640,16 +2671,16 @@ impl QueryExecutionStatistics {
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct QueryExecutionStatus {
/// The state of query execution. QUEUED
indicates that the query has been
- /// submitted to the service, and Athena will execute the query as soon as resources are
- /// available. RUNNING
indicates that the query is in execution phase.
- /// SUCCEEDED
indicates that the query completed without errors.
+ /// submitted to the service, and Athena will execute the query as soon as
+ /// resources are available. RUNNING
indicates that the query is in execution
+ /// phase. SUCCEEDED
indicates that the query completed without errors.
/// FAILED
indicates that the query experienced an error and did not
/// complete processing. CANCELLED
indicates that a user input interrupted
/// query execution.
///
- /// Athena automatically retries your queries in cases of certain transient errors. As
- /// a result, you may see the query state transition from RUNNING
or
- /// FAILED
to QUEUED
.
+ /// Athena automatically retries your queries in cases of certain
+ /// transient errors. As a result, you may see the query state transition from
+ /// RUNNING
or FAILED
to QUEUED
.
///
pub state: std::option::Option,
/// Further detail about the status of the query.
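A polling sketch that tolerates the state transition noted above (assumes a fluent `Client` and a tokio runtime; the execution ID is a placeholder):

```rust
use aws_sdk_athena::model::QueryExecutionState;

// Sketch only: poll until a terminal state, allowing RUNNING -> QUEUED
// transitions caused by Athena's automatic retries of transient errors.
async fn wait_for_query(
    client: &aws_sdk_athena::Client,
    id: &str,
) -> Result<(), Box<dyn std::error::Error>> {
    loop {
        let resp = client.get_query_execution().query_execution_id(id).send().await?;
        match resp.query_execution.and_then(|q| q.status).and_then(|s| s.state) {
            Some(QueryExecutionState::Succeeded) => return Ok(()),
            Some(QueryExecutionState::Failed) | Some(QueryExecutionState::Cancelled) => {
                return Err("query did not succeed".into());
            }
            _ => tokio::time::sleep(std::time::Duration::from_millis(500)).await,
        }
    }
}
```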
@@ -2682,16 +2713,16 @@ pub mod query_execution_status {
}
impl Builder {
/// The state of query execution. QUEUED
indicates that the query has been
- /// submitted to the service, and Athena will execute the query as soon as resources are
- /// available. RUNNING
indicates that the query is in execution phase.
- /// SUCCEEDED
indicates that the query completed without errors.
+ /// submitted to the service, and Athena will execute the query as soon as
+ /// resources are available. RUNNING
indicates that the query is in execution
+ /// phase. SUCCEEDED
indicates that the query completed without errors.
/// FAILED
indicates that the query experienced an error and did not
/// complete processing. CANCELLED
indicates that a user input interrupted
/// query execution.
///
- /// Athena automatically retries your queries in cases of certain transient errors. As
- /// a result, you may see the query state transition from RUNNING
or
- /// FAILED
to QUEUED
.
+ /// Athena automatically retries your queries in cases of certain
+ /// transient errors. As a result, you may see the query state transition from
+ /// RUNNING
or FAILED
to QUEUED
.
///
pub fn state(mut self, input: crate::model::QueryExecutionState) -> Self {
self.state = Some(input);
@@ -3106,22 +3137,21 @@ impl NamedQuery {
}
}
-/// Contains information about a data catalog in an AWS account.
+/// Contains information about a data catalog in an Amazon Web Services account.
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct DataCatalog {
-    /// The name of the data catalog. The catalog name must be unique for the AWS account and
-    /// can use a maximum of 128 alphanumeric, underscore, at sign, or hyphen characters.
+    /// The name of the data catalog. The catalog name must be unique for the
+    /// Amazon Web Services account and can use a maximum of 128 alphanumeric, underscore, at sign,
+    /// or hyphen characters.
     pub name: std::option::Option<std::string::String>,
     /// An optional description of the data catalog.
     pub description: std::option::Option<std::string::String>,
-    /// The type of data catalog: LAMBDA for a federated catalog or
-    /// HIVE for an external hive metastore. GLUE refers to the
-    /// AwsDataCatalog that already exists in your account, of which you can
-    /// have only one.
+    /// The type of data catalog: LAMBDA for a federated catalog,
+    /// HIVE for an external hive metastore, or GLUE for a
+    /// Glue Data Catalog.
     pub r#type: std::option::Option<crate::model::DataCatalogType>,
-    /// Specifies the Lambda function or functions to use for the data catalog. This is a
-    /// mapping whose values depend on the catalog type.
+    /// Specifies the Lambda function or functions to use for the data catalog.
+    /// This is a mapping whose values depend on the catalog type.
///
/// -
///
For the HIVE
data catalog type, use the following syntax. The
@@ -3139,9 +3169,9 @@ pub struct DataCatalog {
/// of required parameters, but not both.
///
/// -
- ///
If you have one Lambda function that processes metadata and another
- /// for reading the actual data, use the following syntax. Both parameters
- /// are required.
+ /// If you have one Lambda function that processes metadata
+ /// and another for reading the actual data, use the following syntax. Both
+ /// parameters are required.
///
/// metadata-function=lambda_arn,
/// record-function=lambda_arn
@@ -3149,9 +3179,8 @@ pub struct DataCatalog {
///
///
/// -
- ///
If you have a composite Lambda function that processes both metadata
- /// and data, use the following syntax to specify your Lambda
- /// function.
+ /// If you have a composite Lambda function that processes
+ /// both metadata and data, use the following syntax to specify your Lambda function.
///
/// function=lambda_arn
///
@@ -3159,6 +3188,30 @@ pub struct DataCatalog {
///
///
///
+ /// -
+ ///
The GLUE
type takes a catalog ID parameter and is required. The
+ ///
+ /// catalog_id
+ ///
is the account ID of the
+ /// Amazon Web Services account to which the Glue catalog
+ /// belongs.
+ ///
+ /// catalog-id=catalog_id
+ ///
+ ///
+ ///
+ /// -
+ ///
The GLUE
data catalog type also applies to the default
+ /// AwsDataCatalog
that already exists in your account, of
+ /// which you can have only one and cannot modify.
+ ///
+ /// -
+ ///
Queries that specify a Glue Data Catalog other than the default
+ /// AwsDataCatalog
must be run on Athena engine
+ /// version 2.
+ ///
+ ///
+ ///
///
     pub parameters:
         std::option::Option<std::collections::HashMap<std::string::String, std::string::String>>,
@@ -3187,8 +3240,8 @@ pub mod data_catalog {
>,
}
impl Builder {
- /// The name of the data catalog. The catalog name must be unique for the AWS account and
- /// can use a maximum of 128 alphanumeric, underscore, at sign, or hyphen characters.
+ /// The name of the data catalog. The catalog name must be unique for the Amazon Web Services account and can use a maximum of 128 alphanumeric, underscore, at sign,
+ /// or hyphen characters.
    pub fn name(mut self, input: impl Into<std::string::String>) -> Self {
self.name = Some(input.into());
self
@@ -3206,10 +3259,9 @@ pub mod data_catalog {
self.description = input;
self
}
-    /// The type of data catalog: LAMBDA for a federated catalog or
-    /// HIVE for an external hive metastore. GLUE refers to the
-    /// AwsDataCatalog that already exists in your account, of which you can
-    /// have only one.
+    /// The type of data catalog: LAMBDA for a federated catalog,
+    /// HIVE for an external hive metastore, or GLUE for a
+    /// Glue Data Catalog.
pub fn r#type(mut self, input: crate::model::DataCatalogType) -> Self {
self.r#type = Some(input);
self
diff --git a/sdk/athena/src/operation.rs b/sdk/athena/src/operation.rs
index 3ae3cc9c6043..5d166dcae663 100644
--- a/sdk/athena/src/operation.rs
+++ b/sdk/athena/src/operation.rs
@@ -69,7 +69,7 @@ impl smithy_http::response::ParseStrictResponse for BatchGetQueryExecution {
}
/// Creates (registers) a data catalog with the specified name and properties. Catalogs
-/// created are visible to all users of the same AWS account.
+/// created are visible to all users of the same Amazon Web Services account.
#[derive(std::default::Default, std::clone::Clone, std::fmt::Debug)]
pub struct CreateDataCatalog {
_private: (),
@@ -99,8 +99,9 @@ impl smithy_http::response::ParseStrictResponse for CreateDataCatalog {
/// Creates a named query in the specified workgroup. Requires that you have access to the
/// workgroup.
-/// For code samples using the AWS SDK for Java, see Examples and
-/// Code Samples in the Amazon Athena User Guide.
+/// For code samples using the Amazon Web Services SDK for Java, see Examples and
+/// Code Samples in the Amazon Athena User
+/// Guide.
#[derive(std::default::Default, std::clone::Clone, std::fmt::Debug)]
pub struct CreateNamedQuery {
_private: (),
@@ -214,8 +215,9 @@ impl smithy_http::response::ParseStrictResponse for DeleteDataCatalog {
/// Deletes the named query if you have access to the workgroup in which the query was
/// saved.
-/// For code samples using the AWS SDK for Java, see Examples and
-/// Code Samples in the Amazon Athena User Guide.
+/// For code samples using the Amazon Web Services SDK for Java, see Examples and
+/// Code Samples in the Amazon Athena User
+/// Guide.
#[derive(std::default::Default, std::clone::Clone, std::fmt::Debug)]
pub struct DeleteNamedQuery {
_private: (),
@@ -440,19 +442,19 @@ impl smithy_http::response::ParseStrictResponse for GetQueryExecution {
}
/// Streams the results of a single query execution specified by
-/// QueryExecutionId from the Athena query results location in Amazon S3.
-/// For more information, see Query Results in the Amazon
-/// Athena User Guide. This request does not execute the query but returns
-/// results. Use StartQueryExecution to run a query.
+/// QueryExecutionId from the Athena query results location in
+/// Amazon S3. For more information, see Query Results in the Amazon Athena User Guide. This request does not execute the query
+/// but returns results. Use StartQueryExecution to run a query.
/// To stream query results successfully, the IAM principal with permission to call
/// GetQueryResults also must have permissions to the Amazon S3
/// GetObject action for the Athena query results location.
///
-/// IAM principals with permission to the Amazon S3 GetObject action for
-/// the query results location are able to retrieve query results from Amazon S3 even if
-/// permission to the GetQueryResults action is denied. To restrict user or
-/// role access, ensure that Amazon S3 permissions to the Athena query location are
-/// denied.
+/// IAM principals with permission to the Amazon S3
+/// GetObject action for the query results location are able to retrieve
+/// query results from Amazon S3 even if permission to the
+/// GetQueryResults action is denied. To restrict user or role access,
+/// ensure that Amazon S3 permissions to the Athena query location
+/// are denied.
///
#[derive(std::default::Default, std::clone::Clone, std::fmt::Debug)]
pub struct GetQueryResults {
@@ -561,7 +563,7 @@ impl smithy_http::response::ParseStrictResponse for ListDatabases {
}
}
-/// Lists the data catalogs in the current AWS account.
+/// Lists the data catalogs in the current Amazon Web Services account.
#[derive(std::default::Default, std::clone::Clone, std::fmt::Debug)]
pub struct ListDataCatalogs {
_private: (),
@@ -621,8 +623,9 @@ impl smithy_http::response::ParseStrictResponse for ListEngineVersions {
/// Provides a list of available query IDs only for queries saved in the specified
/// workgroup. Requires that you have access to the specified workgroup. If a workgroup is
/// not specified, lists the saved queries for the primary workgroup.
-/// For code samples using the AWS SDK for Java, see Examples and
-/// Code Samples in the Amazon Athena User Guide.
+/// For code samples using the Amazon Web Services SDK for Java, see Examples and
+/// Code Samples in the Amazon Athena User
+/// Guide.
#[derive(std::default::Default, std::clone::Clone, std::fmt::Debug)]
pub struct ListNamedQueries {
_private: (),
@@ -682,8 +685,9 @@ impl smithy_http::response::ParseStrictResponse for ListPreparedStatements {
/// workgroup. If a workgroup is not specified, returns a list of query execution IDs for
/// the primary workgroup. Requires you to have access to the workgroup in which the queries
/// ran.
-/// For code samples using the AWS SDK for Java, see Examples and
-/// Code Samples in the Amazon Athena User Guide.
+/// For code samples using the Amazon Web Services SDK for Java, see Examples and
+/// Code Samples in the Amazon Athena User
+/// Guide.
#[derive(std::default::Default, std::clone::Clone, std::fmt::Debug)]
pub struct ListQueryExecutions {
_private: (),
@@ -739,7 +743,8 @@ impl smithy_http::response::ParseStrictResponse for ListTableMetadata {
}
}
-/// Lists the tags associated with an Athena workgroup or data catalog resource.
+/// Lists the tags associated with an Athena workgroup or data catalog
+/// resource.
#[derive(std::default::Default, std::clone::Clone, std::fmt::Debug)]
pub struct ListTagsForResource {
_private: (),
@@ -796,8 +801,9 @@ impl smithy_http::response::ParseStrictResponse for ListWorkGroups {
/// Runs the SQL query statements contained in the Query. Requires you to
/// have access to the workgroup in which the query ran. Running queries against an external
/// catalog requires GetDataCatalog permission to the catalog. For code
-/// samples using the AWS SDK for Java, see Examples and
-/// Code Samples in the Amazon Athena User Guide.
+/// samples using the Amazon Web Services SDK for Java, see Examples and
+/// Code Samples in the Amazon Athena User
+/// Guide.
#[derive(std::default::Default, std::clone::Clone, std::fmt::Debug)]
pub struct StartQueryExecution {
_private: (),
@@ -827,8 +833,9 @@ impl smithy_http::response::ParseStrictResponse for StartQueryExecution {
/// Stops a query execution. Requires you to have access to the workgroup in which the
/// query ran.
-/// For code samples using the AWS SDK for Java, see Examples and
-/// Code Samples in the Amazon Athena User Guide.
+/// For code samples using the Amazon Web Services SDK for Java, see Examples and
+/// Code Samples in the Amazon Athena User
+/// Guide.
#[derive(std::default::Default, std::clone::Clone, std::fmt::Debug)]
pub struct StopQueryExecution {
_private: (),
@@ -856,12 +863,13 @@ impl smithy_http::response::ParseStrictResponse for StopQueryExecution {
}
}
-/// Adds one or more tags to an Athena resource. A tag is a label that you assign to a
-/// resource. In Athena, a resource can be a workgroup or data catalog. Each tag consists of
-/// a key and an optional value, both of which you define. For example, you can use tags to
-/// categorize Athena workgroups or data catalogs by purpose, owner, or environment. Use a
-/// consistent set of tag keys to make it easier to search and filter workgroups or data
-/// catalogs in your account. For best practices, see Tagging Best Practices. Tag keys can be from 1 to 128 UTF-8 Unicode
+/// Adds one or more tags to an Athena resource. A tag is a label that you
+/// assign to a resource. In Athena, a resource can be a workgroup or data
+/// catalog. Each tag consists of a key and an optional value, both of which you define. For
+/// example, you can use tags to categorize Athena workgroups or data catalogs
+/// by purpose, owner, or environment. Use a consistent set of tag keys to make it easier to
+/// search and filter workgroups or data catalogs in your account. For best practices, see
+/// Tagging Best Practices. Tag keys can be from 1 to 128 UTF-8 Unicode
/// characters, and tag values can be from 0 to 256 UTF-8 Unicode characters. Tags can use
/// letters and numbers representable in UTF-8, and the following characters: + - = . _ : /
/// @. Tag keys and values are case-sensitive. Tag keys must be unique per resource. If you
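The tag constraints in that doc comment are mechanical enough to check up front. A standalone sketch; the validator name is ours and it enforces only the rules quoted above (key length 1 to 128, value length 0 to 256, letters and numbers plus + - = . _ : / @).

```rust
/// Hypothetical validator for the Athena tag rules quoted above.
fn is_valid_athena_tag(key: &str, value: &str) -> bool {
    // Letters and numbers representable in UTF-8, plus the listed symbols.
    let allowed = |c: char| c.is_alphanumeric() || "+-=._:/@".contains(c);
    let key_len = key.chars().count();
    (1..=128).contains(&key_len)
        && value.chars().count() <= 256
        && key.chars().all(allowed)
        && value.chars().all(allowed)
}

fn main() {
    assert!(is_valid_athena_tag("Environment", "production"));
    assert!(!is_valid_athena_tag("", "no-key")); // keys need at least one character
}
```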
diff --git a/sdk/athena/src/output.rs b/sdk/athena/src/output.rs
index fa7b74192e92..8783646c5624 100644
--- a/sdk/athena/src/output.rs
+++ b/sdk/athena/src/output.rs
@@ -228,9 +228,9 @@ pub struct ListWorkGroupsOutput {
/// A list of WorkGroupSummary objects that include the names,
/// descriptions, creation times, and states for each workgroup.
pub work_groups: std::option::Option<std::vec::Vec<crate::model::WorkGroupSummary>>,
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub next_token: std::option::Option,
}
impl std::fmt::Debug for ListWorkGroupsOutput {
@@ -264,9 +264,9 @@ pub mod list_work_groups_output {
self.work_groups = input;
self
}
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub fn next_token(mut self, input: impl Into) -> Self {
self.next_token = Some(input.into());
self
@@ -360,9 +360,9 @@ impl ListTagsForResourceOutput {
pub struct ListTableMetadataOutput {
/// A list of table metadata.
pub table_metadata_list: std::option::Option>,
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the NextToken
- /// from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub next_token: std::option::Option,
}
impl std::fmt::Debug for ListTableMetadataOutput {
@@ -400,9 +400,9 @@ pub mod list_table_metadata_output {
self.table_metadata_list = input;
self
}
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the NextToken
- /// from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub fn next_token(mut self, input: impl Into) -> Self {
self.next_token = Some(input.into());
self
@@ -497,9 +497,9 @@ pub struct ListPreparedStatementsOutput {
/// The list of prepared statements for the workgroup.
pub prepared_statements:
std::option::Option>,
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub next_token: std::option::Option,
}
impl std::fmt::Debug for ListPreparedStatementsOutput {
@@ -537,9 +537,9 @@ pub mod list_prepared_statements_output {
self.prepared_statements = input;
self
}
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub fn next_token(mut self, input: impl Into) -> Self {
self.next_token = Some(input.into());
self
@@ -569,9 +569,9 @@ impl ListPreparedStatementsOutput {
pub struct ListNamedQueriesOutput {
/// The list of unique query IDs.
pub named_query_ids: std::option::Option<std::vec::Vec<std::string::String>>,
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub next_token: std::option::Option,
}
impl std::fmt::Debug for ListNamedQueriesOutput {
@@ -605,9 +605,9 @@ pub mod list_named_queries_output {
self.named_query_ids = input;
self
}
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub fn next_token(mut self, input: impl Into) -> Self {
self.next_token = Some(input.into());
self
@@ -637,9 +637,9 @@ impl ListNamedQueriesOutput {
pub struct ListEngineVersionsOutput {
/// A list of engine versions that are available to choose from.
pub engine_versions: std::option::Option>,
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub next_token: std::option::Option,
}
impl std::fmt::Debug for ListEngineVersionsOutput {
@@ -673,9 +673,9 @@ pub mod list_engine_versions_output {
self.engine_versions = input;
self
}
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub fn next_token(mut self, input: impl Into) -> Self {
self.next_token = Some(input.into());
self
@@ -705,9 +705,9 @@ impl ListEngineVersionsOutput {
pub struct ListDataCatalogsOutput {
/// A summary list of data catalogs.
pub data_catalogs_summary: std::option::Option>,
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the NextToken
- /// from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub next_token: std::option::Option,
}
impl std::fmt::Debug for ListDataCatalogsOutput {
@@ -745,9 +745,9 @@ pub mod list_data_catalogs_output {
self.data_catalogs_summary = input;
self
}
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the NextToken
- /// from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub fn next_token(mut self, input: impl Into) -> Self {
self.next_token = Some(input.into());
self
@@ -777,9 +777,9 @@ impl ListDataCatalogsOutput {
pub struct ListDatabasesOutput {
/// A list of databases from a data catalog.
pub database_list: std::option::Option>,
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the NextToken
- /// from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub next_token: std::option::Option,
}
impl std::fmt::Debug for ListDatabasesOutput {
@@ -813,9 +813,9 @@ pub mod list_databases_output {
self.database_list = input;
self
}
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the NextToken
- /// from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub fn next_token(mut self, input: impl Into) -> Self {
self.next_token = Some(input.into());
self
@@ -941,13 +941,14 @@ impl GetTableMetadataOutput {
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct GetQueryResultsOutput {
- /// The number of rows inserted with a CREATE TABLE AS SELECT statement.
+ /// The number of rows inserted with a CREATE TABLE AS SELECT statement.
+ ///
pub update_count: std::option::Option,
/// The results of the query execution.
pub result_set: std::option::Option,
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub next_token: std::option::Option,
}
impl std::fmt::Debug for GetQueryResultsOutput {
@@ -970,7 +971,8 @@ pub mod get_query_results_output {
pub(crate) next_token: std::option::Option,
}
impl Builder {
- /// The number of rows inserted with a CREATE TABLE AS SELECT statement.
+ /// The number of rows inserted with a CREATE TABLE AS SELECT statement.
+ ///
pub fn update_count(mut self, input: i64) -> Self {
self.update_count = Some(input);
self
@@ -991,9 +993,9 @@ pub mod get_query_results_output {
self.result_set = input;
self
}
- /// A token generated by the Athena service that specifies where to continue pagination if
- /// a previous request was truncated. To obtain the next set of pages, pass in the
- /// NextToken from the response object of the previous page call.
+ /// A token generated by the Athena service that specifies where to continue
+ /// pagination if a previous request was truncated. To obtain the next set of pages, pass in
+ /// the NextToken from the response object of the previous page call.
pub fn next_token(mut self, input: impl Into) -> Self {
self.next_token = Some(input.into());
self
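Every one of the outputs above repeats the same NextToken contract, which implies the usual pagination loop. A sketch under assumptions: `client` is an aws_sdk_athena::Client constructed elsewhere, and the fluent builder exposes the set_next_token and send methods that this codegen's builders normally generate.

```rust
// Sketch only: drains every page of ListNamedQueries by threading NextToken.
async fn all_named_query_ids(
    client: &aws_sdk_athena::Client,
) -> Result<Vec<String>, Box<dyn std::error::Error>> {
    let mut ids = Vec::new();
    let mut next_token: Option<String> = None;
    loop {
        let page = client
            .list_named_queries()
            // Pass the NextToken from the previous page call, if any.
            .set_next_token(next_token.take())
            .send()
            .await?;
        ids.extend(page.named_query_ids.unwrap_or_default());
        match page.next_token {
            // A token means the response was truncated; fetch the next page.
            Some(token) => next_token = Some(token),
            // No token means the previous page was the last one.
            None => break,
        }
    }
    Ok(ids)
}
```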
diff --git a/sdk/auditmanager/Cargo.toml b/sdk/auditmanager/Cargo.toml
index da78499862f4..16dba876cfc0 100644
--- a/sdk/auditmanager/Cargo.toml
+++ b/sdk/auditmanager/Cargo.toml
@@ -1,7 +1,7 @@
# Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
[package]
name = "aws-sdk-auditmanager"
-version = "0.0.15-alpha"
+version = "0.0.16-alpha"
description = "Welcome to the Audit Manager API reference. This guide is for developers who need detailed information about the Audit Manager API operations, data types, and errors.
\n Audit Manager is a service that provides automated evidence collection so that you\n can continuously audit your Amazon Web Services usage, and assess the effectiveness of your controls to\n better manage risk and simplify compliance.
\n Audit Manager provides pre-built frameworks that structure and automate assessments\n for a given compliance standard. Frameworks include a pre-built collection of controls with\n descriptions and testing procedures, which are grouped according to the requirements of the\n specified compliance standard or regulation. You can also customize frameworks and controls\n to support internal audits with unique requirements.
\n \n Use the following links to get started with the Audit Manager API:
\n \n - \n
\n Actions: An alphabetical list of all Audit Manager API operations.
\n \n - \n
\n Data types: An alphabetical list of all Audit Manager data types.
\n \n - \n
\n Common parameters: Parameters that all Query operations can use.
\n \n - \n
\n Common errors: Client and server errors that all operations can return.
\n \n
\n \n If you're new to Audit Manager, we recommend that you review the Audit Manager User Guide.
"
authors = ["AWS Rust SDK Team ", "Russell Cohen "]
license = "Apache-2.0"
diff --git a/sdk/autoscaling/Cargo.toml b/sdk/autoscaling/Cargo.toml
index b660c880d861..675d65b1cd65 100644
--- a/sdk/autoscaling/Cargo.toml
+++ b/sdk/autoscaling/Cargo.toml
@@ -1,7 +1,7 @@
# Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
[package]
name = "aws-sdk-autoscaling"
-version = "0.0.15-alpha"
+version = "0.0.16-alpha"
description = "Amazon EC2 Auto Scaling \n \n \n \n \n \n\n \n Amazon EC2 Auto Scaling is designed to automatically launch or terminate EC2 instances\n based on user-defined scaling policies, scheduled actions, and health checks.
\n For more information about Amazon EC2 Auto Scaling, see the Amazon EC2 Auto Scaling User Guide. For information about granting IAM users required\n permissions for calls to Amazon EC2 Auto Scaling, see Granting\n IAM users required permissions for Amazon EC2 Auto Scaling resources in the\n Amazon EC2 Auto Scaling API Reference.
"
authors = ["AWS Rust SDK Team ", "Russell Cohen "]
license = "Apache-2.0"
diff --git a/sdk/autoscaling/src/client.rs b/sdk/autoscaling/src/client.rs
index c1f1becdc14b..1ed65ca1046d 100644
--- a/sdk/autoscaling/src/client.rs
+++ b/sdk/autoscaling/src/client.rs
@@ -5000,11 +5000,10 @@ pub mod fluent_builders {
}
/// The strategy to use for the instance refresh. The only valid value is
/// Rolling.
- /// A rolling update is an update that is applied to all instances in an Auto Scaling group until
- /// all instances have been updated. A rolling update can fail due to failed health checks
- /// or if instances are on standby or are protected from scale in. If the rolling update
- /// process fails, any instances that were already replaced are not rolled back to their
- /// previous configuration.
+ /// A rolling update helps you update your instances gradually. A rolling update can fail
+ /// due to failed health checks or if instances are on standby or are protected from scale
+ /// in. If the rolling update process fails, any instances that are replaced are not rolled
+ /// back to their previous configuration.
pub fn strategy(mut self, input: crate::model::RefreshStrategy) -> Self {
self.inner = self.inner.strategy(input);
self
@@ -5016,12 +5015,31 @@ pub mod fluent_builders {
self.inner = self.inner.set_strategy(input);
self
}
- /// Set of preferences associated with the instance refresh request.
- /// If not provided, the default values are used. For MinHealthyPercentage,
- /// the default value is 90. For InstanceWarmup, the default is to
- /// use the value specified for the health check grace period for the Auto Scaling group.
- /// For more information, see RefreshPreferences in the Amazon EC2 Auto Scaling API
- /// Reference.
+ /// The desired configuration. For example, the desired configuration can specify a new
+ /// launch template or a new version of the current launch template.
+ /// Once the instance refresh succeeds, Amazon EC2 Auto Scaling updates the settings of the Auto Scaling group to
+ /// reflect the new desired configuration.
+ ///
+ /// When you specify a new launch template or a new version of the current launch
+ /// template for your desired configuration, consider enabling the
+ /// SkipMatching property in preferences. If it's enabled, Amazon EC2 Auto Scaling
+ /// skips replacing instances that already use the specified launch template and
+ /// version. This can help you reduce the number of replacements that are required to
+ /// apply updates.
+ ///
+ pub fn desired_configuration(mut self, input: crate::model::DesiredConfiguration) -> Self {
+ self.inner = self.inner.desired_configuration(input);
+ self
+ }
+ pub fn set_desired_configuration(
+ mut self,
+ input: std::option::Option,
+ ) -> Self {
+ self.inner = self.inner.set_desired_configuration(input);
+ self
+ }
+ /// Set of preferences associated with the instance refresh request. If not provided, the
+ /// default values are used.
pub fn preferences(mut self, input: crate::model::RefreshPreferences) -> Self {
self.inner = self.inner.preferences(input);
self
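Putting the new surface together, a sketch of a start_instance_refresh call that supplies a desired configuration and enables SkipMatching. It assumes `client` is an aws_sdk_autoscaling::Client built elsewhere, that LaunchTemplateSpecification exposes builder setters matching its documented fields, and that the group and template names are placeholders.

```rust
use aws_sdk_autoscaling::model::{
    DesiredConfiguration, LaunchTemplateSpecification, RefreshPreferences,
};

// Sketch only: rolls the group onto a new launch template version, skipping
// instances that already match the desired configuration.
async fn refresh_to_new_template(
    client: &aws_sdk_autoscaling::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    let desired = DesiredConfiguration::builder()
        .launch_template(
            LaunchTemplateSpecification::builder()
                .launch_template_name("my-template") // placeholder
                .version("$Latest")
                .build(),
        )
        .build();
    let prefs = RefreshPreferences::builder().skip_matching(true).build();
    client
        .start_instance_refresh()
        .auto_scaling_group_name("my-asg") // placeholder
        .desired_configuration(desired)
        .preferences(prefs)
        .send()
        .await?;
    Ok(())
}
```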
diff --git a/sdk/autoscaling/src/input.rs b/sdk/autoscaling/src/input.rs
index 1085995779c0..d6250ef7f1f7 100644
--- a/sdk/autoscaling/src/input.rs
+++ b/sdk/autoscaling/src/input.rs
@@ -10038,6 +10038,7 @@ pub mod start_instance_refresh_input {
pub struct Builder {
pub(crate) auto_scaling_group_name: std::option::Option,
pub(crate) strategy: std::option::Option,
+ pub(crate) desired_configuration: std::option::Option,
pub(crate) preferences: std::option::Option,
}
impl Builder {
@@ -10055,11 +10056,10 @@ pub mod start_instance_refresh_input {
}
/// The strategy to use for the instance refresh. The only valid value is
/// Rolling.
- /// A rolling update is an update that is applied to all instances in an Auto Scaling group until
- /// all instances have been updated. A rolling update can fail due to failed health checks
- /// or if instances are on standby or are protected from scale in. If the rolling update
- /// process fails, any instances that were already replaced are not rolled back to their
- /// previous configuration.
+ /// A rolling update helps you update your instances gradually. A rolling update can fail
+ /// due to failed health checks or if instances are on standby or are protected from scale
+ /// in. If the rolling update process fails, any instances that are replaced are not rolled
+ /// back to their previous configuration.
pub fn strategy(mut self, input: crate::model::RefreshStrategy) -> Self {
self.strategy = Some(input);
self
@@ -10071,12 +10071,31 @@ pub mod start_instance_refresh_input {
self.strategy = input;
self
}
- /// Set of preferences associated with the instance refresh request.
- /// If not provided, the default values are used. For MinHealthyPercentage,
- /// the default value is 90. For InstanceWarmup, the default is to
- /// use the value specified for the health check grace period for the Auto Scaling group.
- /// For more information, see RefreshPreferences in the Amazon EC2 Auto Scaling API
- /// Reference.
+ /// The desired configuration. For example, the desired configuration can specify a new
+ /// launch template or a new version of the current launch template.
+ /// Once the instance refresh succeeds, Amazon EC2 Auto Scaling updates the settings of the Auto Scaling group to
+ /// reflect the new desired configuration.
+ ///
+ /// When you specify a new launch template or a new version of the current launch
+ /// template for your desired configuration, consider enabling the
+ /// SkipMatching property in preferences. If it's enabled, Amazon EC2 Auto Scaling
+ /// skips replacing instances that already use the specified launch template and
+ /// version. This can help you reduce the number of replacements that are required to
+ /// apply updates.
+ ///
+ pub fn desired_configuration(mut self, input: crate::model::DesiredConfiguration) -> Self {
+ self.desired_configuration = Some(input);
+ self
+ }
+ pub fn set_desired_configuration(
+ mut self,
+ input: std::option::Option,
+ ) -> Self {
+ self.desired_configuration = input;
+ self
+ }
+ /// Set of preferences associated with the instance refresh request. If not provided, the
+ /// default values are used.
pub fn preferences(mut self, input: crate::model::RefreshPreferences) -> Self {
self.preferences = Some(input);
self
@@ -10098,6 +10117,7 @@ pub mod start_instance_refresh_input {
Ok(crate::input::StartInstanceRefreshInput {
auto_scaling_group_name: self.auto_scaling_group_name,
strategy: self.strategy,
+ desired_configuration: self.desired_configuration,
preferences: self.preferences,
})
}
@@ -11136,18 +11156,26 @@ pub struct StartInstanceRefreshInput {
pub auto_scaling_group_name: std::option::Option,
/// The strategy to use for the instance refresh. The only valid value is
/// Rolling.
- /// A rolling update is an update that is applied to all instances in an Auto Scaling group until
- /// all instances have been updated. A rolling update can fail due to failed health checks
- /// or if instances are on standby or are protected from scale in. If the rolling update
- /// process fails, any instances that were already replaced are not rolled back to their
- /// previous configuration.
+ /// A rolling update helps you update your instances gradually. A rolling update can fail
+ /// due to failed health checks or if instances are on standby or are protected from scale
+ /// in. If the rolling update process fails, any instances that are replaced are not rolled
+ /// back to their previous configuration.
pub strategy: std::option::Option,
- /// Set of preferences associated with the instance refresh request.
- /// If not provided, the default values are used. For MinHealthyPercentage,
- /// the default value is 90. For InstanceWarmup, the default is to
- /// use the value specified for the health check grace period for the Auto Scaling group.
- /// For more information, see RefreshPreferences in the Amazon EC2 Auto Scaling API
- /// Reference.
+ /// The desired configuration. For example, the desired configuration can specify a new
+ /// launch template or a new version of the current launch template.
+ /// Once the instance refresh succeeds, Amazon EC2 Auto Scaling updates the settings of the Auto Scaling group to
+ /// reflect the new desired configuration.
+ ///
+ /// When you specify a new launch template or a new version of the current launch
+ /// template for your desired configuration, consider enabling the
+ /// SkipMatching property in preferences. If it's enabled, Amazon EC2 Auto Scaling
+ /// skips replacing instances that already use the specified launch template and
+ /// version. This can help you reduce the number of replacements that are required to
+ /// apply updates.
+ ///
+ pub desired_configuration: std::option::Option,
+ /// Set of preferences associated with the instance refresh request. If not provided, the
+ /// default values are used.
pub preferences: std::option::Option,
}
impl std::fmt::Debug for StartInstanceRefreshInput {
@@ -11155,6 +11183,7 @@ impl std::fmt::Debug for StartInstanceRefreshInput {
let mut formatter = f.debug_struct("StartInstanceRefreshInput");
formatter.field("auto_scaling_group_name", &self.auto_scaling_group_name);
formatter.field("strategy", &self.strategy);
+ formatter.field("desired_configuration", &self.desired_configuration);
formatter.field("preferences", &self.preferences);
formatter.finish()
}
diff --git a/sdk/autoscaling/src/model.rs b/sdk/autoscaling/src/model.rs
index 917f00c3e19a..af9698cf4488 100644
--- a/sdk/autoscaling/src/model.rs
+++ b/sdk/autoscaling/src/model.rs
@@ -1,18 +1,15 @@
// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
-/// Describes a mixed instances policy for an Auto Scaling group. With mixed instances, your Auto Scaling
-/// group can provision a combination of On-Demand Instances and Spot Instances across
-/// multiple instance types. For more information, see Auto Scaling groups with multiple
+/// Describes a mixed instances policy. A mixed instances policy contains the instance
+/// types Amazon EC2 Auto Scaling can launch, and other information Amazon EC2 Auto Scaling can use to launch instances to
+/// help you optimize your costs. For more information, see Auto Scaling groups with multiple
/// instance types and purchase options in the Amazon EC2 Auto Scaling User
/// Guide.
-/// You can create a mixed instances policy for a new Auto Scaling group, or you can create it for
-/// an existing group by updating the group to specify MixedInstancesPolicy as
-/// the top-level property instead of a launch configuration or launch template.
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct MixedInstancesPolicy {
- /// Specifies the launch template to use and optionally the instance types (overrides)
- /// that are used to provision EC2 instances to fulfill On-Demand and Spot capacities.
- /// Required when creating a mixed instances policy.
+ /// Specifies the launch template to use and the instance types (overrides) that are used
+ /// to provision EC2 instances to fulfill On-Demand and Spot capacities. Required when
+ /// creating a mixed instances policy.
pub launch_template: std::option::Option,
/// Specifies the instances distribution. If not provided, the value for each property in
/// InstancesDistribution
uses a default value.
@@ -36,9 +33,9 @@ pub mod mixed_instances_policy {
pub(crate) instances_distribution: std::option::Option,
}
impl Builder {
- /// Specifies the launch template to use and optionally the instance types (overrides)
- /// that are used to provision EC2 instances to fulfill On-Demand and Spot capacities.
- /// Required when creating a mixed instances policy.
+ /// Specifies the launch template to use and the instance types (overrides) that are used
+ /// to provision EC2 instances to fulfill On-Demand and Spot capacities. Required when
+ /// creating a mixed instances policy.
pub fn launch_template(mut self, input: crate::model::LaunchTemplate) -> Self {
self.launch_template = Some(input);
self
@@ -86,13 +83,13 @@ impl MixedInstancesPolicy {
/// The instances distribution specifies the distribution of On-Demand Instances and Spot
/// Instances, the maximum price to pay for Spot Instances, and how the Auto Scaling group allocates
/// instance types to fulfill On-Demand and Spot capacities.
-/// When you update SpotAllocationStrategy, SpotInstancePools,
-/// or SpotMaxPrice, this update action does not deploy any changes across the
-/// running Amazon EC2 instances in the group. Your existing Spot Instances continue to run
-/// as long as the maximum price for those instances is higher than the current Spot price.
-/// When scale out occurs, Amazon EC2 Auto Scaling launches instances based on the new settings. When scale
-/// in occurs, Amazon EC2 Auto Scaling terminates instances according to the group's termination
-/// policies.
+/// When you modify SpotAllocationStrategy, SpotInstancePools,
+/// or SpotMaxPrice in the UpdateAutoScalingGroup API call,
+/// this update action does not deploy any changes across the running Amazon EC2 instances
+/// in the group. Your existing Spot Instances continue to run as long as the maximum price
+/// for those instances is higher than the current Spot price. When scale out occurs,
+/// Amazon EC2 Auto Scaling launches instances based on the new settings. When scale in occurs, Amazon EC2 Auto Scaling
+/// terminates instances according to the group's termination policies.
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct InstancesDistribution {
@@ -293,7 +290,7 @@ impl InstancesDistribution {
/// Describes a launch template and overrides.
/// You specify these properties as part of a mixed instances policy.
-/// When you update the launch template or overrides, existing Amazon EC2 instances continue to
+/// When you update the launch template or overrides in the UpdateAutoScalingGroup API call, existing Amazon EC2 instances continue to
/// run. When scale out occurs, Amazon EC2 Auto Scaling launches instances to match the new settings. When
/// scale in occurs, Amazon EC2 Auto Scaling terminates instances according to the group's termination
/// policies.
@@ -506,11 +503,9 @@ impl LaunchTemplateOverrides {
}
}
-/// Describes the Amazon EC2 launch template and the launch template version that can be used
-/// by an Auto Scaling group to configure Amazon EC2 instances.
-/// The launch template that is specified must be configured for use with an Auto Scaling group.
-/// For more information, see Creating a launch
-/// template for an Auto Scaling group in the Amazon EC2 Auto Scaling User Guide.
+/// Describes the launch template and the version of the launch template that Amazon EC2 Auto Scaling
+/// uses to launch Amazon EC2 instances. For more information about launch templates, see Launch
+/// templates in the Amazon EC2 Auto Scaling User Guide.
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct LaunchTemplateSpecification {
@@ -933,17 +928,17 @@ impl AsRef for ScalingActivityStatusCode {
}
}
-/// Describes information used to start an instance refresh.
-/// All properties are optional. However, if you specify a value for
-/// CheckpointDelay, you must also provide a value for
-/// CheckpointPercentages.
+/// Describes the preferences for an instance refresh.
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct RefreshPreferences {
/// The amount of capacity in the Auto Scaling group that must remain healthy during an instance
- /// refresh to allow the operation to continue, as a percentage of the desired capacity of
- /// the Auto Scaling group (rounded up to the nearest integer). The default is 90.
- ///
+ /// refresh to allow the operation to continue. The value is expressed as a percentage of
+ /// the desired capacity of the Auto Scaling group (rounded up to the nearest integer). The default
+ /// is 90.
+ /// Setting the minimum healthy percentage to 100 percent limits the rate of replacement
+ /// to one instance at a time. In contrast, setting it to 0 percent has the effect of
+ /// replacing all instances at the same time.
pub min_healthy_percentage: std::option::Option,
/// The number of seconds until a newly launched instance is configured and ready to use.
/// During this time, Amazon EC2 Auto Scaling does not immediately move on to the next replacement. The
@@ -963,6 +958,12 @@ pub struct RefreshPreferences {
/// CheckpointPercentages and not for CheckpointDelay, the
/// CheckpointDelay defaults to 3600 (1 hour).
pub checkpoint_delay: std::option::Option,
+ /// A boolean value that indicates whether skip matching is enabled. If true, then
+ /// Amazon EC2 Auto Scaling skips replacing instances that match the desired configuration. If no desired
+ /// configuration is specified, then it skips replacing instances that have the same
+ /// configuration that is already set on the group. The default is
+ /// false.
+ pub skip_matching: std::option::Option,
}
impl std::fmt::Debug for RefreshPreferences {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
@@ -971,6 +972,7 @@ impl std::fmt::Debug for RefreshPreferences {
formatter.field("instance_warmup", &self.instance_warmup);
formatter.field("checkpoint_percentages", &self.checkpoint_percentages);
formatter.field("checkpoint_delay", &self.checkpoint_delay);
+ formatter.field("skip_matching", &self.skip_matching);
formatter.finish()
}
}
@@ -984,12 +986,16 @@ pub mod refresh_preferences {
pub(crate) instance_warmup: std::option::Option,
pub(crate) checkpoint_percentages: std::option::Option>,
pub(crate) checkpoint_delay: std::option::Option,
+ pub(crate) skip_matching: std::option::Option,
}
impl Builder {
/// The amount of capacity in the Auto Scaling group that must remain healthy during an instance
- /// refresh to allow the operation to continue, as a percentage of the desired capacity of
- /// the Auto Scaling group (rounded up to the nearest integer). The default is 90.
- ///
+ /// refresh to allow the operation to continue. The value is expressed as a percentage of
+ /// the desired capacity of the Auto Scaling group (rounded up to the nearest integer). The default
+ /// is 90.
+ /// Setting the minimum healthy percentage to 100 percent limits the rate of replacement
+ /// to one instance at a time. In contrast, setting it to 0 percent has the effect of
+ /// replacing all instances at the same time.
pub fn min_healthy_percentage(mut self, input: i32) -> Self {
self.min_healthy_percentage = Some(input);
self
@@ -1036,6 +1042,19 @@ pub mod refresh_preferences {
self.checkpoint_delay = input;
self
}
+ /// A boolean value that indicates whether skip matching is enabled. If true, then
+ /// Amazon EC2 Auto Scaling skips replacing instances that match the desired configuration. If no desired
+ /// configuration is specified, then it skips replacing instances that have the same
+ /// configuration that is already set on the group. The default is
+ /// false.
+ pub fn skip_matching(mut self, input: bool) -> Self {
+ self.skip_matching = Some(input);
+ self
+ }
+ pub fn set_skip_matching(mut self, input: std::option::Option) -> Self {
+ self.skip_matching = input;
+ self
+ }
/// Consumes the builder and constructs a [`RefreshPreferences`](crate::model::RefreshPreferences)
pub fn build(self) -> crate::model::RefreshPreferences {
crate::model::RefreshPreferences {
@@ -1043,6 +1062,7 @@ pub mod refresh_preferences {
instance_warmup: self.instance_warmup,
checkpoint_percentages: self.checkpoint_percentages,
checkpoint_delay: self.checkpoint_delay,
+ skip_matching: self.skip_matching,
}
}
}
@@ -1054,6 +1074,87 @@ impl RefreshPreferences {
}
}
+/// Describes the desired configuration for an instance refresh.
+/// If you specify a desired configuration, you must specify either a
+/// LaunchTemplate or a MixedInstancesPolicy.
+#[non_exhaustive]
+#[derive(std::clone::Clone, std::cmp::PartialEq)]
+pub struct DesiredConfiguration {
+ /// Describes the launch template and the version of the launch template that Amazon EC2 Auto Scaling
+ /// uses to launch Amazon EC2 instances. For more information about launch templates, see Launch
+ /// templates in the Amazon EC2 Auto Scaling User Guide.
+ pub launch_template: std::option::Option,
+ /// Describes a mixed instances policy. A mixed instances policy contains the instance
+ /// types Amazon EC2 Auto Scaling can launch, and other information Amazon EC2 Auto Scaling can use to launch instances to
+ /// help you optimize your costs. For more information, see Auto Scaling groups with multiple
+ /// instance types and purchase options in the Amazon EC2 Auto Scaling User
+ /// Guide.
+ pub mixed_instances_policy: std::option::Option,
+}
+impl std::fmt::Debug for DesiredConfiguration {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ let mut formatter = f.debug_struct("DesiredConfiguration");
+ formatter.field("launch_template", &self.launch_template);
+ formatter.field("mixed_instances_policy", &self.mixed_instances_policy);
+ formatter.finish()
+ }
+}
+/// See [`DesiredConfiguration`](crate::model::DesiredConfiguration)
+pub mod desired_configuration {
+ /// A builder for [`DesiredConfiguration`](crate::model::DesiredConfiguration)
+ #[non_exhaustive]
+ #[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
+ pub struct Builder {
+ pub(crate) launch_template: std::option::Option,
+ pub(crate) mixed_instances_policy: std::option::Option,
+ }
+ impl Builder {
+ /// Describes the launch template and the version of the launch template that Amazon EC2 Auto Scaling
+ /// uses to launch Amazon EC2 instances. For more information about launch templates, see Launch
+ /// templates in the Amazon EC2 Auto Scaling User Guide.
+ pub fn launch_template(mut self, input: crate::model::LaunchTemplateSpecification) -> Self {
+ self.launch_template = Some(input);
+ self
+ }
+ pub fn set_launch_template(
+ mut self,
+ input: std::option::Option,
+ ) -> Self {
+ self.launch_template = input;
+ self
+ }
+ /// Describes a mixed instances policy. A mixed instances policy contains the instance
+ /// types Amazon EC2 Auto Scaling can launch, and other information Amazon EC2 Auto Scaling can use to launch instances to
+ /// help you optimize your costs. For more information, see Auto Scaling groups with multiple
+ /// instance types and purchase options in the Amazon EC2 Auto Scaling User
+ /// Guide.
+ pub fn mixed_instances_policy(mut self, input: crate::model::MixedInstancesPolicy) -> Self {
+ self.mixed_instances_policy = Some(input);
+ self
+ }
+ pub fn set_mixed_instances_policy(
+ mut self,
+ input: std::option::Option,
+ ) -> Self {
+ self.mixed_instances_policy = input;
+ self
+ }
+ /// Consumes the builder and constructs a [`DesiredConfiguration`](crate::model::DesiredConfiguration)
+ pub fn build(self) -> crate::model::DesiredConfiguration {
+ crate::model::DesiredConfiguration {
+ launch_template: self.launch_template,
+ mixed_instances_policy: self.mixed_instances_policy,
+ }
+ }
+ }
+}
+impl DesiredConfiguration {
+ /// Creates a new builder-style object to manufacture [`DesiredConfiguration`](crate::model::DesiredConfiguration)
+ pub fn builder() -> crate::model::desired_configuration::Builder {
+ crate::model::desired_configuration::Builder::default()
+ }
+}
+
#[non_exhaustive]
#[derive(
std::clone::Clone,
@@ -6512,6 +6613,10 @@ pub struct InstanceRefresh {
pub instances_to_update: std::option::Option,
/// Additional progress details for an Auto Scaling group that has a warm pool.
pub progress_details: std::option::Option,
+ /// Describes the preferences for an instance refresh.
+ pub preferences: std::option::Option,
+ /// Describes the specific update you want to deploy.
+ pub desired_configuration: std::option::Option,
}
impl std::fmt::Debug for InstanceRefresh {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
@@ -6525,6 +6630,8 @@ impl std::fmt::Debug for InstanceRefresh {
formatter.field("percentage_complete", &self.percentage_complete);
formatter.field("instances_to_update", &self.instances_to_update);
formatter.field("progress_details", &self.progress_details);
+ formatter.field("preferences", &self.preferences);
+ formatter.field("desired_configuration", &self.desired_configuration);
formatter.finish()
}
}
@@ -6544,6 +6651,8 @@ pub mod instance_refresh {
pub(crate) instances_to_update: std::option::Option,
pub(crate) progress_details:
std::option::Option,
+ pub(crate) preferences: std::option::Option,
+ pub(crate) desired_configuration: std::option::Option,
}
impl Builder {
/// The instance refresh ID.
@@ -6679,6 +6788,30 @@ pub mod instance_refresh {
self.progress_details = input;
self
}
+ /// Describes the preferences for an instance refresh.
+ pub fn preferences(mut self, input: crate::model::RefreshPreferences) -> Self {
+ self.preferences = Some(input);
+ self
+ }
+ pub fn set_preferences(
+ mut self,
+ input: std::option::Option,
+ ) -> Self {
+ self.preferences = input;
+ self
+ }
+ /// Describes the specific update you want to deploy.
+ pub fn desired_configuration(mut self, input: crate::model::DesiredConfiguration) -> Self {
+ self.desired_configuration = Some(input);
+ self
+ }
+ pub fn set_desired_configuration(
+ mut self,
+ input: std::option::Option,
+ ) -> Self {
+ self.desired_configuration = input;
+ self
+ }
/// Consumes the builder and constructs a [`InstanceRefresh`](crate::model::InstanceRefresh)
pub fn build(self) -> crate::model::InstanceRefresh {
crate::model::InstanceRefresh {
@@ -6691,6 +6824,8 @@ pub mod instance_refresh {
percentage_complete: self.percentage_complete,
instances_to_update: self.instances_to_update,
progress_details: self.progress_details,
+ preferences: self.preferences,
+ desired_configuration: self.desired_configuration,
}
}
}
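To make the checkpoint semantics above concrete, a sketch of preferences that pause the refresh at fixed milestones. It assumes the checkpoint_percentages adder follows this codegen's usual one-call-per-element convention for list members.

```rust
use aws_sdk_autoscaling::model::RefreshPreferences;

// Sketch only: pauses after 25, 50, and 100 percent of instances are
// replaced, waiting 10 minutes at each checkpoint.
fn checkpointed_preferences() -> RefreshPreferences {
    RefreshPreferences::builder()
        // Keep at least 90 percent of desired capacity healthy (the default).
        .min_healthy_percentage(90)
        .checkpoint_percentages(25)
        .checkpoint_percentages(50)
        .checkpoint_percentages(100)
        // Without this, CheckpointDelay defaults to 3600 seconds (1 hour).
        .checkpoint_delay(600)
        .build()
}
```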
diff --git a/sdk/autoscaling/src/operation.rs b/sdk/autoscaling/src/operation.rs
index 37798919167b..ebc8077c9b31 100644
--- a/sdk/autoscaling/src/operation.rs
+++ b/sdk/autoscaling/src/operation.rs
@@ -1999,11 +1999,16 @@ impl smithy_http::response::ParseStrictResponse for SetInstanceProtection {
}
}
-/// Starts a new instance refresh operation, which triggers a rolling replacement of
-/// previously launched instances in the Auto Scaling group with a new group of instances.
+/// Starts a new instance refresh operation. An instance refresh performs a rolling
+/// replacement of all or some instances in an Auto Scaling group. Each instance is terminated first
+/// and then replaced, which temporarily reduces the capacity available within your Auto Scaling
+/// group.
/// This operation is part of the instance refresh
-/// feature in Amazon EC2 Auto Scaling, which helps you update instances in your Auto Scaling group
-/// after you make configuration changes.
+/// feature in Amazon EC2 Auto Scaling, which helps you update instances in your Auto Scaling group.
+/// This feature is helpful, for example, when you have a new AMI or a new user data script.
+/// You just need to create a new launch template that specifies the new AMI or user data
+/// script. Then start an instance refresh to immediately begin the process of updating
+/// instances in the group.
/// If the call succeeds, it creates a new instance refresh request with a unique ID that
/// you can use to track its progress. To query its status, call the DescribeInstanceRefreshes API. To describe the instance refreshes that
/// have already run, call the DescribeInstanceRefreshes API. To cancel an
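Since the doc comment points at DescribeInstanceRefreshes for progress tracking, a sketch of that polling call; the output field names are assumed to follow the InstanceRefresh shape shown earlier in this patch.

```rust
// Sketch only: `client` is an aws_sdk_autoscaling::Client built elsewhere.
async fn print_refresh_progress(
    client: &aws_sdk_autoscaling::Client,
    group: &str,
) -> Result<(), Box<dyn std::error::Error>> {
    let resp = client
        .describe_instance_refreshes()
        .auto_scaling_group_name(group)
        .send()
        .await?;
    for refresh in resp.instance_refreshes.unwrap_or_default() {
        // percentage_complete and instances_to_update appear on the
        // InstanceRefresh model in this same patch.
        println!(
            "refresh {:?}: {:?}% complete, {:?} instances left",
            refresh.instance_refresh_id,
            refresh.percentage_complete,
            refresh.instances_to_update,
        );
    }
    Ok(())
}
```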
diff --git a/sdk/autoscaling/src/operation_ser.rs b/sdk/autoscaling/src/operation_ser.rs
index 0d87860c3f70..2177df7ca428 100644
--- a/sdk/autoscaling/src/operation_ser.rs
+++ b/sdk/autoscaling/src/operation_ser.rs
@@ -1964,9 +1964,14 @@ pub fn serialize_operation_start_instance_refresh(
scope_517.string(var_518.as_str());
}
#[allow(unused_mut)]
- let mut scope_519 = writer.prefix("Preferences");
- if let Some(var_520) = &input.preferences {
- crate::query_ser::serialize_structure_refresh_preferences(scope_519, var_520);
+ let mut scope_519 = writer.prefix("DesiredConfiguration");
+ if let Some(var_520) = &input.desired_configuration {
+ crate::query_ser::serialize_structure_desired_configuration(scope_519, var_520);
+ }
+ #[allow(unused_mut)]
+ let mut scope_521 = writer.prefix("Preferences");
+ if let Some(var_522) = &input.preferences {
+ crate::query_ser::serialize_structure_refresh_preferences(scope_521, var_522);
}
writer.finish();
Ok(smithy_http::body::SdkBody::from(out))
@@ -1979,20 +1984,20 @@ pub fn serialize_operation_suspend_processes(
#[allow(unused_mut)]
let mut writer = smithy_query::QueryWriter::new(&mut out, "SuspendProcesses", "2011-01-01");
#[allow(unused_mut)]
- let mut scope_521 = writer.prefix("AutoScalingGroupName");
- if let Some(var_522) = &input.auto_scaling_group_name {
- scope_521.string(var_522);
+ let mut scope_523 = writer.prefix("AutoScalingGroupName");
+ if let Some(var_524) = &input.auto_scaling_group_name {
+ scope_523.string(var_524);
}
#[allow(unused_mut)]
- let mut scope_523 = writer.prefix("ScalingProcesses");
- if let Some(var_524) = &input.scaling_processes {
- let mut list_526 = scope_523.start_list(false, None);
- for item_525 in var_524 {
+ let mut scope_525 = writer.prefix("ScalingProcesses");
+ if let Some(var_526) = &input.scaling_processes {
+ let mut list_528 = scope_525.start_list(false, None);
+ for item_527 in var_526 {
#[allow(unused_mut)]
- let mut entry_527 = list_526.entry();
- entry_527.string(item_525);
+ let mut entry_529 = list_528.entry();
+ entry_529.string(item_527);
}
- list_526.finish();
+ list_528.finish();
}
writer.finish();
Ok(smithy_http::body::SdkBody::from(out))
@@ -2009,14 +2014,14 @@ pub fn serialize_operation_terminate_instance_in_auto_scaling_group(
"2011-01-01",
);
#[allow(unused_mut)]
- let mut scope_528 = writer.prefix("InstanceId");
- if let Some(var_529) = &input.instance_id {
- scope_528.string(var_529);
+ let mut scope_530 = writer.prefix("InstanceId");
+ if let Some(var_531) = &input.instance_id {
+ scope_530.string(var_531);
}
#[allow(unused_mut)]
- let mut scope_530 = writer.prefix("ShouldDecrementDesiredCapacity");
- if let Some(var_531) = &input.should_decrement_desired_capacity {
- scope_530.boolean(*var_531);
+ let mut scope_532 = writer.prefix("ShouldDecrementDesiredCapacity");
+ if let Some(var_533) = &input.should_decrement_desired_capacity {
+ scope_532.boolean(*var_533);
}
writer.finish();
Ok(smithy_http::body::SdkBody::from(out))
@@ -2030,129 +2035,129 @@ pub fn serialize_operation_update_auto_scaling_group(
let mut writer =
smithy_query::QueryWriter::new(&mut out, "UpdateAutoScalingGroup", "2011-01-01");
#[allow(unused_mut)]
- let mut scope_532 = writer.prefix("AutoScalingGroupName");
- if let Some(var_533) = &input.auto_scaling_group_name {
- scope_532.string(var_533);
- }
- #[allow(unused_mut)]
- let mut scope_534 = writer.prefix("LaunchConfigurationName");
- if let Some(var_535) = &input.launch_configuration_name {
+ let mut scope_534 = writer.prefix("AutoScalingGroupName");
+ if let Some(var_535) = &input.auto_scaling_group_name {
scope_534.string(var_535);
}
#[allow(unused_mut)]
- let mut scope_536 = writer.prefix("LaunchTemplate");
- if let Some(var_537) = &input.launch_template {
- crate::query_ser::serialize_structure_launch_template_specification(scope_536, var_537);
+ let mut scope_536 = writer.prefix("LaunchConfigurationName");
+ if let Some(var_537) = &input.launch_configuration_name {
+ scope_536.string(var_537);
}
#[allow(unused_mut)]
- let mut scope_538 = writer.prefix("MixedInstancesPolicy");
- if let Some(var_539) = &input.mixed_instances_policy {
- crate::query_ser::serialize_structure_mixed_instances_policy(scope_538, var_539);
+ let mut scope_538 = writer.prefix("LaunchTemplate");
+ if let Some(var_539) = &input.launch_template {
+ crate::query_ser::serialize_structure_launch_template_specification(scope_538, var_539);
}
#[allow(unused_mut)]
- let mut scope_540 = writer.prefix("MinSize");
- if let Some(var_541) = &input.min_size {
- scope_540.number(
- #[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_541).into()),
- );
+ let mut scope_540 = writer.prefix("MixedInstancesPolicy");
+ if let Some(var_541) = &input.mixed_instances_policy {
+ crate::query_ser::serialize_structure_mixed_instances_policy(scope_540, var_541);
}
#[allow(unused_mut)]
- let mut scope_542 = writer.prefix("MaxSize");
- if let Some(var_543) = &input.max_size {
+ let mut scope_542 = writer.prefix("MinSize");
+ if let Some(var_543) = &input.min_size {
scope_542.number(
#[allow(clippy::useless_conversion)]
smithy_types::Number::NegInt((*var_543).into()),
);
}
#[allow(unused_mut)]
- let mut scope_544 = writer.prefix("DesiredCapacity");
- if let Some(var_545) = &input.desired_capacity {
+ let mut scope_544 = writer.prefix("MaxSize");
+ if let Some(var_545) = &input.max_size {
scope_544.number(
#[allow(clippy::useless_conversion)]
smithy_types::Number::NegInt((*var_545).into()),
);
}
#[allow(unused_mut)]
- let mut scope_546 = writer.prefix("DefaultCooldown");
- if let Some(var_547) = &input.default_cooldown {
+ let mut scope_546 = writer.prefix("DesiredCapacity");
+ if let Some(var_547) = &input.desired_capacity {
scope_546.number(
#[allow(clippy::useless_conversion)]
smithy_types::Number::NegInt((*var_547).into()),
);
}
#[allow(unused_mut)]
- let mut scope_548 = writer.prefix("AvailabilityZones");
- if let Some(var_549) = &input.availability_zones {
- let mut list_551 = scope_548.start_list(false, None);
- for item_550 in var_549 {
+ let mut scope_548 = writer.prefix("DefaultCooldown");
+ if let Some(var_549) = &input.default_cooldown {
+ scope_548.number(
+ #[allow(clippy::useless_conversion)]
+ smithy_types::Number::NegInt((*var_549).into()),
+ );
+ }
+ #[allow(unused_mut)]
+ let mut scope_550 = writer.prefix("AvailabilityZones");
+ if let Some(var_551) = &input.availability_zones {
+ let mut list_553 = scope_550.start_list(false, None);
+ for item_552 in var_551 {
#[allow(unused_mut)]
- let mut entry_552 = list_551.entry();
- entry_552.string(item_550);
+ let mut entry_554 = list_553.entry();
+ entry_554.string(item_552);
}
- list_551.finish();
+ list_553.finish();
}
#[allow(unused_mut)]
- let mut scope_553 = writer.prefix("HealthCheckType");
- if let Some(var_554) = &input.health_check_type {
- scope_553.string(var_554);
+ let mut scope_555 = writer.prefix("HealthCheckType");
+ if let Some(var_556) = &input.health_check_type {
+ scope_555.string(var_556);
}
#[allow(unused_mut)]
- let mut scope_555 = writer.prefix("HealthCheckGracePeriod");
- if let Some(var_556) = &input.health_check_grace_period {
- scope_555.number(
+ let mut scope_557 = writer.prefix("HealthCheckGracePeriod");
+ if let Some(var_558) = &input.health_check_grace_period {
+ scope_557.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_556).into()),
+ smithy_types::Number::NegInt((*var_558).into()),
);
}
#[allow(unused_mut)]
- let mut scope_557 = writer.prefix("PlacementGroup");
- if let Some(var_558) = &input.placement_group {
- scope_557.string(var_558);
+ let mut scope_559 = writer.prefix("PlacementGroup");
+ if let Some(var_560) = &input.placement_group {
+ scope_559.string(var_560);
}
#[allow(unused_mut)]
- let mut scope_559 = writer.prefix("VPCZoneIdentifier");
- if let Some(var_560) = &input.vpc_zone_identifier {
- scope_559.string(var_560);
+ let mut scope_561 = writer.prefix("VPCZoneIdentifier");
+ if let Some(var_562) = &input.vpc_zone_identifier {
+ scope_561.string(var_562);
}
#[allow(unused_mut)]
- let mut scope_561 = writer.prefix("TerminationPolicies");
- if let Some(var_562) = &input.termination_policies {
- let mut list_564 = scope_561.start_list(false, None);
- for item_563 in var_562 {
+ let mut scope_563 = writer.prefix("TerminationPolicies");
+ if let Some(var_564) = &input.termination_policies {
+ let mut list_566 = scope_563.start_list(false, None);
+ for item_565 in var_564 {
#[allow(unused_mut)]
- let mut entry_565 = list_564.entry();
- entry_565.string(item_563);
+ let mut entry_567 = list_566.entry();
+ entry_567.string(item_565);
}
- list_564.finish();
+ list_566.finish();
}
#[allow(unused_mut)]
- let mut scope_566 = writer.prefix("NewInstancesProtectedFromScaleIn");
- if let Some(var_567) = &input.new_instances_protected_from_scale_in {
- scope_566.boolean(*var_567);
+ let mut scope_568 = writer.prefix("NewInstancesProtectedFromScaleIn");
+ if let Some(var_569) = &input.new_instances_protected_from_scale_in {
+ scope_568.boolean(*var_569);
}
#[allow(unused_mut)]
- let mut scope_568 = writer.prefix("ServiceLinkedRoleARN");
- if let Some(var_569) = &input.service_linked_role_arn {
- scope_568.string(var_569);
+ let mut scope_570 = writer.prefix("ServiceLinkedRoleARN");
+ if let Some(var_571) = &input.service_linked_role_arn {
+ scope_570.string(var_571);
}
#[allow(unused_mut)]
- let mut scope_570 = writer.prefix("MaxInstanceLifetime");
- if let Some(var_571) = &input.max_instance_lifetime {
- scope_570.number(
+ let mut scope_572 = writer.prefix("MaxInstanceLifetime");
+ if let Some(var_573) = &input.max_instance_lifetime {
+ scope_572.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_571).into()),
+ smithy_types::Number::NegInt((*var_573).into()),
);
}
#[allow(unused_mut)]
- let mut scope_572 = writer.prefix("CapacityRebalance");
- if let Some(var_573) = &input.capacity_rebalance {
- scope_572.boolean(*var_573);
+ let mut scope_574 = writer.prefix("CapacityRebalance");
+ if let Some(var_575) = &input.capacity_rebalance {
+ scope_574.boolean(*var_575);
}
#[allow(unused_mut)]
- let mut scope_574 = writer.prefix("Context");
- if let Some(var_575) = &input.context {
- scope_574.string(var_575);
+ let mut scope_576 = writer.prefix("Context");
+ if let Some(var_577) = &input.context {
+ scope_576.string(var_577);
}
writer.finish();
Ok(smithy_http::body::SdkBody::from(out))
diff --git a/sdk/autoscaling/src/query_ser.rs b/sdk/autoscaling/src/query_ser.rs
index f3faae218e80..516d9bb0d595 100644
--- a/sdk/autoscaling/src/query_ser.rs
+++ b/sdk/autoscaling/src/query_ser.rs
@@ -365,49 +365,71 @@ pub fn serialize_structure_predictive_scaling_configuration(
}
}
+#[allow(unused_mut)]
+pub fn serialize_structure_desired_configuration(
+ mut writer: smithy_query::QueryValueWriter,
+ input: &crate::model::DesiredConfiguration,
+) {
+ #[allow(unused_mut)]
+ let mut scope_101 = writer.prefix("LaunchTemplate");
+ if let Some(var_102) = &input.launch_template {
+ crate::query_ser::serialize_structure_launch_template_specification(scope_101, var_102);
+ }
+ #[allow(unused_mut)]
+ let mut scope_103 = writer.prefix("MixedInstancesPolicy");
+ if let Some(var_104) = &input.mixed_instances_policy {
+ crate::query_ser::serialize_structure_mixed_instances_policy(scope_103, var_104);
+ }
+}
+
#[allow(unused_mut)]
pub fn serialize_structure_refresh_preferences(
mut writer: smithy_query::QueryValueWriter,
input: &crate::model::RefreshPreferences,
) {
#[allow(unused_mut)]
- let mut scope_101 = writer.prefix("MinHealthyPercentage");
- if let Some(var_102) = &input.min_healthy_percentage {
- scope_101.number(
+ let mut scope_105 = writer.prefix("MinHealthyPercentage");
+ if let Some(var_106) = &input.min_healthy_percentage {
+ scope_105.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_102).into()),
+ smithy_types::Number::NegInt((*var_106).into()),
);
}
#[allow(unused_mut)]
- let mut scope_103 = writer.prefix("InstanceWarmup");
- if let Some(var_104) = &input.instance_warmup {
- scope_103.number(
+ let mut scope_107 = writer.prefix("InstanceWarmup");
+ if let Some(var_108) = &input.instance_warmup {
+ scope_107.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_104).into()),
+ smithy_types::Number::NegInt((*var_108).into()),
);
}
#[allow(unused_mut)]
- let mut scope_105 = writer.prefix("CheckpointPercentages");
- if let Some(var_106) = &input.checkpoint_percentages {
- let mut list_108 = scope_105.start_list(false, None);
- for item_107 in var_106 {
+ let mut scope_109 = writer.prefix("CheckpointPercentages");
+ if let Some(var_110) = &input.checkpoint_percentages {
+ let mut list_112 = scope_109.start_list(false, None);
+ for item_111 in var_110 {
#[allow(unused_mut)]
- let mut entry_109 = list_108.entry();
- entry_109.number(
+ let mut entry_113 = list_112.entry();
+ entry_113.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*item_107).into()),
+ smithy_types::Number::NegInt((*item_111).into()),
);
}
- list_108.finish();
+ list_112.finish();
}
#[allow(unused_mut)]
- let mut scope_110 = writer.prefix("CheckpointDelay");
- if let Some(var_111) = &input.checkpoint_delay {
- scope_110.number(
+ let mut scope_114 = writer.prefix("CheckpointDelay");
+ if let Some(var_115) = &input.checkpoint_delay {
+ scope_114.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_111).into()),
+ smithy_types::Number::NegInt((*var_115).into()),
);
}
+ #[allow(unused_mut)]
+ let mut scope_116 = writer.prefix("SkipMatching");
+ if let Some(var_117) = &input.skip_matching {
+ scope_116.boolean(*var_117);
+ }
}
#[allow(unused_mut)]
@@ -416,20 +438,20 @@ pub fn serialize_structure_launch_template(
input: &crate::model::LaunchTemplate,
) {
#[allow(unused_mut)]
- let mut scope_112 = writer.prefix("LaunchTemplateSpecification");
- if let Some(var_113) = &input.launch_template_specification {
- crate::query_ser::serialize_structure_launch_template_specification(scope_112, var_113);
+ let mut scope_118 = writer.prefix("LaunchTemplateSpecification");
+ if let Some(var_119) = &input.launch_template_specification {
+ crate::query_ser::serialize_structure_launch_template_specification(scope_118, var_119);
}
#[allow(unused_mut)]
- let mut scope_114 = writer.prefix("Overrides");
- if let Some(var_115) = &input.overrides {
- let mut list_117 = scope_114.start_list(false, None);
- for item_116 in var_115 {
+ let mut scope_120 = writer.prefix("Overrides");
+ if let Some(var_121) = &input.overrides {
+ let mut list_123 = scope_120.start_list(false, None);
+ for item_122 in var_121 {
#[allow(unused_mut)]
- let mut entry_118 = list_117.entry();
- crate::query_ser::serialize_structure_launch_template_overrides(entry_118, item_116);
+ let mut entry_124 = list_123.entry();
+ crate::query_ser::serialize_structure_launch_template_overrides(entry_124, item_122);
}
- list_117.finish();
+ list_123.finish();
}
}
@@ -439,43 +461,43 @@ pub fn serialize_structure_instances_distribution(
input: &crate::model::InstancesDistribution,
) {
#[allow(unused_mut)]
- let mut scope_119 = writer.prefix("OnDemandAllocationStrategy");
- if let Some(var_120) = &input.on_demand_allocation_strategy {
- scope_119.string(var_120);
+ let mut scope_125 = writer.prefix("OnDemandAllocationStrategy");
+ if let Some(var_126) = &input.on_demand_allocation_strategy {
+ scope_125.string(var_126);
}
#[allow(unused_mut)]
- let mut scope_121 = writer.prefix("OnDemandBaseCapacity");
- if let Some(var_122) = &input.on_demand_base_capacity {
- scope_121.number(
+ let mut scope_127 = writer.prefix("OnDemandBaseCapacity");
+ if let Some(var_128) = &input.on_demand_base_capacity {
+ scope_127.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_122).into()),
+ smithy_types::Number::NegInt((*var_128).into()),
);
}
#[allow(unused_mut)]
- let mut scope_123 = writer.prefix("OnDemandPercentageAboveBaseCapacity");
- if let Some(var_124) = &input.on_demand_percentage_above_base_capacity {
- scope_123.number(
+ let mut scope_129 = writer.prefix("OnDemandPercentageAboveBaseCapacity");
+ if let Some(var_130) = &input.on_demand_percentage_above_base_capacity {
+ scope_129.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_124).into()),
+ smithy_types::Number::NegInt((*var_130).into()),
);
}
#[allow(unused_mut)]
- let mut scope_125 = writer.prefix("SpotAllocationStrategy");
- if let Some(var_126) = &input.spot_allocation_strategy {
- scope_125.string(var_126);
+ let mut scope_131 = writer.prefix("SpotAllocationStrategy");
+ if let Some(var_132) = &input.spot_allocation_strategy {
+ scope_131.string(var_132);
}
#[allow(unused_mut)]
- let mut scope_127 = writer.prefix("SpotInstancePools");
- if let Some(var_128) = &input.spot_instance_pools {
- scope_127.number(
+ let mut scope_133 = writer.prefix("SpotInstancePools");
+ if let Some(var_134) = &input.spot_instance_pools {
+ scope_133.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_128).into()),
+ smithy_types::Number::NegInt((*var_134).into()),
);
}
#[allow(unused_mut)]
- let mut scope_129 = writer.prefix("SpotMaxPrice");
- if let Some(var_130) = &input.spot_max_price {
- scope_129.string(var_130);
+ let mut scope_135 = writer.prefix("SpotMaxPrice");
+ if let Some(var_136) = &input.spot_max_price {
+ scope_135.string(var_136);
}
}
@@ -485,47 +507,47 @@ pub fn serialize_structure_ebs(
input: &crate::model::Ebs,
) {
#[allow(unused_mut)]
- let mut scope_131 = writer.prefix("SnapshotId");
- if let Some(var_132) = &input.snapshot_id {
- scope_131.string(var_132);
+ let mut scope_137 = writer.prefix("SnapshotId");
+ if let Some(var_138) = &input.snapshot_id {
+ scope_137.string(var_138);
}
#[allow(unused_mut)]
- let mut scope_133 = writer.prefix("VolumeSize");
- if let Some(var_134) = &input.volume_size {
- scope_133.number(
+ let mut scope_139 = writer.prefix("VolumeSize");
+ if let Some(var_140) = &input.volume_size {
+ scope_139.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_134).into()),
+ smithy_types::Number::NegInt((*var_140).into()),
);
}
#[allow(unused_mut)]
- let mut scope_135 = writer.prefix("VolumeType");
- if let Some(var_136) = &input.volume_type {
- scope_135.string(var_136);
+ let mut scope_141 = writer.prefix("VolumeType");
+ if let Some(var_142) = &input.volume_type {
+ scope_141.string(var_142);
}
#[allow(unused_mut)]
- let mut scope_137 = writer.prefix("DeleteOnTermination");
- if let Some(var_138) = &input.delete_on_termination {
- scope_137.boolean(*var_138);
+ let mut scope_143 = writer.prefix("DeleteOnTermination");
+ if let Some(var_144) = &input.delete_on_termination {
+ scope_143.boolean(*var_144);
}
#[allow(unused_mut)]
- let mut scope_139 = writer.prefix("Iops");
- if let Some(var_140) = &input.iops {
- scope_139.number(
+ let mut scope_145 = writer.prefix("Iops");
+ if let Some(var_146) = &input.iops {
+ scope_145.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_140).into()),
+ smithy_types::Number::NegInt((*var_146).into()),
);
}
#[allow(unused_mut)]
- let mut scope_141 = writer.prefix("Encrypted");
- if let Some(var_142) = &input.encrypted {
- scope_141.boolean(*var_142);
+ let mut scope_147 = writer.prefix("Encrypted");
+ if let Some(var_148) = &input.encrypted {
+ scope_147.boolean(*var_148);
}
#[allow(unused_mut)]
- let mut scope_143 = writer.prefix("Throughput");
- if let Some(var_144) = &input.throughput {
- scope_143.number(
+ let mut scope_149 = writer.prefix("Throughput");
+ if let Some(var_150) = &input.throughput {
+ scope_149.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::NegInt((*var_144).into()),
+ smithy_types::Number::NegInt((*var_150).into()),
);
}
}
@@ -536,14 +558,14 @@ pub fn serialize_structure_predefined_metric_specification(
input: &crate::model::PredefinedMetricSpecification,
) {
#[allow(unused_mut)]
- let mut scope_145 = writer.prefix("PredefinedMetricType");
- if let Some(var_146) = &input.predefined_metric_type {
- scope_145.string(var_146.as_str());
+ let mut scope_151 = writer.prefix("PredefinedMetricType");
+ if let Some(var_152) = &input.predefined_metric_type {
+ scope_151.string(var_152.as_str());
}
#[allow(unused_mut)]
- let mut scope_147 = writer.prefix("ResourceLabel");
- if let Some(var_148) = &input.resource_label {
- scope_147.string(var_148);
+ let mut scope_153 = writer.prefix("ResourceLabel");
+ if let Some(var_154) = &input.resource_label {
+ scope_153.string(var_154);
}
}
@@ -553,35 +575,35 @@ pub fn serialize_structure_customized_metric_specification(
input: &crate::model::CustomizedMetricSpecification,
) {
#[allow(unused_mut)]
- let mut scope_149 = writer.prefix("MetricName");
- if let Some(var_150) = &input.metric_name {
- scope_149.string(var_150);
+ let mut scope_155 = writer.prefix("MetricName");
+ if let Some(var_156) = &input.metric_name {
+ scope_155.string(var_156);
}
#[allow(unused_mut)]
- let mut scope_151 = writer.prefix("Namespace");
- if let Some(var_152) = &input.namespace {
- scope_151.string(var_152);
+ let mut scope_157 = writer.prefix("Namespace");
+ if let Some(var_158) = &input.namespace {
+ scope_157.string(var_158);
}
#[allow(unused_mut)]
- let mut scope_153 = writer.prefix("Dimensions");
- if let Some(var_154) = &input.dimensions {
- let mut list_156 = scope_153.start_list(false, None);
- for item_155 in var_154 {
+ let mut scope_159 = writer.prefix("Dimensions");
+ if let Some(var_160) = &input.dimensions {
+ let mut list_162 = scope_159.start_list(false, None);
+ for item_161 in var_160 {
#[allow(unused_mut)]
- let mut entry_157 = list_156.entry();
- crate::query_ser::serialize_structure_metric_dimension(entry_157, item_155);
+ let mut entry_163 = list_162.entry();
+ crate::query_ser::serialize_structure_metric_dimension(entry_163, item_161);
}
- list_156.finish();
+ list_162.finish();
}
#[allow(unused_mut)]
- let mut scope_158 = writer.prefix("Statistic");
- if let Some(var_159) = &input.statistic {
- scope_158.string(var_159.as_str());
+ let mut scope_164 = writer.prefix("Statistic");
+ if let Some(var_165) = &input.statistic {
+ scope_164.string(var_165.as_str());
}
#[allow(unused_mut)]
- let mut scope_160 = writer.prefix("Unit");
- if let Some(var_161) = &input.unit {
- scope_160.string(var_161);
+ let mut scope_166 = writer.prefix("Unit");
+ if let Some(var_167) = &input.unit {
+ scope_166.string(var_167);
}
}
@@ -591,32 +613,32 @@ pub fn serialize_structure_predictive_scaling_metric_specification(
input: &crate::model::PredictiveScalingMetricSpecification,
) {
#[allow(unused_mut)]
- let mut scope_162 = writer.prefix("TargetValue");
- if let Some(var_163) = &input.target_value {
- scope_162.number(
+ let mut scope_168 = writer.prefix("TargetValue");
+ if let Some(var_169) = &input.target_value {
+ scope_168.number(
#[allow(clippy::useless_conversion)]
- smithy_types::Number::Float((*var_163).into()),
+ smithy_types::Number::Float((*var_169).into()),
);
}
#[allow(unused_mut)]
- let mut scope_164 = writer.prefix("PredefinedMetricPairSpecification");
- if let Some(var_165) = &input.predefined_metric_pair_specification {
+ let mut scope_170 = writer.prefix("PredefinedMetricPairSpecification");
+ if let Some(var_171) = &input.predefined_metric_pair_specification {
crate::query_ser::serialize_structure_predictive_scaling_predefined_metric_pair(
- scope_164, var_165,
+ scope_170, var_171,
);
}
#[allow(unused_mut)]
- let mut scope_166 = writer.prefix("PredefinedScalingMetricSpecification");
- if let Some(var_167) = &input.predefined_scaling_metric_specification {
+ let mut scope_172 = writer.prefix("PredefinedScalingMetricSpecification");
+ if let Some(var_173) = &input.predefined_scaling_metric_specification {
crate::query_ser::serialize_structure_predictive_scaling_predefined_scaling_metric(
- scope_166, var_167,
+ scope_172, var_173,
);
}
#[allow(unused_mut)]
- let mut scope_168 = writer.prefix("PredefinedLoadMetricSpecification");
- if let Some(var_169) = &input.predefined_load_metric_specification {
+ let mut scope_174 = writer.prefix("PredefinedLoadMetricSpecification");
+ if let Some(var_175) = &input.predefined_load_metric_specification {
crate::query_ser::serialize_structure_predictive_scaling_predefined_load_metric(
- scope_168, var_169,
+ scope_174, var_175,
);
}
}
@@ -627,19 +649,19 @@ pub fn serialize_structure_launch_template_overrides(
input: &crate::model::LaunchTemplateOverrides,
) {
#[allow(unused_mut)]
- let mut scope_170 = writer.prefix("InstanceType");
- if let Some(var_171) = &input.instance_type {
- scope_170.string(var_171);
+ let mut scope_176 = writer.prefix("InstanceType");
+ if let Some(var_177) = &input.instance_type {
+ scope_176.string(var_177);
}
#[allow(unused_mut)]
- let mut scope_172 = writer.prefix("WeightedCapacity");
- if let Some(var_173) = &input.weighted_capacity {
- scope_172.string(var_173);
+ let mut scope_178 = writer.prefix("WeightedCapacity");
+ if let Some(var_179) = &input.weighted_capacity {
+ scope_178.string(var_179);
}
#[allow(unused_mut)]
- let mut scope_174 = writer.prefix("LaunchTemplateSpecification");
- if let Some(var_175) = &input.launch_template_specification {
- crate::query_ser::serialize_structure_launch_template_specification(scope_174, var_175);
+ let mut scope_180 = writer.prefix("LaunchTemplateSpecification");
+ if let Some(var_181) = &input.launch_template_specification {
+ crate::query_ser::serialize_structure_launch_template_specification(scope_180, var_181);
}
}
@@ -649,14 +671,14 @@ pub fn serialize_structure_metric_dimension(
input: &crate::model::MetricDimension,
) {
#[allow(unused_mut)]
- let mut scope_176 = writer.prefix("Name");
- if let Some(var_177) = &input.name {
- scope_176.string(var_177);
+ let mut scope_182 = writer.prefix("Name");
+ if let Some(var_183) = &input.name {
+ scope_182.string(var_183);
}
#[allow(unused_mut)]
- let mut scope_178 = writer.prefix("Value");
- if let Some(var_179) = &input.value {
- scope_178.string(var_179);
+ let mut scope_184 = writer.prefix("Value");
+ if let Some(var_185) = &input.value {
+ scope_184.string(var_185);
}
}
@@ -666,14 +688,14 @@ pub fn serialize_structure_predictive_scaling_predefined_metric_pair(
input: &crate::model::PredictiveScalingPredefinedMetricPair,
) {
#[allow(unused_mut)]
- let mut scope_180 = writer.prefix("PredefinedMetricType");
- if let Some(var_181) = &input.predefined_metric_type {
- scope_180.string(var_181.as_str());
+ let mut scope_186 = writer.prefix("PredefinedMetricType");
+ if let Some(var_187) = &input.predefined_metric_type {
+ scope_186.string(var_187.as_str());
}
#[allow(unused_mut)]
- let mut scope_182 = writer.prefix("ResourceLabel");
- if let Some(var_183) = &input.resource_label {
- scope_182.string(var_183);
+ let mut scope_188 = writer.prefix("ResourceLabel");
+ if let Some(var_189) = &input.resource_label {
+ scope_188.string(var_189);
}
}
@@ -683,14 +705,14 @@ pub fn serialize_structure_predictive_scaling_predefined_scaling_metric(
input: &crate::model::PredictiveScalingPredefinedScalingMetric,
) {
#[allow(unused_mut)]
- let mut scope_184 = writer.prefix("PredefinedMetricType");
- if let Some(var_185) = &input.predefined_metric_type {
- scope_184.string(var_185.as_str());
+ let mut scope_190 = writer.prefix("PredefinedMetricType");
+ if let Some(var_191) = &input.predefined_metric_type {
+ scope_190.string(var_191.as_str());
}
#[allow(unused_mut)]
- let mut scope_186 = writer.prefix("ResourceLabel");
- if let Some(var_187) = &input.resource_label {
- scope_186.string(var_187);
+ let mut scope_192 = writer.prefix("ResourceLabel");
+ if let Some(var_193) = &input.resource_label {
+ scope_192.string(var_193);
}
}
@@ -700,13 +722,13 @@ pub fn serialize_structure_predictive_scaling_predefined_load_metric(
input: &crate::model::PredictiveScalingPredefinedLoadMetric,
) {
#[allow(unused_mut)]
- let mut scope_188 = writer.prefix("PredefinedMetricType");
- if let Some(var_189) = &input.predefined_metric_type {
- scope_188.string(var_189.as_str());
+ let mut scope_194 = writer.prefix("PredefinedMetricType");
+ if let Some(var_195) = &input.predefined_metric_type {
+ scope_194.string(var_195.as_str());
}
#[allow(unused_mut)]
- let mut scope_190 = writer.prefix("ResourceLabel");
- if let Some(var_191) = &input.resource_label {
- scope_190.string(var_191);
+ let mut scope_196 = writer.prefix("ResourceLabel");
+ if let Some(var_197) = &input.resource_label {
+ scope_196.string(var_197);
}
}
diff --git a/sdk/autoscaling/src/xml_deser.rs b/sdk/autoscaling/src/xml_deser.rs
index 515bcb4e0076..1aa2c077f992 100644
--- a/sdk/autoscaling/src/xml_deser.rs
+++ b/sdk/autoscaling/src/xml_deser.rs
@@ -3518,6 +3518,26 @@ pub fn deser_structure_instance_refresh(
builder = builder.set_progress_details(var_131);
}
,
+ s if s.matches("Preferences") /* Preferences com.amazonaws.autoscaling#InstanceRefresh$Preferences */ => {
+ let var_132 =
+ Some(
+ crate::xml_deser::deser_structure_refresh_preferences(&mut tag)
+ ?
+ )
+ ;
+ builder = builder.set_preferences(var_132);
+ }
+ ,
+ s if s.matches("DesiredConfiguration") /* DesiredConfiguration com.amazonaws.autoscaling#InstanceRefresh$DesiredConfiguration */ => {
+ let var_133 =
+ Some(
+ crate::xml_deser::deser_structure_desired_configuration(&mut tag)
+ ?
+ )
+ ;
+ builder = builder.set_desired_configuration(var_133);
+ }
+ ,
_ => {}
}
}
@@ -3532,7 +3552,7 @@ pub fn deser_structure_launch_configuration(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("LaunchConfigurationName") /* LaunchConfigurationName com.amazonaws.autoscaling#LaunchConfiguration$LaunchConfigurationName */ => {
- let var_132 =
+ let var_134 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3541,11 +3561,11 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_launch_configuration_name(var_132);
+ builder = builder.set_launch_configuration_name(var_134);
}
,
s if s.matches("LaunchConfigurationARN") /* LaunchConfigurationARN com.amazonaws.autoscaling#LaunchConfiguration$LaunchConfigurationARN */ => {
- let var_133 =
+ let var_135 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3554,11 +3574,11 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_launch_configuration_arn(var_133);
+ builder = builder.set_launch_configuration_arn(var_135);
}
,
s if s.matches("ImageId") /* ImageId com.amazonaws.autoscaling#LaunchConfiguration$ImageId */ => {
- let var_134 =
+ let var_136 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3567,11 +3587,11 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_image_id(var_134);
+ builder = builder.set_image_id(var_136);
}
,
s if s.matches("KeyName") /* KeyName com.amazonaws.autoscaling#LaunchConfiguration$KeyName */ => {
- let var_135 =
+ let var_137 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3580,21 +3600,21 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_key_name(var_135);
+ builder = builder.set_key_name(var_137);
}
,
s if s.matches("SecurityGroups") /* SecurityGroups com.amazonaws.autoscaling#LaunchConfiguration$SecurityGroups */ => {
- let var_136 =
+ let var_138 =
Some(
crate::xml_deser::deser_list_security_groups(&mut tag)
?
)
;
- builder = builder.set_security_groups(var_136);
+ builder = builder.set_security_groups(var_138);
}
,
s if s.matches("ClassicLinkVPCId") /* ClassicLinkVPCId com.amazonaws.autoscaling#LaunchConfiguration$ClassicLinkVPCId */ => {
- let var_137 =
+ let var_139 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3603,21 +3623,21 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_classic_link_vpc_id(var_137);
+ builder = builder.set_classic_link_vpc_id(var_139);
}
,
s if s.matches("ClassicLinkVPCSecurityGroups") /* ClassicLinkVPCSecurityGroups com.amazonaws.autoscaling#LaunchConfiguration$ClassicLinkVPCSecurityGroups */ => {
- let var_138 =
+ let var_140 =
Some(
crate::xml_deser::deser_list_classic_link_vpc_security_groups(&mut tag)
?
)
;
- builder = builder.set_classic_link_vpc_security_groups(var_138);
+ builder = builder.set_classic_link_vpc_security_groups(var_140);
}
,
s if s.matches("UserData") /* UserData com.amazonaws.autoscaling#LaunchConfiguration$UserData */ => {
- let var_139 =
+ let var_141 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3626,11 +3646,11 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_user_data(var_139);
+ builder = builder.set_user_data(var_141);
}
,
s if s.matches("InstanceType") /* InstanceType com.amazonaws.autoscaling#LaunchConfiguration$InstanceType */ => {
- let var_140 =
+ let var_142 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3639,11 +3659,11 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_instance_type(var_140);
+ builder = builder.set_instance_type(var_142);
}
,
s if s.matches("KernelId") /* KernelId com.amazonaws.autoscaling#LaunchConfiguration$KernelId */ => {
- let var_141 =
+ let var_143 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3652,11 +3672,11 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_kernel_id(var_141);
+ builder = builder.set_kernel_id(var_143);
}
,
s if s.matches("RamdiskId") /* RamdiskId com.amazonaws.autoscaling#LaunchConfiguration$RamdiskId */ => {
- let var_142 =
+ let var_144 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3665,31 +3685,31 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_ramdisk_id(var_142);
+ builder = builder.set_ramdisk_id(var_144);
}
,
s if s.matches("BlockDeviceMappings") /* BlockDeviceMappings com.amazonaws.autoscaling#LaunchConfiguration$BlockDeviceMappings */ => {
- let var_143 =
+ let var_145 =
Some(
crate::xml_deser::deser_list_block_device_mappings(&mut tag)
?
)
;
- builder = builder.set_block_device_mappings(var_143);
+ builder = builder.set_block_device_mappings(var_145);
}
,
s if s.matches("InstanceMonitoring") /* InstanceMonitoring com.amazonaws.autoscaling#LaunchConfiguration$InstanceMonitoring */ => {
- let var_144 =
+ let var_146 =
Some(
crate::xml_deser::deser_structure_instance_monitoring(&mut tag)
?
)
;
- builder = builder.set_instance_monitoring(var_144);
+ builder = builder.set_instance_monitoring(var_146);
}
,
s if s.matches("SpotPrice") /* SpotPrice com.amazonaws.autoscaling#LaunchConfiguration$SpotPrice */ => {
- let var_145 =
+ let var_147 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3698,11 +3718,11 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_spot_price(var_145);
+ builder = builder.set_spot_price(var_147);
}
,
s if s.matches("IamInstanceProfile") /* IamInstanceProfile com.amazonaws.autoscaling#LaunchConfiguration$IamInstanceProfile */ => {
- let var_146 =
+ let var_148 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3711,11 +3731,11 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_iam_instance_profile(var_146);
+ builder = builder.set_iam_instance_profile(var_148);
}
,
s if s.matches("CreatedTime") /* CreatedTime com.amazonaws.autoscaling#LaunchConfiguration$CreatedTime */ => {
- let var_147 =
+ let var_149 =
Some(
smithy_types::Instant::from_str(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3725,11 +3745,11 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_created_time(var_147);
+ builder = builder.set_created_time(var_149);
}
,
s if s.matches("EbsOptimized") /* EbsOptimized com.amazonaws.autoscaling#LaunchConfiguration$EbsOptimized */ => {
- let var_148 =
+ let var_150 =
Some(
{
                    <bool as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -3740,11 +3760,11 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_ebs_optimized(var_148);
+ builder = builder.set_ebs_optimized(var_150);
}
,
s if s.matches("AssociatePublicIpAddress") /* AssociatePublicIpAddress com.amazonaws.autoscaling#LaunchConfiguration$AssociatePublicIpAddress */ => {
- let var_149 =
+ let var_151 =
Some(
{
                    <bool as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -3755,11 +3775,11 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_associate_public_ip_address(var_149);
+ builder = builder.set_associate_public_ip_address(var_151);
}
,
s if s.matches("PlacementTenancy") /* PlacementTenancy com.amazonaws.autoscaling#LaunchConfiguration$PlacementTenancy */ => {
- let var_150 =
+ let var_152 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3768,17 +3788,17 @@ pub fn deser_structure_launch_configuration(
?
)
;
- builder = builder.set_placement_tenancy(var_150);
+ builder = builder.set_placement_tenancy(var_152);
}
,
s if s.matches("MetadataOptions") /* MetadataOptions com.amazonaws.autoscaling#LaunchConfiguration$MetadataOptions */ => {
- let var_151 =
+ let var_153 =
Some(
crate::xml_deser::deser_structure_instance_metadata_options(&mut tag)
?
)
;
- builder = builder.set_metadata_options(var_151);
+ builder = builder.set_metadata_options(var_153);
}
,
_ => {}
@@ -3795,7 +3815,7 @@ pub fn deser_structure_lifecycle_hook(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("LifecycleHookName") /* LifecycleHookName com.amazonaws.autoscaling#LifecycleHook$LifecycleHookName */ => {
- let var_152 =
+ let var_154 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3804,11 +3824,11 @@ pub fn deser_structure_lifecycle_hook(
?
)
;
- builder = builder.set_lifecycle_hook_name(var_152);
+ builder = builder.set_lifecycle_hook_name(var_154);
}
,
s if s.matches("AutoScalingGroupName") /* AutoScalingGroupName com.amazonaws.autoscaling#LifecycleHook$AutoScalingGroupName */ => {
- let var_153 =
+ let var_155 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3817,11 +3837,11 @@ pub fn deser_structure_lifecycle_hook(
?
)
;
- builder = builder.set_auto_scaling_group_name(var_153);
+ builder = builder.set_auto_scaling_group_name(var_155);
}
,
s if s.matches("LifecycleTransition") /* LifecycleTransition com.amazonaws.autoscaling#LifecycleHook$LifecycleTransition */ => {
- let var_154 =
+ let var_156 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3830,11 +3850,11 @@ pub fn deser_structure_lifecycle_hook(
?
)
;
- builder = builder.set_lifecycle_transition(var_154);
+ builder = builder.set_lifecycle_transition(var_156);
}
,
s if s.matches("NotificationTargetARN") /* NotificationTargetARN com.amazonaws.autoscaling#LifecycleHook$NotificationTargetARN */ => {
- let var_155 =
+ let var_157 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3843,11 +3863,11 @@ pub fn deser_structure_lifecycle_hook(
?
)
;
- builder = builder.set_notification_target_arn(var_155);
+ builder = builder.set_notification_target_arn(var_157);
}
,
s if s.matches("RoleARN") /* RoleARN com.amazonaws.autoscaling#LifecycleHook$RoleARN */ => {
- let var_156 =
+ let var_158 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3856,11 +3876,11 @@ pub fn deser_structure_lifecycle_hook(
?
)
;
- builder = builder.set_role_arn(var_156);
+ builder = builder.set_role_arn(var_158);
}
,
s if s.matches("NotificationMetadata") /* NotificationMetadata com.amazonaws.autoscaling#LifecycleHook$NotificationMetadata */ => {
- let var_157 =
+ let var_159 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3869,11 +3889,11 @@ pub fn deser_structure_lifecycle_hook(
?
)
;
- builder = builder.set_notification_metadata(var_157);
+ builder = builder.set_notification_metadata(var_159);
}
,
s if s.matches("HeartbeatTimeout") /* HeartbeatTimeout com.amazonaws.autoscaling#LifecycleHook$HeartbeatTimeout */ => {
- let var_158 =
+ let var_160 =
Some(
{
                    <i32 as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -3884,11 +3904,11 @@ pub fn deser_structure_lifecycle_hook(
?
)
;
- builder = builder.set_heartbeat_timeout(var_158);
+ builder = builder.set_heartbeat_timeout(var_160);
}
,
s if s.matches("GlobalTimeout") /* GlobalTimeout com.amazonaws.autoscaling#LifecycleHook$GlobalTimeout */ => {
- let var_159 =
+ let var_161 =
Some(
{
                    <i32 as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -3899,11 +3919,11 @@ pub fn deser_structure_lifecycle_hook(
?
)
;
- builder = builder.set_global_timeout(var_159);
+ builder = builder.set_global_timeout(var_161);
}
,
s if s.matches("DefaultResult") /* DefaultResult com.amazonaws.autoscaling#LifecycleHook$DefaultResult */ => {
- let var_160 =
+ let var_162 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3912,7 +3932,7 @@ pub fn deser_structure_lifecycle_hook(
?
)
;
- builder = builder.set_default_result(var_160);
+ builder = builder.set_default_result(var_162);
}
,
_ => {}
@@ -3929,7 +3949,7 @@ pub fn deser_structure_load_balancer_state(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("LoadBalancerName") /* LoadBalancerName com.amazonaws.autoscaling#LoadBalancerState$LoadBalancerName */ => {
- let var_161 =
+ let var_163 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3938,11 +3958,11 @@ pub fn deser_structure_load_balancer_state(
?
)
;
- builder = builder.set_load_balancer_name(var_161);
+ builder = builder.set_load_balancer_name(var_163);
}
,
s if s.matches("State") /* State com.amazonaws.autoscaling#LoadBalancerState$State */ => {
- let var_162 =
+ let var_164 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3951,7 +3971,7 @@ pub fn deser_structure_load_balancer_state(
?
)
;
- builder = builder.set_state(var_162);
+ builder = builder.set_state(var_164);
}
,
_ => {}
@@ -3968,7 +3988,7 @@ pub fn deser_structure_load_balancer_target_group_state(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("LoadBalancerTargetGroupARN") /* LoadBalancerTargetGroupARN com.amazonaws.autoscaling#LoadBalancerTargetGroupState$LoadBalancerTargetGroupARN */ => {
- let var_163 =
+ let var_165 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3977,11 +3997,11 @@ pub fn deser_structure_load_balancer_target_group_state(
?
)
;
- builder = builder.set_load_balancer_target_group_arn(var_163);
+ builder = builder.set_load_balancer_target_group_arn(var_165);
}
,
s if s.matches("State") /* State com.amazonaws.autoscaling#LoadBalancerTargetGroupState$State */ => {
- let var_164 =
+ let var_166 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -3990,7 +4010,7 @@ pub fn deser_structure_load_balancer_target_group_state(
?
)
;
- builder = builder.set_state(var_164);
+ builder = builder.set_state(var_166);
}
,
_ => {}
@@ -4007,7 +4027,7 @@ pub fn deser_structure_metric_collection_type(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("Metric") /* Metric com.amazonaws.autoscaling#MetricCollectionType$Metric */ => {
- let var_165 =
+ let var_167 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4016,7 +4036,7 @@ pub fn deser_structure_metric_collection_type(
?
)
;
- builder = builder.set_metric(var_165);
+ builder = builder.set_metric(var_167);
}
,
_ => {}
@@ -4033,7 +4053,7 @@ pub fn deser_structure_metric_granularity_type(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("Granularity") /* Granularity com.amazonaws.autoscaling#MetricGranularityType$Granularity */ => {
- let var_166 =
+ let var_168 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4042,7 +4062,7 @@ pub fn deser_structure_metric_granularity_type(
?
)
;
- builder = builder.set_granularity(var_166);
+ builder = builder.set_granularity(var_168);
}
,
_ => {}
@@ -4059,7 +4079,7 @@ pub fn deser_structure_notification_configuration(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("AutoScalingGroupName") /* AutoScalingGroupName com.amazonaws.autoscaling#NotificationConfiguration$AutoScalingGroupName */ => {
- let var_167 =
+ let var_169 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4068,11 +4088,11 @@ pub fn deser_structure_notification_configuration(
?
)
;
- builder = builder.set_auto_scaling_group_name(var_167);
+ builder = builder.set_auto_scaling_group_name(var_169);
}
,
s if s.matches("TopicARN") /* TopicARN com.amazonaws.autoscaling#NotificationConfiguration$TopicARN */ => {
- let var_168 =
+ let var_170 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4081,11 +4101,11 @@ pub fn deser_structure_notification_configuration(
?
)
;
- builder = builder.set_topic_arn(var_168);
+ builder = builder.set_topic_arn(var_170);
}
,
s if s.matches("NotificationType") /* NotificationType com.amazonaws.autoscaling#NotificationConfiguration$NotificationType */ => {
- let var_169 =
+ let var_171 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4094,7 +4114,7 @@ pub fn deser_structure_notification_configuration(
?
)
;
- builder = builder.set_notification_type(var_169);
+ builder = builder.set_notification_type(var_171);
}
,
_ => {}
@@ -4111,7 +4131,7 @@ pub fn deser_structure_scaling_policy(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("AutoScalingGroupName") /* AutoScalingGroupName com.amazonaws.autoscaling#ScalingPolicy$AutoScalingGroupName */ => {
- let var_170 =
+ let var_172 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4120,11 +4140,11 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_auto_scaling_group_name(var_170);
+ builder = builder.set_auto_scaling_group_name(var_172);
}
,
s if s.matches("PolicyName") /* PolicyName com.amazonaws.autoscaling#ScalingPolicy$PolicyName */ => {
- let var_171 =
+ let var_173 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4133,11 +4153,11 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_policy_name(var_171);
+ builder = builder.set_policy_name(var_173);
}
,
s if s.matches("PolicyARN") /* PolicyARN com.amazonaws.autoscaling#ScalingPolicy$PolicyARN */ => {
- let var_172 =
+ let var_174 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4146,11 +4166,11 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_policy_arn(var_172);
+ builder = builder.set_policy_arn(var_174);
}
,
s if s.matches("PolicyType") /* PolicyType com.amazonaws.autoscaling#ScalingPolicy$PolicyType */ => {
- let var_173 =
+ let var_175 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4159,11 +4179,11 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_policy_type(var_173);
+ builder = builder.set_policy_type(var_175);
}
,
s if s.matches("AdjustmentType") /* AdjustmentType com.amazonaws.autoscaling#ScalingPolicy$AdjustmentType */ => {
- let var_174 =
+ let var_176 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4172,11 +4192,11 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_adjustment_type(var_174);
+ builder = builder.set_adjustment_type(var_176);
}
,
s if s.matches("MinAdjustmentStep") /* MinAdjustmentStep com.amazonaws.autoscaling#ScalingPolicy$MinAdjustmentStep */ => {
- let var_175 =
+ let var_177 =
Some(
{
                    <i32 as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -4187,11 +4207,11 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_min_adjustment_step(var_175);
+ builder = builder.set_min_adjustment_step(var_177);
}
,
s if s.matches("MinAdjustmentMagnitude") /* MinAdjustmentMagnitude com.amazonaws.autoscaling#ScalingPolicy$MinAdjustmentMagnitude */ => {
- let var_176 =
+ let var_178 =
Some(
{
                    <i32 as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -4202,11 +4222,11 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_min_adjustment_magnitude(var_176);
+ builder = builder.set_min_adjustment_magnitude(var_178);
}
,
s if s.matches("ScalingAdjustment") /* ScalingAdjustment com.amazonaws.autoscaling#ScalingPolicy$ScalingAdjustment */ => {
- let var_177 =
+ let var_179 =
Some(
{
                    <i32 as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -4217,11 +4237,11 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_scaling_adjustment(var_177);
+ builder = builder.set_scaling_adjustment(var_179);
}
,
s if s.matches("Cooldown") /* Cooldown com.amazonaws.autoscaling#ScalingPolicy$Cooldown */ => {
- let var_178 =
+ let var_180 =
Some(
{
                    <i32 as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -4232,21 +4252,21 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_cooldown(var_178);
+ builder = builder.set_cooldown(var_180);
}
,
s if s.matches("StepAdjustments") /* StepAdjustments com.amazonaws.autoscaling#ScalingPolicy$StepAdjustments */ => {
- let var_179 =
+ let var_181 =
Some(
crate::xml_deser::deser_list_step_adjustments(&mut tag)
?
)
;
- builder = builder.set_step_adjustments(var_179);
+ builder = builder.set_step_adjustments(var_181);
}
,
s if s.matches("MetricAggregationType") /* MetricAggregationType com.amazonaws.autoscaling#ScalingPolicy$MetricAggregationType */ => {
- let var_180 =
+ let var_182 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4255,11 +4275,11 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_metric_aggregation_type(var_180);
+ builder = builder.set_metric_aggregation_type(var_182);
}
,
s if s.matches("EstimatedInstanceWarmup") /* EstimatedInstanceWarmup com.amazonaws.autoscaling#ScalingPolicy$EstimatedInstanceWarmup */ => {
- let var_181 =
+ let var_183 =
Some(
{
                    <i32 as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -4270,31 +4290,31 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_estimated_instance_warmup(var_181);
+ builder = builder.set_estimated_instance_warmup(var_183);
}
,
s if s.matches("Alarms") /* Alarms com.amazonaws.autoscaling#ScalingPolicy$Alarms */ => {
- let var_182 =
+ let var_184 =
Some(
crate::xml_deser::deser_list_alarms(&mut tag)
?
)
;
- builder = builder.set_alarms(var_182);
+ builder = builder.set_alarms(var_184);
}
,
s if s.matches("TargetTrackingConfiguration") /* TargetTrackingConfiguration com.amazonaws.autoscaling#ScalingPolicy$TargetTrackingConfiguration */ => {
- let var_183 =
+ let var_185 =
Some(
crate::xml_deser::deser_structure_target_tracking_configuration(&mut tag)
?
)
;
- builder = builder.set_target_tracking_configuration(var_183);
+ builder = builder.set_target_tracking_configuration(var_185);
}
,
s if s.matches("Enabled") /* Enabled com.amazonaws.autoscaling#ScalingPolicy$Enabled */ => {
- let var_184 =
+ let var_186 =
Some(
{
                    <bool as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -4305,17 +4325,17 @@ pub fn deser_structure_scaling_policy(
?
)
;
- builder = builder.set_enabled(var_184);
+ builder = builder.set_enabled(var_186);
}
,
s if s.matches("PredictiveScalingConfiguration") /* PredictiveScalingConfiguration com.amazonaws.autoscaling#ScalingPolicy$PredictiveScalingConfiguration */ => {
- let var_185 =
+ let var_187 =
Some(
crate::xml_deser::deser_structure_predictive_scaling_configuration(&mut tag)
?
)
;
- builder = builder.set_predictive_scaling_configuration(var_185);
+ builder = builder.set_predictive_scaling_configuration(var_187);
}
,
_ => {}
@@ -4332,7 +4352,7 @@ pub fn deser_structure_process_type(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("ProcessName") /* ProcessName com.amazonaws.autoscaling#ProcessType$ProcessName */ => {
- let var_186 =
+ let var_188 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4341,7 +4361,7 @@ pub fn deser_structure_process_type(
?
)
;
- builder = builder.set_process_name(var_186);
+ builder = builder.set_process_name(var_188);
}
,
_ => {}
@@ -4358,7 +4378,7 @@ pub fn deser_structure_scheduled_update_group_action(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("AutoScalingGroupName") /* AutoScalingGroupName com.amazonaws.autoscaling#ScheduledUpdateGroupAction$AutoScalingGroupName */ => {
- let var_187 =
+ let var_189 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4367,11 +4387,11 @@ pub fn deser_structure_scheduled_update_group_action(
?
)
;
- builder = builder.set_auto_scaling_group_name(var_187);
+ builder = builder.set_auto_scaling_group_name(var_189);
}
,
s if s.matches("ScheduledActionName") /* ScheduledActionName com.amazonaws.autoscaling#ScheduledUpdateGroupAction$ScheduledActionName */ => {
- let var_188 =
+ let var_190 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4380,11 +4400,11 @@ pub fn deser_structure_scheduled_update_group_action(
?
)
;
- builder = builder.set_scheduled_action_name(var_188);
+ builder = builder.set_scheduled_action_name(var_190);
}
,
s if s.matches("ScheduledActionARN") /* ScheduledActionARN com.amazonaws.autoscaling#ScheduledUpdateGroupAction$ScheduledActionARN */ => {
- let var_189 =
+ let var_191 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4393,11 +4413,11 @@ pub fn deser_structure_scheduled_update_group_action(
?
)
;
- builder = builder.set_scheduled_action_arn(var_189);
+ builder = builder.set_scheduled_action_arn(var_191);
}
,
s if s.matches("Time") /* Time com.amazonaws.autoscaling#ScheduledUpdateGroupAction$Time */ => {
- let var_190 =
+ let var_192 =
Some(
smithy_types::Instant::from_str(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4407,11 +4427,11 @@ pub fn deser_structure_scheduled_update_group_action(
?
)
;
- builder = builder.set_time(var_190);
+ builder = builder.set_time(var_192);
}
,
s if s.matches("StartTime") /* StartTime com.amazonaws.autoscaling#ScheduledUpdateGroupAction$StartTime */ => {
- let var_191 =
+ let var_193 =
Some(
smithy_types::Instant::from_str(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4421,11 +4441,11 @@ pub fn deser_structure_scheduled_update_group_action(
?
)
;
- builder = builder.set_start_time(var_191);
+ builder = builder.set_start_time(var_193);
}
,
s if s.matches("EndTime") /* EndTime com.amazonaws.autoscaling#ScheduledUpdateGroupAction$EndTime */ => {
- let var_192 =
+ let var_194 =
Some(
smithy_types::Instant::from_str(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4435,11 +4455,11 @@ pub fn deser_structure_scheduled_update_group_action(
?
)
;
- builder = builder.set_end_time(var_192);
+ builder = builder.set_end_time(var_194);
}
,
s if s.matches("Recurrence") /* Recurrence com.amazonaws.autoscaling#ScheduledUpdateGroupAction$Recurrence */ => {
- let var_193 =
+ let var_195 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4448,11 +4468,11 @@ pub fn deser_structure_scheduled_update_group_action(
?
)
;
- builder = builder.set_recurrence(var_193);
+ builder = builder.set_recurrence(var_195);
}
,
s if s.matches("MinSize") /* MinSize com.amazonaws.autoscaling#ScheduledUpdateGroupAction$MinSize */ => {
- let var_194 =
+ let var_196 =
Some(
{
                    <i32 as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -4463,11 +4483,11 @@ pub fn deser_structure_scheduled_update_group_action(
?
)
;
- builder = builder.set_min_size(var_194);
+ builder = builder.set_min_size(var_196);
}
,
s if s.matches("MaxSize") /* MaxSize com.amazonaws.autoscaling#ScheduledUpdateGroupAction$MaxSize */ => {
- let var_195 =
+ let var_197 =
Some(
{
                    <i32 as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -4478,11 +4498,11 @@ pub fn deser_structure_scheduled_update_group_action(
?
)
;
- builder = builder.set_max_size(var_195);
+ builder = builder.set_max_size(var_197);
}
,
s if s.matches("DesiredCapacity") /* DesiredCapacity com.amazonaws.autoscaling#ScheduledUpdateGroupAction$DesiredCapacity */ => {
- let var_196 =
+ let var_198 =
Some(
{
                    <i32 as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -4493,11 +4513,11 @@ pub fn deser_structure_scheduled_update_group_action(
?
)
;
- builder = builder.set_desired_capacity(var_196);
+ builder = builder.set_desired_capacity(var_198);
}
,
s if s.matches("TimeZone") /* TimeZone com.amazonaws.autoscaling#ScheduledUpdateGroupAction$TimeZone */ => {
- let var_197 =
+ let var_199 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4506,7 +4526,7 @@ pub fn deser_structure_scheduled_update_group_action(
?
)
;
- builder = builder.set_time_zone(var_197);
+ builder = builder.set_time_zone(var_199);
}
,
_ => {}
@@ -4523,7 +4543,7 @@ pub fn deser_structure_tag_description(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("ResourceId") /* ResourceId com.amazonaws.autoscaling#TagDescription$ResourceId */ => {
- let var_198 =
+ let var_200 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4532,11 +4552,11 @@ pub fn deser_structure_tag_description(
?
)
;
- builder = builder.set_resource_id(var_198);
+ builder = builder.set_resource_id(var_200);
}
,
s if s.matches("ResourceType") /* ResourceType com.amazonaws.autoscaling#TagDescription$ResourceType */ => {
- let var_199 =
+ let var_201 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4545,11 +4565,11 @@ pub fn deser_structure_tag_description(
?
)
;
- builder = builder.set_resource_type(var_199);
+ builder = builder.set_resource_type(var_201);
}
,
s if s.matches("Key") /* Key com.amazonaws.autoscaling#TagDescription$Key */ => {
- let var_200 =
+ let var_202 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4558,11 +4578,11 @@ pub fn deser_structure_tag_description(
?
)
;
- builder = builder.set_key(var_200);
+ builder = builder.set_key(var_202);
}
,
s if s.matches("Value") /* Value com.amazonaws.autoscaling#TagDescription$Value */ => {
- let var_201 =
+ let var_203 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4571,11 +4591,11 @@ pub fn deser_structure_tag_description(
?
)
;
- builder = builder.set_value(var_201);
+ builder = builder.set_value(var_203);
}
,
s if s.matches("PropagateAtLaunch") /* PropagateAtLaunch com.amazonaws.autoscaling#TagDescription$PropagateAtLaunch */ => {
- let var_202 =
+ let var_204 =
Some(
{
                    <bool as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -4586,7 +4606,7 @@ pub fn deser_structure_tag_description(
?
)
;
- builder = builder.set_propagate_at_launch(var_202);
+ builder = builder.set_propagate_at_launch(var_204);
}
,
_ => {}
@@ -4603,7 +4623,7 @@ pub fn deser_structure_instance(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("InstanceId") /* InstanceId com.amazonaws.autoscaling#Instance$InstanceId */ => {
- let var_203 =
+ let var_205 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4612,11 +4632,11 @@ pub fn deser_structure_instance(
?
)
;
- builder = builder.set_instance_id(var_203);
+ builder = builder.set_instance_id(var_205);
}
,
s if s.matches("InstanceType") /* InstanceType com.amazonaws.autoscaling#Instance$InstanceType */ => {
- let var_204 =
+ let var_206 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4625,11 +4645,11 @@ pub fn deser_structure_instance(
?
)
;
- builder = builder.set_instance_type(var_204);
+ builder = builder.set_instance_type(var_206);
}
,
s if s.matches("AvailabilityZone") /* AvailabilityZone com.amazonaws.autoscaling#Instance$AvailabilityZone */ => {
- let var_205 =
+ let var_207 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4638,11 +4658,11 @@ pub fn deser_structure_instance(
?
)
;
- builder = builder.set_availability_zone(var_205);
+ builder = builder.set_availability_zone(var_207);
}
,
s if s.matches("LifecycleState") /* LifecycleState com.amazonaws.autoscaling#Instance$LifecycleState */ => {
- let var_206 =
+ let var_208 =
Some(
                Result::<crate::model::LifecycleState, smithy_xml::decode::XmlError>::Ok(
crate::model::LifecycleState::from(
@@ -4652,11 +4672,11 @@ pub fn deser_structure_instance(
?
)
;
- builder = builder.set_lifecycle_state(var_206);
+ builder = builder.set_lifecycle_state(var_208);
}
,
s if s.matches("HealthStatus") /* HealthStatus com.amazonaws.autoscaling#Instance$HealthStatus */ => {
- let var_207 =
+ let var_209 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4665,11 +4685,11 @@ pub fn deser_structure_instance(
?
)
;
- builder = builder.set_health_status(var_207);
+ builder = builder.set_health_status(var_209);
}
,
s if s.matches("LaunchConfigurationName") /* LaunchConfigurationName com.amazonaws.autoscaling#Instance$LaunchConfigurationName */ => {
- let var_208 =
+ let var_210 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4678,21 +4698,21 @@ pub fn deser_structure_instance(
?
)
;
- builder = builder.set_launch_configuration_name(var_208);
+ builder = builder.set_launch_configuration_name(var_210);
}
,
s if s.matches("LaunchTemplate") /* LaunchTemplate com.amazonaws.autoscaling#Instance$LaunchTemplate */ => {
- let var_209 =
+ let var_211 =
Some(
crate::xml_deser::deser_structure_launch_template_specification(&mut tag)
?
)
;
- builder = builder.set_launch_template(var_209);
+ builder = builder.set_launch_template(var_211);
}
,
s if s.matches("ProtectedFromScaleIn") /* ProtectedFromScaleIn com.amazonaws.autoscaling#Instance$ProtectedFromScaleIn */ => {
- let var_210 =
+ let var_212 =
Some(
{
                    <bool as smithy_types::primitive::Parse>::parse_smithy_primitive(
@@ -4703,11 +4723,11 @@ pub fn deser_structure_instance(
?
)
;
- builder = builder.set_protected_from_scale_in(var_210);
+ builder = builder.set_protected_from_scale_in(var_212);
}
,
s if s.matches("WeightedCapacity") /* WeightedCapacity com.amazonaws.autoscaling#Instance$WeightedCapacity */ => {
- let var_211 =
+ let var_213 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4716,7 +4736,7 @@ pub fn deser_structure_instance(
?
)
;
- builder = builder.set_weighted_capacity(var_211);
+ builder = builder.set_weighted_capacity(var_213);
}
,
_ => {}
@@ -4733,33 +4753,33 @@ pub fn deser_structure_load_forecast(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("Timestamps") /* Timestamps com.amazonaws.autoscaling#LoadForecast$Timestamps */ => {
- let var_212 =
+ let var_214 =
Some(
crate::xml_deser::deser_list_predictive_scaling_forecast_timestamps(&mut tag)
?
)
;
- builder = builder.set_timestamps(var_212);
+ builder = builder.set_timestamps(var_214);
}
,
s if s.matches("Values") /* Values com.amazonaws.autoscaling#LoadForecast$Values */ => {
- let var_213 =
+ let var_215 =
Some(
crate::xml_deser::deser_list_predictive_scaling_forecast_values(&mut tag)
?
)
;
- builder = builder.set_values(var_213);
+ builder = builder.set_values(var_215);
}
,
s if s.matches("MetricSpecification") /* MetricSpecification com.amazonaws.autoscaling#LoadForecast$MetricSpecification */ => {
- let var_214 =
+ let var_216 =
Some(
crate::xml_deser::deser_structure_predictive_scaling_metric_specification(&mut tag)
?
)
;
- builder = builder.set_metric_specification(var_214);
+ builder = builder.set_metric_specification(var_216);
}
,
_ => {}
@@ -4823,7 +4843,7 @@ pub fn deser_structure_alarm(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("AlarmName") /* AlarmName com.amazonaws.autoscaling#Alarm$AlarmName */ => {
- let var_215 =
+ let var_217 =
Some(
                Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4832,11 +4852,11 @@ pub fn deser_structure_alarm(
?
)
;
- builder = builder.set_alarm_name(var_215);
+ builder = builder.set_alarm_name(var_217);
}
,
s if s.matches("AlarmARN") /* AlarmARN com.amazonaws.autoscaling#Alarm$AlarmARN */ => {
- let var_216 =
+ let var_218 =
Some(
Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4845,7 +4865,7 @@ pub fn deser_structure_alarm(
?
)
;
- builder = builder.set_alarm_arn(var_216);
+ builder = builder.set_alarm_arn(var_218);
}
,
_ => {}
@@ -4862,7 +4882,7 @@ pub fn deser_structure_launch_template_specification(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("LaunchTemplateId") /* LaunchTemplateId com.amazonaws.autoscaling#LaunchTemplateSpecification$LaunchTemplateId */ => {
- let var_217 =
+ let var_219 =
Some(
Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4871,11 +4891,11 @@ pub fn deser_structure_launch_template_specification(
?
)
;
- builder = builder.set_launch_template_id(var_217);
+ builder = builder.set_launch_template_id(var_219);
}
,
s if s.matches("LaunchTemplateName") /* LaunchTemplateName com.amazonaws.autoscaling#LaunchTemplateSpecification$LaunchTemplateName */ => {
- let var_218 =
+ let var_220 =
Some(
Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4884,11 +4904,11 @@ pub fn deser_structure_launch_template_specification(
?
)
;
- builder = builder.set_launch_template_name(var_218);
+ builder = builder.set_launch_template_name(var_220);
}
,
s if s.matches("Version") /* Version com.amazonaws.autoscaling#LaunchTemplateSpecification$Version */ => {
- let var_219 =
+ let var_221 =
Some(
Result::<std::string::String, smithy_xml::decode::XmlError>::Ok(
smithy_xml::decode::try_data(&mut tag)?.as_ref()
@@ -4897,7 +4917,7 @@ pub fn deser_structure_launch_template_specification(
?
)
;
- builder = builder.set_version(var_219);
+ builder = builder.set_version(var_221);
}
,
_ => {}
@@ -4914,23 +4934,23 @@ pub fn deser_structure_mixed_instances_policy(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("LaunchTemplate") /* LaunchTemplate com.amazonaws.autoscaling#MixedInstancesPolicy$LaunchTemplate */ => {
- let var_220 =
+ let var_222 =
Some(
crate::xml_deser::deser_structure_launch_template(&mut tag)
?
)
;
- builder = builder.set_launch_template(var_220);
+ builder = builder.set_launch_template(var_222);
}
,
s if s.matches("InstancesDistribution") /* InstancesDistribution com.amazonaws.autoscaling#MixedInstancesPolicy$InstancesDistribution */ => {
- let var_221 =
+ let var_223 =
Some(
crate::xml_deser::deser_structure_instances_distribution(&mut tag)
?
)
;
- builder = builder.set_instances_distribution(var_221);
+ builder = builder.set_instances_distribution(var_223);
}
,
_ => {}
@@ -5051,23 +5071,139 @@ pub fn deser_structure_instance_refresh_progress_details(
while let Some(mut tag) = decoder.next_tag() {
match tag.start_el() {
s if s.matches("LivePoolProgress") /* LivePoolProgress com.amazonaws.autoscaling#InstanceRefreshProgressDetails$LivePoolProgress */ => {
- let var_222 =
+ let var_224 =
Some(
crate::xml_deser::deser_structure_instance_refresh_live_pool_progress(&mut tag)
?
)
;
- builder = builder.set_live_pool_progress(var_222);
+ builder = builder.set_live_pool_progress(var_224);
}
,
s if s.matches("WarmPoolProgress") /* WarmPoolProgress com.amazonaws.autoscaling#InstanceRefreshProgressDetails$WarmPoolProgress */ => {
- let var_223 =
+ let var_225 =
Some(
crate::xml_deser::deser_structure_instance_refresh_warm_pool_progress(&mut tag)
?
)
;
- builder = builder.set_warm_pool_progress(var_223);
+ builder = builder.set_warm_pool_progress(var_225);
+ }
+ ,
+ _ => {}
+ }
+ }
+ Ok(builder.build())
+}
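
(Editorial note, not part of the patch: every deser_structure_* function this
diff touches follows the same generated shape, including the new
deser_structure_refresh_preferences added below. A minimal hand-written sketch
of that shape follows, assuming only the smithy_xml decode API already used in
this file — ScopedDecoder::next_tag, start_el().matches, and try_data — and a
hypothetical crate::model::Example shape with a set_name builder method.)

    pub fn deser_structure_example(
        decoder: &mut smithy_xml::decode::ScopedDecoder,
    ) -> Result<crate::model::Example, smithy_xml::decode::XmlError> {
        // `Example` and its builder are hypothetical stand-ins for any
        // generated model shape in this file.
        let mut builder = crate::model::Example::builder();
        // Visit each child element of the current XML scope in document order.
        while let Some(mut tag) = decoder.next_tag() {
            match tag.start_el() {
                // String member: read the element's text content and copy it.
                s if s.matches("Name") => {
                    let value = smithy_xml::decode::try_data(&mut tag)?
                        .as_ref()
                        .to_string();
                    builder = builder.set_name(Some(value));
                }
                // Elements the client does not recognize are skipped, so new
                // fields in the service response do not fail the parse.
                _ => {}
            }
        }
        Ok(builder.build())
    }

(The catch-all `_ => {}` arm, visible throughout the diff, is what keeps the
generated deserializers forward-compatible with additions to the service's
response shapes.)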
+
+pub fn deser_structure_refresh_preferences(
+ decoder: &mut smithy_xml::decode::ScopedDecoder,
+) -> Result<crate::model::RefreshPreferences, smithy_xml::decode::XmlError> {
+ #[allow(unused_mut)]
+ let mut builder = crate::model::RefreshPreferences::builder();
+ while let Some(mut tag) = decoder.next_tag() {
+ match tag.start_el() {
+ s if s.matches("MinHealthyPercentage") /* MinHealthyPercentage com.amazonaws.autoscaling#RefreshPreferences$MinHealthyPercentage */ => {
+ let var_226 =
+ Some(
+ {
+ <i32 as smithy_types::primitive::Parse>::parse_smithy_primitive(
+ smithy_xml::decode::try_data(&mut tag)?.as_ref()
+ )
+ .map_err(|_|smithy_xml::decode::XmlError::custom("expected (integer: `com.amazonaws.autoscaling#IntPercent`)"))
+ }
+ ?
+ )
+ ;
+ builder = builder.set_min_healthy_percentage(var_226);
+ }
+ ,
+ s if s.matches("InstanceWarmup") /* InstanceWarmup com.amazonaws.autoscaling#RefreshPreferences$InstanceWarmup */ => {
+ let var_227 =
+ Some(
+ {
+